[Binary archive: tar bundle containing var/home/core/zuul-output/, var/home/core/zuul-output/logs/, and var/home/core/zuul-output/logs/kubelet.log.gz (gzip-compressed kubelet log). The compressed payload is not recoverable as text.]
G!n!;,7l%CixճR,DsN_G6eRDk|P@)+GĶ[֥Hmy CWW5uDkġnDXKWGHWFYn>A2S?7^ O ?i]YFygS&<toDߍ)9Z "WlEcCl "Z\%-!ZKn)th9tBZtlJC,/Z1{rgZc;j=ԕXDKW6=J B4bM+DkȡLftu739fb <kbgXcްՕH0RJ-Bm$$19 x.F*=Oh<%Euf {4.U +KHU҅YFRi*$Ws6ʴ<-_֕TujOmUzȕ啑+O+^<"\mJHX 2(ka;b"n qzG&; B?W_?: Lgi?/B' ޼ 31g]?kw{A~zo,oR>mj"gOd,ԧNW&)r䣳 e 7]F°ہ{OZ1bnu5޼[}8hoKW(f?R8ixhM'MMrOl8hn[{Mwl 5<򼟞6z0 _zr…46R3q;f;K 8# O<5o_v7Eh0A9S184." M  $1"1 .F ..\w2^?b*l3 دڂ*VcpVI DpC@aIA s$1 KE{vx~xWf{Čѽyo[j{~su{so &8NG=(!Ȋ$ <88*VcHj+qpX % >AK,(Rr(J C#)[cGehy<8hcc2aL@ma1^D[ Or$bj_YG?zˣ֫f1rzÙ7/Uan7n^m]tkm)TCkV2maJsD-4b H, B)bXhQK CrcJ8d"N@Ƹ%JلDy}R=F)Q g: WKC$W@% 9RCw{0AkxH\+hp,B($[3;:k$ZzDih>pӸ|;&8i@m4~jEq6/k:iqWM~8uMnzp/s ]R@Ő9{.|1v:}wzyv9׿t_e۳A#(p} 0: 휚?C/TA}9:=ӿ'ZΩ_ߩ8pzvn7 Y܆~c¶z݋F"oo  ~zfQam~<7 I;kY د߱YwkbxzV-[#UE8Ͽ МuB}, u:|a,hiu .\qԟҹyh8{ao ;p{eȯ pxݎhl'{i=9_+1bpϭu>A/J~1t?/ tӝ8kˑko~@l2s_$'F&"Wh01?l1In,\E*Ɇ-L`IR!cS&:g;I1lPF#ʂs>|t rW LxσCs|"\w.WnqOa _aNnv7m䔑a4xR*ޮoޮܶ𡓟ܙoW)߮|5ɘg^wy=&7&.wHTݻŕi߽{JwK8Uz)DWIDR"WVE+O뙍(WR1x κ+pUӪ_&ZP ɕ_1MpD2r)e*wJk*SY PexZ]<%G\mJ>r iC4{̓Ry;rr !W\Z5=֜/fߥcÝwg-Ë.WZPRrȕǝwr-\yJUQB;.\1w-;h·<µ\-\qD \y`%*#WWRL]<@\-\q&-↳FcQ<&:;s)N];lw+/edPm0{ee8#2$d0sDҩj%&^X/ej=Mϯ \͓aRhuDE( שJJN LI0Ac x=(Dd^|vzyvd';iz$(0M |2UN>K>o'ED*@c4F1 R[/_Xy3ow%?Q (*g (E'i~83zr"GWi":䫏pA>K*K I5-?f, !h _tA޿/388(zA`w|Qz&O_$ϲ1 Zzz qDUuhX)U0`0Ll̝ՈF1Lh~,類ErCA1aGj\$=>ӕ0~ yd Fcxr$xxrdo l B_aj+X}&L?8^1F1ߴ8kq`z BQup6hy'8H|NV}Rt%=m܌:Ӹo]en0]rbi ,T(cQ-ro<^ypj~`]ٙkj٫u=.E^ki8ilV̚0z\ڋ~:8+N\owzq!dO܆8;胺LQnt>_F_v㬟]u8 *d0Ą#"{u1΋%/G6-W̏֠~\Xӷ϶XFΪJl(+TH=ƦBNVȹޢ5si6;{]iY_|4}R:Iie&o.L>᭚74sNcN¯K< n> pe> bLNAʹU h-v-HVfHf|X.:b (lj ^ #3!]1x wPY8?1DotnSueޡPX ᝉ-3Բ{ kj ~N?`&vG]ۙ` VOR}lCTWyK@sivjJGWߏR5^7 \]F0+]F#tl{ JպSM]RW@0ز; wue{u!͗ 4SO1dz_܅?{v0k۳ Q28_aw;D ? .7!O  CEV*DA]"=SuD-x|G5VB8Ფ y67:Ffp+r~80wփ~;tC7"C^4|l1\Gۈh;}"9pib1|Szdy0@3oLHEY7xARh}dڑSټ([f;|ӍNG ^p@~^ V}8ak@9Y4#.N7311ތ; %vJV]2;J|TlݡH j~XR>2! 6rо)nnc9{`'6 }LE=}A-L=ݸ*q2tJZ NdAߝnhv~? 
8Ѱ;&[e};JK>wǤ9)NATڿjW 0gL E @~$=̼ )chW( $#,6df7fgڪXs6,s1˾pfZAafmt,[ rՆDM =߰<֛0Md@̦œ fC3WDb*bJ}ߦQys"H[t*I~Ϡxa1 _)< Q!^*y(LbI\%ֱ>ct\rhEYMPd|6@UVS w (vs J<)a$4#A@0pGJ3Q=UXDǾқbj$r➇}1!(2ʼna4*d/}"҄+DqP*13UTKlj Y4dwgl@{~]+pe(i.l6V,-_n/ ēT4o>Z޴AÊi/7E}-\s1BKJCxDC_8a\(X~'!"(UD9$yAvF gj63W1Q6&i2O`J|KfBlaXd ĭ@~ЉNlMаtbwX?%!\@H+V"" (R>JqHB$߉{iBX=|őOf %@JTQs R C{+s8&KUgtO~-x\ka"Uz/N0jwB̪|"O r`9W`H;>Õ"d_0.s#fmGV}6njM%Lp<oAYeCc˂ӫa JKgjL㷝u瓹!u׍(xa?8 NlӴ%NNu[&-8@Ԓ S8>-tS3rQ2.m;aw ӓg/N_.^\_v:1:C*1 o7"0N^Z򦺆d]Pk|Er8I6zҽCP8d6 Ĩ$r+@9M8ḉ*(\`D(59>(5&*DXTS'l hհ*n8vaHm(`0O3`쇘E!RצhO@8~FLE`o^)8 D8 ,I PLb.Fm{G2Mڥ}L8 eB!ZtoMK|g6XÓ5(CMXI4C1ǦQ,XǑ>/΍@qPB pP|+U#Ĕct)r ݢH"oouơR 3y*^_#X:  *pp(DBg\vUa|TX*_J1@K LrQL;SjVdv7ڽSKƒW/]fb/N-ܸJ9u3v}(})þRG1qr@|CKH!bRJJbT>($y{}p([+bmX.vj3('O߱jvFbnn=b P 1/0ͦ2J/V* [vRA5{H(pk< \vuHKxi]H!4OKl8h[1hڵL@_JB׮SqW1\ gG_0sӡd:P4:I3 ()- ?@:mCGE;75o bܙݎ\ փ>rŸcqȸr?DŇ&ĚIcWc,D xܘp{w43]pZ0+C@$"1+i"iXq9i$`Rңqw4R}`T'M}g;[-?[usw{w0HPh#*ĸ9̪b0k l-!x')Z*rV-<%l٫!1tߐ;փUY4vnrNiXm3{sQ AhS(Xc2'Ttڙ rg2n[6 0E q,=y,P<ܤM8HfAH~vS+fMf ۚ ڳT1Y%ZW7aUm4wٹ+xXs뜥ņgf?0Z'wjƍJmk-qN/>9yKzzPJ҄C*8-mU(ʠC&՟}{Nb?8V6cm4d& fјi9%93ulĶFM]is#_4ETdAo+!.r{0o6:]q@'+FxVc.hgx{~߸n^vR8wk95,4ˍ.VnMܵ^a~S@%Bvĺo Q{MdBτ Q@ϭs=|+K sIúRV '6&6ܜtX!<ի]]# F&t)N"܅\3z%v"=biܪ| IM8siDS+R* KdS^:BOg{iD&uBG6GDSkD I*#Fi3^u:,4pdt|JP,ٙ 0C.ٙ 0C-A`JAA`03v1^fItOL0EQUDo_rdA`C]-5v:?59*LcՂ)0n4y >{=mՖ}{P\Jn4i=q~7uFelL;WFeK,)D%v+[ʄ$:3ɂBt!cs$k N͙N @Gȱ坹5ݞ>p)mYze +c^h+B̕:TBH}ΩZ(PHK)kG_hoGm=d)ݴ#'MtbcIa mNƤSڎ6|} `GSvxs1a.Ug21]?;3yXK}rfM&Iv\;n/lJbk*cѿq{ΟCvcB阄:T9o_eߜ:q9}[Ԭc?%`^^>8z~B ]|3Nkx; f=?(h;18?}[KP37nMY֌v"2>4pF3zw7v\wҒk/Oہ=o4$w8:@Sq/6ݼuVfտoc?_MD]g|;~_;] SH+ZfbƞAP=-ڱ>㙉^QJ[jKؘ忿ԧ4D U<#3ϢN֤u`hKS5SR1ysc#LQT4)cww$( QT]0u!8bꧭ2C 6>"zʎΩhUVDq5VrRI2KShZ/L2`x ڔV7?uht?V"ODTUa(ѻ6MzͺA :7̮q`pvBdʿ'ώ޿=xv?telm܌> U% U^ÜŒ]JW"e~O6kɈ>޸ ڂ֔ djIWB\zz܅7Q:݇^7끁lz`,F5<'xח5/ P:>`uT*ZvjeKt"+Y‹ZQ5/ӛF|Us?9]PKB~&:dƅb˩ 3`ދYA0VW>U~?<'F7#Acn439+Xv7 |> 򃡒Ƀ()`)(hF2gD@v$>#Gq.|Д%`)uܴC:h5vD6e>. 
ެG͝ [uNl3W[ƑmLR~j iJVS?6ϛ] Ա|~J-vQq1`X-x|!$m//+U; 0R S4 w4댫L>4Ƭ.,!v uNws̡|bE/m0S<,Az}:+p{uvE|W") Փ{:ǥoCMc7c7^ ۋֶ J-h`g`4QP\J뭉o?|1+O蹝7Q0QƯޞqsc&5nzְƑ׷mR=E"q@g ~J÷ۇ-՘ 7֟C 7ၡ?o׌.VY@ߖ5fK4sV}2a<f *7 6OvE{.@έGKoP~+cQ7zzīJ%^u"zu.S'*ޔɯݽ.N] ~Ѽ;y}t!dKI›ʮAOr8h/.^d9~&NAIk '<0o9HAnt݅\Ǿ,@j~˗A'*#Kp֞/?&o}b =5(FQ#n3,YlrJ_צAo{(_Vlc)CLV^*4lz|6˶{F}=cAa~3@e?X׽Xmpnnia5Ha[iy+G]X['/w""sl9lN#(U2g]N#(jmO#(*9=FpLh^P('qMt-#[JA? vCc˭9Hc%o"ymF-2*ҋDp+sϮ'Í[[4Er)Y6Dv:j^;˥w:M=SEj CAs|<{Ar[{"9՜`Le.YMݱ^^:-m _4pf!6:-M";njC՟rr(%"x%k$l_\ { P;=eD7*u;BBwF&oۚ؈ab-wߕ_0-c:5⛖u Pb:’&k2,ӓ0=Y̵C1ٔ\P%K֡Ğu1#.8t {6r];@ަ4bJs~u 9@3qQ+uܪ]ni\pr0Y_u҄u"%_T)j˓U:7?S``E1K{q,#ޗ:bSJarwiogWͫgoOyLـG0mInԷ6TB4iX7qE|:>찐~0G(fc8%,JlK"{ |JB͉Jڳ__U#a*FfQUP߭4O>^5X7kQ`m{cC )LƂe:(ۭ־Gk&, CFȊe?ď"R̪,LP[Or`Zub%0S e8X:{% ~|nDp CfÀ" n4[ze~RՍJ~@/<㍯tu#0nJd'ΧA) LmZfȨg"̚>QIehxfzxV7F BE2yhs3fr;&H;=ltLamYAGϝE;>~AyYG9v[+ qN㍯sV{<}TU^⌴Dufnճ_yv81ӸDGo`\.Vk_縫_Ԯ 5D173;'b[xي<-ؼ.v88nijqNP .e4(ZBR'2"b+ 9H<{@z9.FV99bjS˴f.f4Vjl, P]D`%nx]T|JISpݵW?uoxu:xhBᦂԹL#Qڌ+Li/LKcmò`<.|ЏT3$ݬͨ8F/M=f\qcՙfE`'2!b#k g˝8hu^V?0E{Q,wVj8'jmPzí԰n][.k.K2:Iw=d;ٜYژZCfœ>RsUUA^ ǯ᫩ts pngo:]% Q0Y+.os$K#GLÕo*wbE|:jYPj r!B'a;d!a'!<' n bms6T˷53?kO)92'[@6[=Pjt՘a:aQ]9̫̋,+ ȴU):1] $vi=:ZXkky>7HI_54]EqZPdQ,3ѭx LfF<0ꋀ Jb@88;Tbǵ8e.vXXؖ915=Z. )1!(1gpv Ll/p8V1$[:r9CPc ɢ $v ȦT_뜉w%okw_mlz.7ͫ[w7Pd+9߭$=p&nSu!F-H[p'8Us݂YyҶl H:\0Rch6K!bTIǢ8:1`!它!(! 
pMGA}Q *-bZՏIYVYğ{˵=%nInG\)C\p^) ;X¶zq6 o}UiE˜;6W[#R@Jl!mƜJ*^7CXEZ"Юd GeUMWVx}v0qyU Æ׋ZA>Br ќR3"k$_t=]T%1*Rʐu/隅ƿ&\;zUW'p8ySc,4LMD.( @v:hܱ)Y&^z'l7{忮` Y=,`T'$g9>r\YCm>50$ϊgJdk>BvΛD2(J ꔪmH .g˿Kbv۝*&a d#JwLU)l!AI p!z` mΩ1MFq%Z𧞮yoŅap6I i^T_8lE9o)vhD-~m "3%}P]OخIm] &/^Ͱ"2N*.*Oĭ٠Ϩu}sk{%hzGZh Ix-Iꫦ{دPֻxI>3?aTҨ 2GX ?\8ًãW/ϓ^|uv g' #>;u$$|ǝqv+.B쁆/F ׃882_>Lj\Pk$/Qb@2_q`+@9};e_4Q9=8~_K8Ȟ qC4uUb !Fz=uM)3 C!%Qe ˅O 6𳺅g!gWXҕ(NׯoBծ(wP\"$gcC"QC =!q)sc =в7AݪgGJKhjUgHE" ؅~ qR)mC=lEjBNh<ß;r$Xv챂Ap*:+Lf3gr-ۥ{\ 0F3y{ŝc]`_w4r۸^4H/v]s6Wpwc ܽ\\ƺأ.=bأ.=bأ箋=buG]Q{uG]Q{uG]Q{uG]Q{5uG]Q{uR2N7K\(U:[0*A[7@n9at}2EY64~m5S'4LDZ# aWC f,t,1 nLSPl!HؚYo$g |vD ^Rhh-5-Tʼn%K)Sɘ|jqU(Ui*dŦ!7nM䞎u,z/cu4j: N(PHAM1Bb;& hcD|..@U 5 ./eFîVW S58JoR+e)ԵSkSӝohX.z= `'R#˫R4gTv]K5U6Z5.0-7F[M,B/ӊL<3QJF-/D8alִ8JU+掲) u-@ɓ85lA3|eO%~'!6Iӓ,KUPCz.HU8 WбLa/>2hXöF4bc6[ ZYɩ7yMDnniQ)ccejWŎ7^6eZCzu)$&kЙ۸n_JtDhHz`YԽ@Nmw1g8K(#yne]d .ppm"0F)D3 )8Ī&]q)x[JA3٘VFt>Өt\ۃE(|D?P2|)xZq ,/#9\/:!AfVȼ_PшlNՂDDz=$35=Z W\fmШYǬ6Ys⊉c VM>$Uxᮃ>ܮs'ԮGX$zԯ+ CH ^} .Zay=~&nBE:N*t$;33':#_Q,3]/E4f?Q.e\9ߊ]Y,ۭ ZP}3_uz]-o> ?|b|;p]bSA,8!<,T Uue̩-1 LWdU" (UGyl"ʿ?:K ~ƃNn$;N| ɅRz~Rɬ/8q%vV0Q6  ;O}\0vKp*#8,%C*,J=(A6M W3ĕ3z 12`aʜ|9&$_ftfYɶ)գ3/Wkmǵ̚H秘&kN΄}fڇ}ˠiSy=q | qQ\AȇЋ2uΝ|=$$](4?ȷɷxRr)]'2~ @pg C6gzVYHo"Avc1d01pY *Xc*ō]א ԆV-$>gَMq}'=Ϥ >6}fosJaN/(vTü.{KzRBf_klm>R/ MpU*mlޒ xA| j@ÐpO.vmCxR]%whFٲXSYm$O)C P2vH7GXe[J.خ'\$,x!X&eE$ M*d r@kB L1SV҃HSpA& B'0X=p= cbt=tYz^1a8y?"IHh0NA]HƖ3sYC}<^%McR(G&O ObsRI2IsھX4ADHiq+e|gkFY+l{oè%+'ar^H4|/Q+@ꍏC;|86BVY 3¥S).@ӌuրḒ]kV*Jxx)~nG-Й6@\̥Br S,Ssh&>mJMt]H$!To׫2u]Ԭfl31웠̫u~V b1'm $sX}M͵xMGBA@<,1+ lc[-`!2rշ7dլ=:skc\Ҍ^wR{h֦ܥ*_=X8/~Oːo,Crek,es+6YrFeS 2"gG㸺tL=Yf*Y@8P@t 5y8+yr2HDpi_^u&lO xy^r {lݚa֛W}\ӆQ@UIUtFP]  ]6rR^fA( feM'Z2"Ȳ5ԨpolEVrh\Mt]J:(뼙9JCp jx $ $ll4 F!8mm܈nx4Ui@l(/ոS0Z dyv$K\Hqi$DB"я{*;.v}#e 9Zܑ"n^FIxIv40\XA5]fPm-ѿu kF Uj 0+ XPj6ac㺢^_BbU^287`p"kXoܷ 3+L0m߬cߐ<֛͵axM0blm;\، W,$ IB)T{幞i%3h7_\K5õ/x,K/6.G,o5|6Lzh rÄ157ӵ dfk;,=ߡӝ2qA?jutQ;߽ e@f:0`]Ȋ+29cfRoZV5Ͼ1Z:) µ٦kNE&Jd 'I2M|9S@T>3䬘$lJۃ{đa@}!%f63 0-ϖ`Ҷle 3Y<~QOsY}60R'c_w&bD@e d=~:fj{k^;ZS܁M;7E 
XJ~\+nI9D~.?02R*d7Sm25l^_;>ȐƗ7yuO'goZgG)QHRl'?!y x'8@:Lf|d%l(nnu>< w[5\y|.Gzvrmlw[y{"cSMgWNJ藭/[n *=/çZ=4%^O?Wdr4¶ {YpuvfwSYL4޽d;8aC_JVk}Tpok&=)]Ƅ̣=MM,þAHp~4&{9t,DqM>FWQ|cͤJN]Aa==ڕ"퓽Ĩv.ŎGqcُп)6~^S>\Р9],[Cܜ{aYk['?ߎ=~s=Hi@SDI\aFJZ&3̄Fj-Ll>2L-N!My<Liw49yoqFfZ7CgɄ6OC4< f6ڧ>2=SrVKޕ|T鄖lǴI퓙EHoTb4Wٞ1H b˚1HKfThX\ffH|T1p3KsIWUh(Vo&+t3  n;ͽW.nOԬڵa[~Ԏj8\;i}:^By|Zt; ?F(.2[.FU/ Jov;3銔_NϢJB;`o9s;BOOBO9v{vRN΢2= ;5";$. ٤.μ]d3?:{<X&uߛhIkUtȻ;8\Ư޲Y8vM52frʫ mPrzA5&c`ܢ7՝7%6v_)-9u=*'d>gǃY,⨓c? SyA5u:۾a𮱷h6|O}J0 {`Ah59^1LF6]paKc,R)qlHꆄ>3%=<Ʊ\pl6f/-AuOFV!ڴ19f5&uu#f׿ (cE'Zчo]?AFnx|˨FYRd@&#:m?Z'ޝ9:. &9'P pGtt?zvx b;yj #F?3j.?U9K E6^H*u m(%RzUR?w/H"dtݨ}C}-}'5C"$EoSX.Sp\ۨuFbn75tx_S y@6%2ޠZE1(5 ?W vbw2}!&>R\6E+  !?vh`>X2GdKuӿ^]Nc_ǏDQ5v6=vA|Cj¨޵0m#UR#BN8AY~kP(NTK^wE9u,x-4 @lz{~ͳ G3cp>{ϽsG㱫k/ǿ Wxuκ֠`kը ?wgsׇ fz]NVn_}j5\x #stדф%kP\AK~q ¶''o9,OE+WqWB l•[Y`'|2!CM=Vk@pfLEh~]ԢΠܙAe3~5Q\q3qC`,\j3`@-iVi6@: Ax_M@f ۶?p ^c@F.~* {rV`ܾkP" =[A )74 788tm5]0\x*!$kͫVStdͨuč,G yW7:BS=2::zfu•TuJu*Yc|-C $ WO_{EӅ];eT]u~0B۝_Ĝfjo@T)ݮ.:i\e9>崈Lۅ@d2r+uQj h3 sx}wd i+jvR4h;&V"VaMnbk@cq \WQprG?έR ߚ||AWgB2&x;~ -sw$o.NCuq*q/8p@x{Cps4BD#H:D#H:D#H:D#H:6f:Lt#HęXێG+b [gcFЭŀΌwe1zڴ\㹖9[`>%#…TXʔ0J(i+R"Q8 iD M32;sY49!k`p'ˀQKd\~'_>(v&{ez܆ LH F@&_eJ"$TG"e@XETǒ2YLdD#F j(JP"92%Ƣ09~5a͸\VCGHp)#^h\|o4/]AooO=DzGЫ'cxL󏏟~`7ώ>~{tNjwϫʽYTm%$VYj%zfQ|ParT \`š I}F-Y<`£KPK]HA̟vE;# .c#Cc)C Zpr=6Qq"'mhswY*tÉw8%v Ee΄.Z&0Y%0˫C1 .+;XV`I_tmCB2nFKl䍀ZM3Ʀ!URiY ,R{4磳{27 νʰ #$j`[oU~y}c* |t۽lm</l%t'|YHHX5lnr \!' K_ٟYKǘ?k+7 /?!y *oe]ԧccKf6jL8Yf>ݸwx[aD^? Y/,leAE2 9?Mb7P*!B#TD3ML1y7UsYfUi*Jo'A38Q qXD`K3?:BBSAñ6D%"WeS0wcjt;7bid=1A,ZM;C9|R@o onqv|;msln޷?;,{ee7R{N!zYߑ0uX|r 3lv 1Zfxۧ{]+jWX8L!9^֗9:䗍,43̡-Om{Q zq+nsI`P~t8 o7WAR#&|!C#[CFbT%Su?.8g&oS2{9nN ӆUKz)m^ޡ8#f{=/n^Js.l]nj*MPs'mJ23@ZzaXd&$; wJ(aSūIvl層իjg56%݉{%χb@-#qb sV;>ʷnde1x:}n_+!a^㈜D.dS>1yje}^c3c+6jk vqpsC;/p |2S#WXβ<7R$.~/~ն"g4`)$TƂd#"Ad<EN%T8 'Hk+k/%%\2N,s$#CS`8c?C UXf&:WV0#(L;01K(&zGiy[/޾`\ k? 
qN&Ź֕`72* ex)nq]qՂQc`bҙVgpb#V5EQKXRL!ZGK!`0phpV{tHbaiBLvL] ::i;U\ʟ)QayX_-4`$_Wj:I-zyl̲hF)m eJ9#AXj{5Z}ILm0їר kd;>_Ӳg,;ɠַ|:iV]f-ۖ X&>})) o>(F@(0|ͫL.ƱG?I wfY7ie ½ Ӫ\U{I*YO!J$M4Zh !RRxaŔD)nsS cMB-q 0BBi*Q1j#HM BgF|_޸9a$fvekU%}u ?xd_Z$?7RD*,GߊXUa@5&!:`ѿ?vC."|﷗1RGyL`< # a0.J J1u&í<\KaY(V5iatHа\Qr{_}(Li346*jK݄@Mz+ׯ/a $I~Nz ߷5Z#Et5omJ}ϣ=UoGfếO+|}>xn.pj1x'T?v-P6nY6p'a!43*fm4OCu4ч鬐QŤgC qxyhsL5JYM6ڴWz<~G/KC (] <;r, فA?ȍ@M?}_=HouѴ!=iR=a;SOZ񶦆ꩩbS!'ȡ7v+]S?~3]vv;O9kґ/q$~ʁo˳q>rMѸM\Tdp60_%Z?ҞQ`C|!^|uO𑞫Ѷ͖̩l㈡DŠIy 1c]o:cd *0wHsIMԜcfa!EL`ڃa#1.h-(Jed8`Q$;wtY꿿r\C(Qk+#P@U! n0hnŽ6Brۨ>60MlFЉa9].M7%n#}QT F)ͰgdUC֦Spܛ5XXpsolȫ5D"03L!2("i˕CӞzоT|ݖ_U^D r Y+냉=ʈ$` Xy$RD۸^].w旓rWnhn|g9/_~a/ӘG"hʌ ;prtۨQ1,%! Ώ6H|9/fel ~[m x*0iτ1QD3MeOJa+IںbgV_yւ:ri} cNf2jS F4J->|4բX O"1),xGuS 4oxѦ`O3#rBo_0y>P`B<UX-zg՘IDk1nͭ\6c芓_NS/=vJ c&) ר]QQ?> IKB9U`˽)7^/l"{!T{=i0,o۵~CVώB$&֢¡U>o/"GkQl+brK%z#32x5sf"pa+(NՊޖw鴹qOOrWEb|!38_2Lhdd>OR$* %^Fґt[D"rR!A Z3ȅ4*L1S |@N@kC{ ҴrrbxlψǴ @ QzHLdu4(%LR#1s}C<8mഁӺ^sXL*,pQ'Uy!ϗ}_wЫwNٴϣWeL lqQ0X,U(lX&%FLKw x?_O@ǯG% ?}Iv\rPB2Bzf,8*DobIS2tj'FnN$$zHZA֭{ vx ihQ# ̮ZH*]k"+͊D8A% Q~B~P|.{06웥HQvln1/vdOkTPٲpGP D'!,"23e'j]ZA_c%S._9vC`>4M{NN-A;A+_-C~5ji]tE냔?Huf"[j` Imm~S#1Fj2UwȽ>]ݪ{P΃n ]]qL%Px6n3j;^:DAA::%uwY{g52r^w!|{ |A H8 -| HO.yˋ/Q^99Mk}C]ac2i}^^dF1-~t"E!P wzSc?!W>> CXN"Xwܗ]9cQոI}M!|TItX^z;ӏ5˫ y7^W'ٜڦGoE [i=fWߪ&^\դ0klo^$`"{& 5Qk/%%X .XZYh67I 𝆲Rr_a&zp +3mojz߰X// {=90ǘ[ |ǔ50ř104-o*>U{v*=Rmc{|r΢mېqvXK  r[!Aۺ A7tHJ"iQ{fx}QSLv X?_?=%Ď^&svZ|1XlάtRF]@u=}fY*;~$81&:M5eцu2X_j}U[Wy<$!x0k5f,`v2b=6MVHKDkɔUocP8܇t=Ȇ#%h.G _rw=92n9+ ,LeXmj6K;{f1fO+~dJ>t{mCypiy[Z]670]woT~ˀ.&spȻ觅v= awgǗ8KG^{0=,fO+?A' ?27v6] v|~4ׯ|`|ABz!YG=\#ж>P?s[["v 2'.j[Pix`,f(1xfmvwDx_M2$i hk-K(9q<$Jl=(ي&<H_QLDP}d^3,X.eGe^Id|st@UE L$J2M+!015 c0Q,W M lЁm7bVܽIHaT=yza![;K]A 0{Pj2bMywY*3jMOo>.[b[Ci l&m1G{9x}ëE7[i,e57nEt%f{maeþ1Z_wQ%>0J`y3`(|3UaA}A۵5ۋxm@}ϵP<^Ze |1imo=5rTj`-ŗY[ 9'Tyc2gH,V\fIcfȘ9@#C|H컞e`DƖA#a}ށgdU^|w2='IC7胹jwoV"v?A/@SY‘7Q[8|-Lը,,0ea@2p4*7)p)_;̺ܩ8 ,xg`fT0X+MؗU-ui ;c- F(^FU>fEnf<`Q^]{zn+tc|e3N? 
z 8NY !HwwZfwp`-]{[::ї~ od,!TS4v.fH+'` EĨ9ǒd۫$Wy/IdɴbnK⠕55;!gjg3?]~+\ך@Rz0+LcfكSieME<8`FjtzVR_6 O-vQx#9'>TTYM&d{1Lo 00!Lj"1e+DBQ08H!9 V*8 $K~U3)ho~>4or7ot-=QO5>Tn:H0:t^i Ke+VoC0*I]do4' lrQ ,!ښ%wnZN{R)h4x A?&^T8`;# O3)Gkf"z{`9[z^m6aLnycgvؙ]=vfWW icFH!* qH S!U lBx89fAay]]s H\V}PPGK5aG=!C3*>tLJCl<p_[09[iqL#v~b['ac;[4X!eR=_Cw_~> sd8G FݻF߯jz?}6`6ǗE x-_ieIxk-FM]x `oEIdsv-M [onn_n^EF\I;RXlK`QȲ"TJ?eZ6PBF`z{w7`ԍtq}6\M}yǬ;h/Cw;jE . 37]랧XޛF^;jqP8km]u c۱ot] "=l\5zkn{|)WrWj`P၉9kM 0.ՇR;+W_;z\=O\1 Gޞ #b@"s_\/:l$Ff4wTÓ>d6NFU5pĎQM}5}KLtdy'@OO̿m_eBheXd qٓlXB\zV .dZC5L h[){̍}A$O_沘Ҏѻv4؇Fml2OAvmne'.ePp/17NwuO_!y8ݸ͓w+l=ap'2 >4uʥeC֒$씵?mE|_.ͷs>9ӭ|3ʿ칅k #˻sé>\Y;)5MM/0Oщos]ޘ9jaQtx/%DGE+o&(ݗMɨ;6/'i'Vۧ۬"K 6ycu c|imex fuTH][o~''_4ZqkZ^GlnEh(ݽ٣yf4wJϼ}U˾e_ղjWZU-}U˾e_ղ}c%c*sa!ugQW<qN þCR #_JdB`"N(")iib*F8FiD@d282<$\Q'%\u/:'h@؜wvXN9/J&A$,І%wH6T!J#L0Z@cETPm@%n7Xɧgo~:3Hկƹd ~/9sydtLmy"5gRZZ&@s/*]Z>j{;q($3&WK_QLD*(5)2{VoRZQ[&Z-b_eX6%VXsN]n[ sn_':wQXQ51x^M-߿PI%0*I|EC m  BnzX-MآB}+-—Zӑ \-#0I gi-Lo~M_~geZC~OV9˜Qj+VyxW:ic2fEN1P*q DGxpӡmE Z>TWRa?HS:PTTf|C7'0e6b%IM4q0lR$ M@ 3s`"0 HRV1V9K3 wTwagMx "pE {(!b:|0tLÐktvfr UWT(MH#&фX( y$T d1 $A&$T(dD$Pa99%pr='ݳe-xַs'gicןs-g[ѱ6 _!HL0*RJ_9~pm7DWRHI,>W5*5f%C1} &f"Sozw! 
"lM//G9cb@,xIim iWU.p wQww>*56H:s>t.߲ʹL #a͋–hNlv)rՠM XޮB$ !9}!ķ25?go\wmJޫJ_||s.vU6 _e' BGɏ _`'>|1rA?QIu9ZrTA| DߣbW6ŭ2/*^AgԁMbπ-Og/zq]AQ®-֏'~̦mS鰅ۃ핡bJB!$Pɍ`Zr)eJ<1ELDhKt KӷF\#nAѵa_86F峚nbJI) 78\czO`Sؓ:622+f"DSl{`[!{n@+(O8?D|ʄ<-kںyAڧ1Ҙ%\q$EDZ*%$c%$1 v"SapXQ 7'zp$?ئ;r{=x +\> qwtt87ţkg6_{gglx,.u yqT`醜 AƖ\sʼnP˜' C2QXXQ&jt9wmՌ K?Ǎͦ7]әUej/t@Ȃ>D)L $tHUH8J-\$ CEh:e ,Ԋ3Dd]yo#7*@} 0xN/&yA&6𴵖%G8_]VKݒ-ⱻ$]ǯ*:$0 rt-0!\0Gdd䎋%XcZadPg($^2#YobA% 9Ij$f.eZ2eN;$4)AChKӪ: ށ/Ն ޼y*x}KMr m T:"_0Ta0/gru~xQn^nxh< HMӸA1]\$aY顺vmo8O/}o7I?mEAtC>^`Jݴ#~s/cΛy iacS1<ꮄW/adY7ϒo^tS Ai)Vi8]d@`nb-3E=qf<_;šsͲWOX) P;ZHz_ՅItפR}{0=&ّB%n ,垓[sgIoq[ 8\*n Vtr'dX-3RݽCdGm lK&T7<갼&eV{G@Ѽ3hc޺2DUAI=햁u\Xvqo*-)4.IF_E_5:4ێ6&ypۑ+˗q9Ȅh>ѭ"~#װgI7v3Pj!Da|gQ%tms!"dQB+/ elm>\3})8FÌcsN-QWY$ }ylspzE8܇Z>b%@fxdi"R'o$Ƨ ,T{.Xxϱ\U,H`u+p0!S&aJ;f2Rʃp-A[/5^#~g07%Nۮ Uua6 TM3TkYq6v-Op}{y!Rg[>GEYVz43P.VCQDDP:DHiR48a6.42!@.jOw@|BIJ@&P$`XyGDJNcRɜRafXTqXG$d'>ݗK½Pپd*Y!J$M4Zh !RRx-^LOBۡ~,մ#aRj8PJkJ<`85C<3RV(p*|Sԍ.)AkϹ%=wҺaឝrI~k.yߒG"L{e 8$9.*kUjarc\䃙U˥SVĈMӣ[3&{$9A,0@tv!33k;=LA^.{ͿJ`PVtb_5XR"Z.*LW6黏tx1QQNm:?gzdUݘRԫ+DoߞOoΗ/i@WgnV0dWc%Bg+.S*)z+TU{,uTw+o|=^/o|2Kjm^풣HmlZV}i;PHX摮ۆ!T7L,}J+V1ipp,d1f?rLJQl] VhAI 9FcM@~^rX3UwXꔳ- T\9>|7}~~?_#HǏ'Xipft|~yz hŻkĂCK溦$(J8ӟ|ɞqi&G"`%Ik{yfd] OzMѸoQCq s׾! 
0À@_x^꫽l ޸h&Wz}%8Ub14 fcﶿ@ vw?m92K?+à z#(wR!3F'm!EL`ڃa#1.hRBeDdD(іI,pH"٨;wE`i4` MN6bCz뾎hb1tȝ ļ zVTtYvrá^Ct ڀs8p(.Ee3 f" z<#!!˕$x=p~'QjUTR<1.)p|e)\V;5]w~1ɭS7f' ԺZwxxy@ε]vyѸMK-x4_sw1ƭg U0Y}jzrN7M%\u'cN݉ա(lo6ąPwn7k8/\]4v`Six`,(U0xa23Ń8Wdk]3!^{c$lW792`k U cJ#HmojRz\reXem]WMaszlKц`@ϗ`'N]R>ͤCs?VN9 3xvݷopyOsߴNh뵯@%l`s/G7`:|]u?wx ݡvŠ 6> `ʚoZ|S($C K)FDoA;s:aBqM<]ś;zƛ"+jx7\ZMZ[sndh|}l6!WI/ːJӕz3%+eMUˤbEn0P~~ dn=VG@ &c͠P4;kH)EВ)E 0óّm{fz6 >¶)CσCRHEv %5sҌxZOiIa`4qC~xcӢ Jтd k,)PV8d1ON'axS1sL(QmGsv*udXDAsGkZ )$1 b -;KuuzϾ~">Q@f1fLO!rIL Jh Obw1p]m$S#{eVƄ{Gn&Z/y`0>1?ײ@h#ˆiy8 4 d.7EX(nq]\+qE٢X0taE$XLjAeP@]:y<+2Up;o8f = ՞<#,RFXp4 Bw)3:i{X_g)Pe~rzٖ>#$&$s$Vc%%ا `z,*?VzMݛ djOp;30pp=>nJ|?|~i3ã;7-Gm7KT4L˙ DafiPbY K529}%y^o \(A \HZłNH A4y,_5:-t3INjaxKJ׫:[͇j* LgᤒoaUW/AϋL/g=h6O5b7y) iVplA)J[8#AXjg8ZBwtmd㋱5â`YF,B0#BXYL^jʈV1h#2&"YRgt?:&7=m5nMa5M/6K؏ ſ,+ow2RLSNq飶,PbdF/cL_ܵ'wҙԨ=Mn'v.o^AaA22 R42aBD%$@tY$) Ӂi]IՅ X=ZqXBg 1(S5;$0 rt-0!\0Gd37'XcZadPg($^2#YobA% 9Ij$f.Z2eN;$4)G64\"jUu5m+9oog dd[Oå('73P)vD``_HόG+N'ju"ݼ}ʩx4qb0Hr-d;Cup4^Lo~4a1q׿Ic|}Ywi=_7%.Ʀbw:}]^4,ɲ7o%lꦴq33<Ӳ wӡ 0wZ%ւۻ)P|Y̙833ff̧c5:sh _?1ce82S,C'bj!}Wj']J6 BôdGڶ xnt QLF*L{NRn%&o p٫ _-3XB쟸db>׷̨V'Kw͆}9ܛ65NPS[&DΠߏySʤ"UL&80[jq=`Ži[$@'}=}+o;](;P^1oG|/_F VrҀS;v==X5b,!"YVz43P.7VCQDDP:DH0N ]ioG+ } Y5#&eыYY:.VFEFx/PYZj%Aby;U &^I9[sFD+GU%1UȤQP6-Yuk~}l>v}[&e͓w_,YNm_+'#)e+*lQR^J5;-EgQjU{ /IzUI$)ЎW) Q3k]QbABZtѾEL(6eF1Mx«x|}X>1>6}XǓ`SdR>;,i\Mqi?+۴(bE9jd4{ereV$?U]Ԓa~̩eGZnp"rgޫ)MiAoFW( HjbZKvy>/Glw~<-agOoEaBddj\%|1.ޠݙqM'o[I3;?M9<Ѥ>j]9s$K/VZޯC'GO/~((!кmM[]&WH!PfNQ+Yhs*E7Kϴz|n7//,?xq]7}*ژ#0ۃou8텁"䓷/xva<#!Z%27t׌oFf.k$b.iQ`b6GMGU%6zȾVUI/#i*GZ,[΁_u]9.&X 7^ljs$U*qHHIՏrQGhuGC/ap{G숞vz]y쥣A^ϒ~0!o.}>7:3uDEcw}is= ,rd6l4o[ P2QtIjY*+r5 8EUGQGGVHӥ_Wh A*,psaITZKEԕa$D;qjIeSucFK#ba4,rmvyALb};SQOq8C9Yur8rC@:|f JLLaG M;+;`=TiV{|;'Yu ͯapJ_FFe_={{w϶\YooᎼ?7riu;<+79]Smtofvw0oכۋkw>x];_rs<̦ [}{eo[Rz.o~ˍ3bO|0f.vl+Ѷ[IG9z8[/:(om#_~{mj),7hɩ^h?ٓŶvɳ'<}Z8 YY>}]8:kW~V GiYfeP}'..x2{O[8|7|X W}./T]|nnUZPoN`t\{_u?SDڽgVߎoϜ&6ы_^t|Tq#\{uYlml~G-<^KC+Ld \5s~(pk _;\5+{oHjW` \\%Cf4_;\5+}op+l 
\5s݃aWZvjVGvMՏW^)SJnmEin5ߍ~'mXd [Jrӻ$]ruOڐnǧ磳 q~p9>pg~2]DuV]a.̺G[_ |6>_!䬧uEmy~yۛ+ -T^*yi{U ˨6k=Z<͈#W=uD0nwX_Ey 9rc.W;' -j~]H܌?_5?+-9b7{.bdyﻯ="_}bHuf#@VXdV&k,m _iNY6訽/j5&娴u)!dit2I\! 'ZΗ^|W5HftcUJD38f5FǤj$%JPSdegg0ᖈf >ev2{Yf!*!d-`T!MJ#TFtJXU@o1Yc*hdќkPS~%|-<2`bwn xr*Yx9Iv +:DDymW"clDa{5AJ҉+Y9q $mQuZ AFVB&=L}dCa Tx-AJK.ؗ`{F+}BkF$ii fHcزv'i.&$c/Si'*lU\f Č.:(sԳRNKSEBW Kn L!;ˈoh\ XoRi,"@XT(`t[\ۡQj[Q3oݠaD)lF+јY4\L@3#[U`S3QAѠ1. ŐZuvj?cSO0i%So9aҰ1kQ s1 o6`*!O޸\Q3 EJSiĕUJ}ݐV [Id-tA*D$ #IPe!mL((p dX{ E1{vp:i*-EL+|mAE1IR{He &e&fP57._R̨20[rHc5d6HC%i+UKdIFySU5Fx"g#v;‡r ` JD\J8()ؙ!O:JAʂA[**UZԦ ڛOpg4BQ.(ʨqB  Y(l NBR uJN]وReFTe] % ޗ%e NFT0̋ ZN)[%JL p X-@@It(*(_C XKqڈ~"30X4֮"F*' #Dj$ۑY 3"v3Kr‘ `Ö82QI:Pttm5-*h`j1xL>55:".%ȃ#JP$bDMNH2GE `Z ƒC m@xDAYd=حN( 2XpGW&cܠLt+QϓW% VwFR8-P9%.+$ƪb|=Yp-,+lE]Q("VR;,\%A' Wl#7(z*@=ql3J5Y1 &FB,W=guC sOs6" D:Bcsjg7@̍Z4XV=P5'/Uis&I&2RDjJh-ZGv߳VKaǠ|Uڐ?샩&L6A9%^u  MUrL:CZ8Ee=`%6/$N F9@٫8A=3((q˶ %g$)_8^#\( mKύ4f9sLX+uå>4`ebBuf5ؤQ1%\E! K]ผP5.AΛT>#t-"`BoH0P5KBcZ{ȱ_fH|?C&нt0n[cVI=@,ʶd[qbUE{x.Xr-n9[;9iSI@]aC BD[̧;|~t*-W% 똃κB z]fKRt(cϯmIikƤQ/7s᾵q{k+Ps7doF>[=*to\n\j%=S\:kJck"8Pz(C PB=Pz(C PB=Pz(C PB=Pz(C PB=Pz(C PB=Pz(C Po= J. 
`;z(0Wڧ>+t z(RP{y_d~^ j9X"]Np 5EƎ }o}:n~wׇIE c!+W Lۧ}.z8gTL}0"y q8Uǯ/`RXEGYRNBV_V> Wᷙ\mKixlsIT߾AK˲H,Bp (1 Ku7^Jy㸸lⱔ2 },\V("R "LтrF8^~Z\o Mܾ,."+b\ 7Y5JQ2iLd%(F4JQ2iLd%(F4JQ2iLd%(F4JQ2iLd%(F4JQ2iLd%(F4JQ2iLd%(oɴXuI2 =de3̢v_2 )e(#$Xz(C PB=Pz(C PB=Pz(C PB=Pz(C PB=Pz(C PB=Pz(C PB=[s0uI#CztFz(-E=顀1C PB=Pz(C PB=Pz(C PB=Pz(C PB=Pz(C PB=Pz(C PB=PgzhsglD-1 n^ cw)yp[0~Q˻#k%y͹8ʷAuU6ؒ+{:\e+ &W1tJ0N \esMg }/UR`2p%\u$ӄU6;*[̮U"\#\)NT;,mw\ WZSJ>•CpS;W`.j>5 laض ̵uzlek-[[U6Xtg=۝eݟVf#\u=^]=\a2p0+wm?8c n\ϱu[pͥ+pl*[)-y \esYgUV8&G #K!T?>x0ڊU(9W 'F^_V> W٦s_mNc{K񇷯lВ,j) o s2Z&{/Pq2l3zYLQ㺋HE*'(ei:zYqĥ'ƥ+|dΪº Fu)x& ]hFt&s)|q]hYmh(QӴ٦ff0Zcq|23N167s7':~Ttvˣ&8w><jZ0gSA.}:TvE C ?^x?wuҕ/|@7'u4Oa23=k E1ׂL'p-/h{q4 Uzޤjrozґ:N4Gj7c[iʻ3),~GNK1z"`,PG/ Kk"&OF+KCd4ycZ-.evࣰ֯,^A.uԯ5ӻl7d0Xٹ 4C $Ī7r @=a*Bڻ|XFAFpgOsuZ UC?Wv۵lg4a0m.vnv74}It=\9p09x?ōO~uCa<[cl:}tϟfo_]c0*Լ8ToQ| g!|Tmv^ohtj3> >I!muKm"I}0}Wcum瑺@\$FZ=|x>=Hhh "D锨#,Jrk) q q|g#ziy⣂.?3]a_fk3{V31K[gbӞSQQhV)<2$)uYx#ѕĹU4Ѱ  i2O^Ҿ=AWi*gMrlKIyn] $IY꤭M$BzF<-'V24= d7aDl@Į;l`1X5znm: +_U&Lq/V_k%󻸤{kf_U'0׫Tsongg۟>LV%paf .Pe(S.u%Ho rxK-xNSy \-1նs@k1}adL1K H:]W%,8OeHEL]f?V|؇a>1zyHlŵO~1O3  -S%mp&lVH U2$uJ}nJ*=I2FNX/Jg)F"V[mi 9ws~ӝ?ް\߫-f\Q@+'AN0;@6qߚ1Q$5ͦJOM9s9ϫS.+Tm#kx (ƊBhIK3Mo}uuz»d69i Ts.UOmm[EjϓqN;[AȺI5۵մbհ(h܊y$VyC:gm'ZCuݳ2jғ6!c$W,}15I zypBP>yZX^ _~ч?!_݇OGGӏ` 8j Jol¯ܨ|njo^57z+I-#I~14ϓwU_!||~xvN\HdLB2|U_ϦҰ~"5싙shkzKgqD`ݼ{r>E=rl^lNÔzdol `818'oo5?ZbI q2(x +1K->i8<;Xϔ;#(l|zQjO9MD#, өd2Ù-*KX #aфvqS@9~ȗP\@f& #‰$qpx N&Bt Fn`'P[y U|C];n=tbdk>qebsZD\vyJĢG-쫨2' פ/J,՚EP:Zq9PTHPgΟ@ӽ.σgê8=6`H # * H"o[JBeO |2g\2/F0-߁fyL|XDL2D8=.J9dN <1%b\ZSbOLu7۶in ĪnRޙObm^o-2tVEKAbŅ3JɔMee2՚1$+$O6bi%aD/F8[\묧eM~~N9$rId{5 _g(q[& km(q˼l03Wwˬ"MkherL/{-O-5緺#|!oz_usc+R}~;=0K̶6oŌKk_M^.1LSjc ޕBD Dy ԥpVJLm-_]E$tRr+{ -k;MG Kc(Jpi͸*e(&oÏcSs./kqLvO0c׋]<3羸ןfKM)S,IeS 8z!ْALJUQ*K[ egt-,6M^lfB^V_~67n0~1bt$Ó 3Suw \iDDcY.xmrgzɑ_iw1},y`w鞗`XJU RrJVRCl% dq [Oa iU3``L1hL> 'H !A=r@4lƼY:uV!!: ĤH[XA`Ir,aKWwc-YӲeM;ZΠkGh*:蒠_]j}+9ooH ֣ 
D7II7II|W4IINJzIItk2.́?+Z!vPBP|RP5@@4[]/3&TNAӸQ1'[_ӗ@gCW~0kPKpx}}N·s6ǫ׮re.Pp:_}vf8ԓhOv2ӏOɮQaX&U ,Rg2=AZRRA(Ŝ(a:Iڔ |VMtiRs];յL%vW]"xZ/ѨKK{/rzy=zY﬙[43+fV)g5E\XR|R̠r:Qu2^]dJ uI6~`#vqIBoE9qaQPnc <#6ڳ%γ iwzA;:Ypfj^!QmN:˜ $P]"AhĖK{WA58r1M";ǑEKs\-Uj$D$64C (Ey|Ik7㾓kh|7vh^ѮMG`,/Cȥ gr<-_sԳ|O+g޷kzG6\6YЬ{ݻο'u#EfG|#!sG|g܂<Ѳ0#P0+h:pν9- K Ys9?ԥb,Gw>dNIt\Yy"k4<0 m*CA ƙAR/9яwSlJa^iGjd~sVk/;^sXe3;޽G;~ӱ2V*6\`BVfzIcyz{iE}iAѾ>eo"g,O -,% sMXR aRpY_- r Gtw`Ż2k[[O~~hVy>y1d3Xƺ*EǝiL@!k^%-X܏\yڸǢ<ҕ\0sϦ.vp-" `BK 1Vd[]lb**31 n̓4opASdkJt$RX\.O(Mޔr6ȹQjEw9lVJHH!ǧߒ*Mg,tNI.֯գ"9\'>6gjL3a0w^($y8tGՍT\ʃp55)h"[ZsSGB }~ -HY <ߴ$HPZ:S x.֝>4rlRGC7‭iU"bK(F[ :aFh4v&.8yy5ڲе.ŢӗdVvOk/7]Zh? ?"9Xhǫ>-!̰h~xݟN9iB-!5iܡwB{Fc:Uwod(LОHe E}wEer`<٥=$3[\C׸=zgm"g3^vc+t@HmojRz\Tknю\0{> ߆0An%Ou7~spylKFZӷa0jOCP.`2 wy)z;zχiR py3SHa@1gz4‚Sg* z;> J+NMZITgm.Ekϯ}d^XĶ21y[LWLy,+_X~3*xXPYX}YK1u0Vhda(pm1y)'w'il+- Z1 ‚NG&LXT'/0ixYQ;4I{˞O4I)i#US:HFsʸVo/"1P)ED``_HόGEr RDzO RvQ$JVOFhrl\k\{o>L_e ]1?ܯA~,%w:rLK*Ljd<)Vה1тFQ4`pH,icz/hh[V eݱROڢ=?M;l-ڗe2-]4,M֛,Yydt!xQk/%%T -XZYh6K 8xZQ~)3XVR-( Ӣ~F_Z3~){eˊgܢY1b^N17<묡-zŠ⃕ua(ěnZ֫ L>2]/ LY.0.t]m(mrׄrKm~yY|ק,| 1_;$IڡظSzt9Kbmf]"AhĖK{WA58r1M";ǑEKs\-Uj$D$64C (Ey|Ik7㾓kh|iG;WhW}e#0oc!Bb}L7̾Y,A|8 2ѳ(}2 6/ww gz_cvUWxbzj9dJV |b  %(B di"Rvo$Ƨk$ X*\Dc#Du+0!S&J;f2Rʃp-[/5^#~Pw[keuzwXece8I2t*̅_QeFm9r;9-òVy㻪r̲Sb,A>NLS%8a6.42!@.j/\Y9)jLHùŁB]`)KݻK4)/ Ҍ'<|23j}ݒ%c 4>!p!6AQyvw̐DX$:IfH4sR1nL0N`DX  H {AUc(жĈl?% N!\THQ*a\J 2aVrY"2҄' .bJ\T';mJokjZe aĉP>1!iJ-)BTBKW܁(ϭé|By溤[l5lNG?T*?z#]-$!_79}Xg4I.Tc_^G}0oex'8W,z!ja3²1vQAE?[!{6zY\*A[vrQJDcIy߿05Azਝ/\cRΖT*@&>[;G ??{՞~W@;Way 臜;H$|XDA{v%/iho4l9$#4xu3k@Q%z;_s>}1 q{_#"yx@}(>7p;^4! . 
1E("l5, z@`_xJ.li4q{ g1Sq۹!/R C0ly2E,Z{XJ?Kͭf(3H IIIyx=[޴۝;:$Le`o``a}yDD,̦Eeݶwd 0G=c#PS+$H2ʑ`Qx AsekL\`0s#PvjEEۦ" 筫:FJ)Z-¦"Vs薭$F]+Yu OGE,|&nbcx`;Lv95w;( U~Dͦ6G0dMixwzm݊^ (Ik&Y̬DǎQTʥΪdU{lslŇ%9j{ $o<`rļ0U_d(+1׌ܥ2JXPJaEVqƔrũHg5NBTJnʬQnwy{+7調_y66Fe7%"i@aCO{Vܱ;0g44$8aX3<70k}U:`Ԋb3a%hW#SPYq 7%) 6 xZMxiFM5_mRyz/uOw_5$7B,K+}*7Tbr1-}sԝ8=Q汛qO]d4Owg?u'P*Y^\Y2s+f)G'(FQ3 0;20%ބ>ֱ|<gsߺ?%qF-Nc4N)ѱ$*^U+66ʌ56 ]:&٥&u~'=#Q.<d{au~<:mȂ}ڨ6>sωE ےwhi2gg.z:nOZ 9{睌 jbE<7p!gUJI1Y1eX, NZ'}Yh!5C@#F˫Mo|Y:lModՋۜ &cnJ0aؤi%1(UYl$[E_Yy#6~ ao·8՚JAͱ4 );ڄfR2dF3Ì?A8ʴBk; v3N_7M qSi+F5K4:3Nx wtcgt{Yź6ƬG%7%b"%Fig* J:2Ŭl.)'b(PʵJ 9%J9c07b%4iH"D]b\ Zk\ .-Ri }Z`pNL·騠yL7{!2 }/}E-c.VZʘ1Jq&M(&ۘ]("𮌞T> ] .̇WwA/%_M9^[diln.sX/;)1_հޥ} `_ {q iP 6;q @.ӌ =~߁w0篨tX mAs(AE )&qCD`0tYP }j$tXmv~`^˅ؚ( c7KlWY`)[ I1X0^~j{i L\߸? \oCv=r Ki$2FL4c,5H ʹPؙ _?;b'i/;EuvzKdA?$)' ^Gqq۫?#oxG_~zYDAXѣ'^4Μd=嚕G}'94]ukv=)|>>_;k^~Y;dcuBhdK[ 4^鴮[qz'ѣIG&5ؼPxX12Mњݽu0\wgq|QJu:1]s5paܭ7<\.ΚCnP9c^h6Cz k;@O_^:uZfڟ_?1z #/oﵾUCwa[u| Pv jFQLϣG5o>nD^Id!"/|MߍaK™ y6=H塷ErQ`ՀݽPd #T=4&+,{LPu52=~[@Y"KoΚS},Qs|/ns~houZ{nn(;੪Of.~o} fC~{So rO!~n}hSC.KB}yгF}2jfgtcPoS1mX}+}қB9?w:ϻ0('oㅌ￷?7P#v\o sn4|ӏ7`_R҆uhyjt&%W1rBM^ϵ(zV&/ ދ18Xx)ho-*im Ŀ"znܦ]ÒDtt`wCBL :w>N[!7َzN~].֛but~2Ɩlu5w/g+K3JA@0dc<m2PnzQieO%2aBbA,7gȽUԕ<,TW)A7I]yu&HlRtՕRWR]1N6H]QWoԪ侫+Ra^8B$QW\1S{Ӗ HV+%Ed+O.iMy*kk](ee=V0>_ {'()HâV^X'(ZoHd'ﳙyGvhfDI-YĮf d}ɑ_T` BqPUIA)EݛbL8tu^V`M@ZX.*Ԁ+o`A:`"< ay? 'm3oYYS.MHU}14{a̝գ0Mn.ޠ 9jQoAT*1ɏܜ٨yvmtU)_W^*g=Ex̅n|TOvQjXuxunz PHXԎ.!P];,|B/y`Z$xUcvTV:dݨus%X)&)4>啙W𯘥s|(DzN9[ѩiY7z9~W_y~~xyߜ|髯^~xΟ`Du yo~R[CwxSCx㡩bC/A'1븉)ҏwW_ם¥7`Xvn,tRde3? 
@h1tU]-ME7BFԌD) _::$h#]l`#=VE-Sy?ƌe(`.0,YT`#)E>=_ߏ^Q1ivJ)ĸAZDdD(іI,ED.Z#cN}<@XWw BQXTD'  "`Ȩb l0FpĀAr#`v6Hë{o1N9FІn6]n~sMbߝ5Uf:_(Wl^$n.L:%n.ͥPr|:ԅq@5 Ͻ ^<+(H$sY"W<2ʑ\$F=縗RID[~9yz`YaR/1jxd!R&RP(#"&;FQ4`pHnR6wL A纚-WtcIJmr\ Gmf.{L>K1VYE V4w`LhQ9b0XJB,2MRfRf5Λ7ywkGhF crFU^?*?]SU+rg\c'sC)mkm&E7Np6( vzfԥ˛l3 +ZMB))&@ZQGm X^Ȍ ^`A ƜY)r kQ*/i\+n W9/AnoS@ L$J_B:`^e) KEt!"'U<`l@+e \H☁F 3К  D96S>XG M#[ޜZhL+ : DH[XA`IrY$53ckj-R.))lD1FXS8EmT$hォ/=Pr_BIxgN[% D T(rA*73cQ6FOY^jzqD"Yh\7\(-I% AY?LVc0Enu1t;nйۭNF)+M]Kv> /k馥,Yyd {+Jc)Q"!@%K9 Q 5rh-,|& oӡuz;߬3tWsw{"-hD ۼ`Ťz5]Se2[l sn)?2aDDJ'L%. Tz5!v(]V"ic odCҶ hjt xja-[Nn9/3[ƑX(WWe] Vx SJ|̆W@T{fbMmv71,*+oS@U&hocJX)FhF9I-PU*.$}Y3٨ɋgS|:p `*w> \p}߿^2w 8/~PR:=@%dQ,,I)d׿|}v2J,~vTeƒ2?$ & AwSUM3ێh'r4nnH|"t^'ٻѣ/J:?޼/Ҍom6l+F^{YA &`cvӿL@;JQgcarE-O qqtNe93Gy3RzNθfj>BmؚU`{23ӕH_u%7ﺹ#S W6DHfzNO80u:81u͂^y+vmg]z6n">yj|H7~>"Y8Q9:_qU@q̬W:AM7շiyO߿@O"Ei~å8 u@~Cq~0~7h9{aoT!*+1v0*+ȡήYڲîDRTB JaW-#xU]=Av9&lJWN7;Egy<"ǫ?%p+F;6=qq8Y>w} _(<9>bZ<мR 9UW豃{VήFȮ&XR4& J2u(*A+^JP*ղ'ȮfR4 JRq(*Akl?JP΄d[vtbWɶBsF:Kt}gW JѲ/] ^ccW[\Ȯ<JgJmTˮbW `] CaW Z]%()idW î h1ήveW®bW `]%p<v{] ZvXSz@*,L*}(cwv$eWO]qЌ/CyR%/bnϦ'R=P"QX m97GaJJNu/HϷ@׋P*~VRD&wnz>)]NXcMq?&+;X;mxrB[cR;ĬW逰 ڃ~wE%K_"_SQY!v`}k[n.;nTWq]g?g6<@3^پl.wޯNl0N]t"h; oعÊ|.{~U JrݫFH !r2y5yN{J<{SZe2r #9uB0,Mb;ouRSFDD bFQC"eL@z!oN.ޜW@ƮByd˃WzC=o@|^<|o4h' -}w *2?_y:7|o|oomtSާF;iCu{sW ZF?>745ZCo-j^̈[x_6;͆;:b D x9pYſO I*~*e,⊖f37Es;k'e9`>r{jKno;8r;7~= rrR_NdF;6Y̸ʙS8G Det|iTYTLq/)&?g٫U8NՀ-|h XXc3!d#x2hLMǓ)Ԣ.9ѓJ ]NcK<_*0`>A௰} c)ZJH%߆K+ ٜe}SRm~DujvɍQW\7E]EjqR^Q]i$u |cl΋RH%[u#+`e46;Wq Qh7znL7Ƴƛ'pEܹ@{cge"2rk]2+ϨC"R:dwthoHKw5]PmM/xd=tnj?>9z^)e2{Ijw@^?=ed^B4tC wMD)7#X(=!s^`A'9g<`(JPz8o辛ZYJV[ t.鄞K$o8P219~{LߗyyWK/3nN@u7w;f~2 Xˠka߿b"Ӯ]%_?չ?è+r`6Y4,l{~^h,mUM?h/3MXXͯ+0ΝY-2YnrsaQ8ajR1_ne?/1Ė+:]ÚM6\ws˃Meuik 2=J~ҩ0=~T.Y5;66Tp a\DjOG<[mK;tchy==h3kTl-7p##Aj~ ye u.~;c|b٘N 20 akûU l:~Mx72 xMw»5ݽAk&+x&e@6xvc_a⇂}zEاwP80d @},d#"|q9o MvU&=lD}{?p9 Y.LR+¨vCimj |2H^k=# =ꐿ)'zce(۝-}Eĝlu2 '̐ +rgۏD"3?`>\ط/0j7:aT}/%'h.нWmP :͏ ƘeY{Ӭbp16ϳ@<E.Ӓ*E 0÷n^a'6vs]bvtk.fY^&eaqj 
5_ ll0N[m6⡚ ͕g9Ml1]`RQJ-2m\Ȣg cr s0g&# k h`lDSlv7 O[ 4qJRW T (A)3,:rUbc2fr,\zE+Pt6UJ9B@7@AcI<)mhN\CfM^Wmk~f#1HU\ ĵӌMu}sj8Xu&Kl(S3K] ț[,ǿY Dلx':~.utON{YAZx{{,Y%ǒt>bRt2i8]3Z`(H،g9r>iEآ0i 0>}I+Dga&:Q98w[mZʝѹcq[aF Uɲr5o XM=~{ų /CG}M<[CPţ0hMPh JgR8>q*y^aQ&ד2̔ɝ`|dx=/qPDs27¹ #Cdk6x=mdPPŒ"ǀ@Ƭ&"F:#q0N{%X*v\pc#rMtDn)WK(qJ[Rk oi\tY1ΐg@1YNh鿸0Hyna3Z-n-R^-o&䨈W=8gX&wX2`I (pZ3:Ű\˽B!C," dvt$x.dyۋN)An# gHiL*"o Mf`n$/( K% *r4ߗo-?%D.1h!'@BN K.Z!ǔ)Mؿw׶[nZS*nvۭ U+H@!ƚ(,6\Z3z Bi!S0TF9>Y3WG o5}߇&J?6pIނ5z#r·tBA~ *7],wcy/`z?'].;@\"xkvnӷ1TP[jko͌39NQЃLi7h5^Kku޻ЈzqId$6́}w6V}jawpS9cRT*'5m\;Q{KЎ\܇Y}oct<\ZH~qpzӊW47h*6 v# M^PvtZ7qEjqd460ǁj<۷-!W9%,Md1vm%C|>]lu@ah(żC?1ikanGE庚ڛ2mE^ecƵڛJ `s"Rnz![$[8E?qP3>Ğ/4|}_'d.VUJiX1UdJȒIC! &Y3^,=HfD&QYݴռѳ wp2(~1IÓ@*B|?A<{ڟxTr/R1_OrSJל{ӌMQߓS;*yVS j3/J:CuwݚɭjxkÓFڏ~^$e˝Ot9g);-G9c󦵯]'7 QHemq¢(RsKX0@+aDiS.H'6>)`E{I@uX$jj<.R&&D/2aSzs?.aqcdN`b_N/&"ͲzDNþ8Wu)eyD ](1z e `@0v*0s\6H4!k^ơFf",[S.+!O[׹2;NY֔dBTuL2< xRS_*G9XP.=hFHœBm")SAk6̆ LM@-&֢猯؉7DP/e3jN#c}G2$ 䠠ZQ. CN RGFr؀Jc@a-~I?cY?@RkkĚ4BZ-U|*#Eb7[pO%&[e]/3r)ėc\dv!ヽLpq!t-Rlĺ@ [Z5@A1x݂32zxsSlhnݭjn۞p*`,xvA_YVZ!l| j[ů#n̼]=9.xpaC f ?ȺyɍmxAA4ZxG f;-荭t*xf[ߞr1W]o8pY G.{K0Dzǩ/=M "N1Dz!Vνs̈́PqvZxM8V1/DMsx0_ĭ.LZ9~ aͳ;*qι^O'^U r%03/ISuZoص uY&3˥[eu .k䦑r<0!1Y"*CFL % #" NsRNbOч{f.f M9E R< tDԫ^HWSӽ{y##Hf#'wf֍NWF(^X(=de˷)]zkOkH]ڜDOXs\h Cx9oXnXxtme0`n,$!%`q1#@/H /V >&AT 'u'fu%vGu!*!UFCk)Tz{?a8 ij o#ډe77}ȋq"rUA,C;ij_U PLsY@qX.id k Gu[rM#.q_{A[kח;j.ܷYn<^4!,BZ# ./6\{Dh/V6tp/BWu+DTCWHW<е&tCAjCW7 ]!Z֝$~CWHW<&tC>%+D֞% ]m"] _ɱ2'lwu7ү%CwպGK?Zi 0\}M~b}u{u8{q[ K/T.\AO;vX*.ު"yRD;Å/"#Z ۾5VQ{!P|wTCOTC+FHʿ$\UA4b stfK0a1Q9jǙ{*3RR-\Mئhg0ͨc{pq+/FF rM+VoeYP9cE*𔎝;ņr.Ҳ4U0&O phUhvNi鎅Jɶp )cB}'DP)IDDI)Q4 B9."_vwc'umvu_,%Y{8xsTPqm{{wgT*xwtxpɧó϶֝mnm2A2OYz&%|~?9\>_6pSb;u߬קPf7o~a߾"3܃oCTv: 8p3 uzxث7}~;T|H.ه$ߘޠݺ!~cZ7E֞5ɥ9>~|Y̱1] ̕lj1i[]G'o*nࣟ'# e2T"{ttODYMP>ڃ~{rךj{Ɔc$ML|ґ<]YF}7T$[g͵Wg6oXP%x\yWnM7jduBd H(UUұUoݎWRzEǪmyz[Sa3>1,k\5Zڬ#\^)D+u_G8͚欉je C>tp ]!CQN8 kCWWu+@kOW ]m ])N>RWԆ\{sDٜgkzB@6]ץ9ם ΉVBW U4= (Qu+LYPB\ԅuqBaCWHW45+]\Nj]!Z;]!@4ttFt ]!܀Յm +*H 0'ܰ6A@_igpN7t 
U!E<=P3Pq3f={ubdB?D_Ő Noi mS_vЊO @-j}KERaTC1m[|"knK}IZίb'QM?B#ű`Ǒ>TvB}Sb2ILQVNTZeWf{7`,H`YAuw:c}25?FVvE/o5 c*ht+߰d9~]`D<6\6>3*1X ( `~я+)\T9 ճs1O18wz1e M,+%b4@S_C:Sc] -}wik3dQluVx5WzzoI:>K294#T*p)?D nCJL x,7/nz`$q.!+))ܥnD6Vl?O.\v<Á+&oXsC:|~M5ôͺKzHuAÍ"wuϘSsZz8;߆C4+6,yH'cvҬZJjٚz' m&fu8DY0(#t* A+=.k:nrEFu%yӔk8LL))i*PUXIy,V.1$vEW4I#+;YY_cGVdl/SỵVE+gTШlŚΡ9Ӌ z4X s ǂ7 YB 0P, BGC3=!aA †xhGJyc@KɐOF<`v*[hTtY 򑍴۰zftp7 W>#Ivƀb]b=ٓz31w Z['%!>~Z~, 5~9p'STrJJ`;_n,*egBJNۃ&tAsJ(~LGtՖMϭmOA?F(\%(BR~>k*S ) 7T#Z'z\fhΝ 0)*twWnTmVyY=X3nBsd<՘qbyWи>ΡWиi&dJY- r$Lp`yk DbלwûGJ/IަkO3ZۦՏC# >የQﬠrvuT+sz7HbXLw/)̮M;[2?H֨qYڰvqx5no0fVDw@;t9RkCMk<Ә4h>|MjnGP+ht$*%mg m,߀0whTvk*㖲~euvln6.W Kf!EQYawCX)+ULIy2;6 @Cnn3ѫtn{)OlyqUAOK6gγO2DCmpd@g##3QvF/" ׉AZ|G92>f x1ͶcA"np:`3roEk9l8Oj21Zoi޺ĚSQvS9ΘM  f2h{XQ„JY1y!f!QFxV|\1ʝMbaȱ:c񗨂 .VͦA'w%a8ڸRŻ'sF,41P(=cPlDj\AcvVj<*G7zZO(fUDKYQmzb)^oͩdX:r|7{ۋwa%G r}, }d5;T#Fe2WиYcBd8okHj¢WX!1g,rTEFTUrIc;꼯ع .<@޼%%nxSCB#qsg-N5: C8vVmq=(lmE6oI/A ր;إ ?/`&O$hƨSBU<&PӣϳT3OIPe  %(є*L %wOfJ%J =ɯrM TzJo0mGv${;:|E=ݡǯbK+ j%”Z9/I>A* <(TDٽqG7,$ϟ:+ؒLJАI(VNZVKm )o( 6/ttv8ܷ/5;=?hĜG/t~egidJ:yU {4dYcT~/?dJ%_S8ioso~=__Gmonݸ[u w6}mm|uZduRɗg?}z2=? z4 X*d^~Bu9͍t4EM[13((ĴT1yK~€S%r*.-T?f₷qTk/H BN-.IZ⚷ԉcNddZbبQZ (r!DΛ\ɬhkiJQ +htoa@loC*bt(_ 3.̮gEP}VnagZkFujT ӹ7 ()mUId.598t &m/lĥRQc;Jۜ5H =ܚe2ou 8IՒ~1Q<Fkq0j[Bw5ƦF9|+h\Q[BFk]]A kFnjcg_H+Jꮚթ F? 
ږ^> (WA PS ߡ"02@N6TԤ5`8e@WVֹiEOsD E2Nnm/oO5BJZ"F ڏh7F0i".Eem3N&1}n0ȎD[hd [C޿Mk I,$FD2PHI)<[4VUezF_ NW\K`14 w@IG,9"mS,QiER)=Ėvܾ:X)|m-dsH3]p2ԌF3*{r4vl Y_ r^ƜWNc16r9ּBec PC##6]K1mĈ UpARz"LOEn5_bf/^%ǵrNR(fD>]6q1Gօ>kǬzɹ߸Fv$GWp{rJuUM6*0]w8Mkċxy/þ1EeJbn˶wW/*Kc֘˲ȭ_̊9zug%s}7‹+tlXq9 r򲀷֯1g#;}fc808Oޡ Xy)_xJ{xڴp3$$ FPvu:uFQ^uD=|`â-ӸRbˮe&D$L+ΙQldJ(E'O_ 嵢ӈ$j`wkϫlY$}[&xw} -ڗDI$"Q"s7>f` K-lz^mV$RhJ2(Jc BVш#wFQ+@2M:ӥ%ם NFFi%2Q`b4'-^4͖Ry$8m3BHݲ5dwi/ȎZ kqTP1Rê1*ݚ Vd!UVXRv[,X/g}\Ĝ|xQi9u4͈uuɁBq/釐Auؐ'{Y)ϞiEj-u|,.gKOϞbCsQXȯW164 bWTg@ͽ },n[5 Bw G6~dZIa{ҙThM;NQ{k[P3Dc^4{Mgo6Yp6`rm>Oӛ}|׊9JϲW}UzY\r}[;哂Y1ǐYMpA O4ApRss\D9<ū_w5׽^*ɻ5$< FWm$+% j@YjgslڗOXurwsZTG,IB~+:*5 8y8Q:|'E ]J BLcua ܁٧=G_cqw9՝̂.d"?F Xvqn3@ntgFOFԥ|a;Vc$!?c3׏&3\V?F}G  ?8Gbg/Uhlhx_/ڗ{Չ|?YQeˎ)]<18fGQcop928i4OGxTh0_~^6^q,L6n ۷0i?? C+ubKKW:>ɛMEVXtv%6=彔OEuE$§GD{+Ybw/tc0R#80DT ehk򻍂n/7HIE(2ǔRiQJOE^s:\QBϭjWʪ_kKqvМ]d%/4fmWڪէf%/:y `H`0m`Z]5G 1Xg9v)mTC1s ጷ^OqșwRY!Yi8COAuQ3; M4 4רmGU("qҿfe 808E_'.*?L{$@Cwd>BC#[nn$M=ȶaJ}}2Y!ta] QuȠ֭ѡye@yzgnPBR`غ ".-qJfHfkjegAJuW%We()#lmKTU,HmEgaJ۾eK^>ق1N,}.ܵ5j$i`ElV;Aa? ҠjGeE3IjY˗}G ^^3ؾ4)Sno]F4om)uHJ Fhfe>>3cVHG&4 )cl}cBcjMlwiJMބׇ+M,O+N/Y hWžs P g.$)UIQ)UAi"eqI KFdXr3*6B s0Z)i0K9T݇@8aRK-P zP)JLX?="rMiξ4qG`(o7 4zdTNН*M(fGɶ@m็ fA[p_ Oe%yEA};QnM uo1\ol3[(>4ZWi@6qЦMz"BJ6 hS|>t, oW6ر2~ЦmzE L>8H!hcz$_.fYcN0& ػ}/x-q&ۓJݎQ-N& "+)?’GrnɗS7yF3ABQD )zS +Pц R$n:"]d%Y\%a蓢 r1)v3GDQ)M CA׎* ݿ!3E2}ƿ7\,cb!5OfP:,CI|=P((i G?YvvDh<̀_;\3GrWz'^{{2'/q>YY΍/ӂW``ag78`*7(Wm0W3XɪUh$r_跿˯?^7XՍalsMQJ0)9uk/F0K=g9(p #r.$Lx6x8p Sc/k_["I8jѧƗnS#M=LV]R~5Aȕ=)R;Q0;w&Cc5s DijԮe%#Bk&onJZ\m/ɛc}5J2֡|o5GbgP\Q(! #+o7 dtoBT,(2>A\fE3+w溮f9a #7=OSf:{ׅ~ϻȸԌnWq\-__Zy9ٿb_%$G[d,"HXja#.@^m &BH$3xc=p`}h@W TYnXʖ.$ӫhx?DwB*2Fad(=jׂGkg 6yLZ;k((hgk$1`N*z{m,y&8)8ds8 R _ TWl+Vx;)a$1^\\e?.&RmupzŢJD֮o$3 ˠ݊YT\p[<ߊU|iҫr ++qj{KsKv lx=Q92ri% رrzdFbA彎}ˍO'4^. 
8yIz0gk,k%鉻3z;3){ z0'01Iĥf(dE>'D3}WچH(w:,U:Ѐqpmޚ3TzxJklDvmXη4V!EXpfR [É !:J\>wP.d$}a5df(J6-A 0VȊ ɋ+)&-Bs3wjk/k81_sn'.ߍG-p-f NC&6%Trw)e$Xی E/)H|f"djOS-y(L}NlА:`Zxqۡ!=\b tð/ݱf;A>_1ł 1$Au)(8E0ʼn0!)Z!~-{Xz TI%r)D8TK`_ַǧ)K ,z]2a,0.+T0CXb%SC/KKJSdXb%^}QmTڑ+|ErRRs#<1`B=M<pڏϯa.Oӟޏ\xW0vsӇ8`I Au%(TDT0f+a4g!MB'1LBđNL">Wm`)1X56>i,S( ?+6x9XNWj?~t~>O|9Oc8>Y?n|lrx,,) u(5O`Z2JcZ-E:6|V0)nmٶBrT1~=Lf3aWܢiz !Pk}CRG~={FOuI,|4;d; /Bm{zk/Uo9[Ц.wgS/iG_:E2x@% noG^"{/x|SA!K5(-]&%F^X Np.9KJЀǘ|bӅ4Z,'gbqizݟH:j JWRx9z# HpߵNB(ਃsdS0Cuw+-E M_vĨۣ-?W.4|u}F=MTF^M;tsqM\DA*J *cXϷ;e|g8fJe^W^oɐsq蛒|>5#M;P鵝QldSiե* t/>y q+MHPr0+%c<;+e? fEWe_o3}̲ߡ(E'O_W38HZF+*r]|;VwɳAE44 A[+ W[bV:.ӕ́'OY?{n_;jp9`(4_p 5opYfQ67Oeq$'paO/a\l˳4ʛyJ]9^‹zHcweH~a۽t/aH^d[%G`eNIrSY(ʰ #`_D8ωu:d!"ǵ7<*X!^n\1#7nݷ {ƻ..'WɧbmO`ȕ.Dt-BP uԺÂ:C 3 ן\B1™7b@*&)7z@p:QO)E[Jq B}@P} ևl[8`C ;vVA/u؄w #L7$oʫNMW }.vns( šlB6W0Z uÃWV8]kA1xe+C1> +cT<'@ 6xeWCdZ`j%W$z;ڢU‹ʲÜ/ 6b\vpމ=ǫd 6BЬV@AJ8:~t Cy٪yQ xի>iE#U"rԯA˫F\ޟV2ޏ;nf{)*%>ˇt}aۚ7^v'˕`g aؐ01*b=6U[U xVAVu g7 n'rF{ߓm\^xn;۱1r\DkIGaB6>Y-#6ln+iBZmJխ9erp(ԔvbְK-( 8SGf :oWzM{>542tAt5k.B!=*/' θ,ήQ #4@%K$‏ Zjj-S@^FP`}kY.VGDp=" @AJb|1&&!54V"5i4]5W^9٬$&t+UGlxh9}Ǿ~x $:iVe3=3FIjh*?RV<8'sskiV DhԈhYi(e9 }@]]8x hFD .7$V-'54{9yo_&كr*)?. 
mU+)JHw¯y gp))G|L$J ۤso@ғo7_Mʁ*X&0IpSw 3`4` ~W7jg!_s[ *i_*;/vw /?V T67zw=i5DVcOJŮ'%4r=tT n"M&+[gGu⹤=oxp:y(YC67rGWt/>|N ɧ&]ax_KId54Kۈro|Nr"Be N}EU󠚿oGUew *ɦ Q?e՛8+ ܢmS ,O1)*0|8<Ŗ"2i#GCY")b$ j1ux=: Xy)h@Zcj]!p(z T C)ECp7bd-FQ.bo򧏉+fľOm!p}=`mY5L:56l8>=Gjh|`¦-Sw1|N[G):;P42#%*H +pKVMmJ\N$&d (kxl.;~7XK-tsw]`n^7K(Ppt)ĸ H4F&543lX\I&r?~/ϛЮ[dFA D<U6sSDq[+e,@)JjhԔ6)o, t :E w@)/Vd滬Di:Gq\4Vncݳ/ۮ7oo_bh  CCұaJaH\clQazԀ6MS{M[ vh ITf!)RC16sN:7aKBgzjN5:8m-ø,E;&RCڈHswc^se{v9i9B*"ɂ%K:\=qi="/-:S emEL zf#WlG!4^ӫ8  g/wmBLgԄfI剄>VF[K_E80"`)JFe2֕h\RAVNu`5V6mOE\xy.).n}ʅ/MȔT0tpV6].GB.d$ROdBgԶB_(!3T1&gx,3ւ vX.Ҷ ˽zz\QS9u3Եʶ6-XJs釜Ng\ 皂;|h] /AITDRQAғ_峽Cml$*{ M6\E)tHv YЃg E%{[1p`BjRp!ls& Jmѽ_M_3zT'c4μԀvI@]o?/GN$JZ;M"꫚?}Ɨ׏ǧDZPL\$4$MvL"7S 3< y,&r҃T2hihLqEKE|2.ĭMD%0 ~ C~:l,HG5 ?.!&7؛%)«7}H:!# HSzed6K`q/WSwSwSwSw/O]~^Λ,a Ih]ko\7+BٝX$$;`.bYvd1}[-jWݲI,b9b_%גbaJsP:T{#'xl~Z} ׎*po@{?-r6HR7e.F2bƼQ=7wWt ΞtF$1S7g-X)NILt>ĉC%} da -?O]k%ӳ CDvΓR1n s,y|mW;`1a_V]+6Zwb볎G4n>)[;%mPQ")JW#cMZێV.J߲[=C?x[&pФys;nkuCzweͽ,b!`xBٸH716 : }nN?}Xm;d/|y@%xg[fZqL;~/=v 85%}xxD{kO2{! 
i0A+ S2xV]G鱿dDȞA.dE՞ @g}3o9R+3Xriprxi&P]]}@]GO\*--˸݋4 epT-ݼXO_ow3,f4'GhiLpDoL2fV_h|=}AMjS!W}jjI= yLFj4Jj 3h#LU#-ϣ >Jļ;"C/+4IGsna&"4,ODSRb#MJ,CoFƫjwt䂳l?#}DK+›ݝ{r%Iz6>{8ooSVNwZb3֌1ok KTo "i1Rܠxh׾jvJjuػR0Y04tZ|p6 ~ \M^U)")+Éj %%!;vQWsppd{yrՇ:LU x'6ndT*ed44]@ɻ Ay#+gsʅ`\.:X;4J;>B;dFQpv㡊pFI$x˸zyI3&ІM24%Z<h Ga-)@Ex@qd>8bG)q]`_TtMZHY҆gI)'pZ_ejޫ*pu`pq|~k9w[)F苫9ZN82&(+/lD9J_[>-)9אD ;)ࣼө4D\ Zd͊c 7sʱZ\rMy?F"TI:zd؊ YL"aC 1Ã[[l hdŦ[>wzCV$Ԃr`X+֡ dIˍ2(տ?(2:oj<3x7ϧ{kyR_z(謶P ^+7ޗFWKy-1"]R^/K*0{u"#>?`؎Z׿}9l&Zme1Hn:]s3,fNٻuG,?WЂhjzyZuU ۬b $fF P 9(kI%~8X)L?&@EKJ1i9ma,}d0k $S6A%8[ &U ?dIc$y}J3QZV_XO2kl*0p`BumEg(fY+,n%k/<73_K@;Hشڲ3:SQG*RR1*(==y ;m%~%$fla9iͭ\e\n>Ofic [vQ-"tk%zv_BXd[pӳޮV%W1"7@P,0t"fŹ8Aehe"rf@^@1k^ZB$A28ޢl/Evjh&(O9 mIjx}`>GY`RN\HjES R4_dVLa ,aͭ%ÕR%2W)N497l硟ZR,lCIÎ"JE{TEaDRyb>Oh\?_"t$VVh}@;0BݜTȾˬ B70k0C B+[aׇ Lnv !!k7C=7:O+?fvs@aW~1ns~]Q1ucɺulݯ\cn]x[30.xrOhYܚu?>K1׺c֡9J2_!LwкO،蹻1G|s:XH9?a:\iJ[>1{-`jWJFX0|pjLv: ɑFs@,( |t Ssp\6<2NaG|~]qNp4wW/vH|)_Gw4ȭfd*Dit OMؠti&fU>ȝ:1,hn\-㘕V Hi_{5mIq!蔸MvsYJTi39hiR s7`x\fΤ<nGkj2ѬAEgl~Zid3AU-N"Z SS5ow[M'$ByBV?Z`sb&S+gs 7͛1)H7Zs)g04ɔ$c1PO'57lM_nn ]]4j278g6OF0MGgT0!/0Zԋ~~z B,ĵdl`=EڍFW4͢ev樃Gc!ԓN`2K͢l-ZЮTg{JgSoK͉ٵɚbpuyIƗ BӋzFBV3hmOy@ċ!EqLpG1gٍc< &yMdq0'$+n$,nz݈5w4p1\LkdȲUJ=#-$bY 1{$bcV4YZ9/P%1˫"򣯀w6.}zbz$}mZ,be-N x1ܘMW[R4ƴ)3DIe%PH$t(4E ՚]Ttp(* ϾwF1`#1T/n.S壷Ā79i}w (Н0n]/LILFW.!sޔ2=+M9ڸH'aARS$cKƊs,sBr fS_`^_Xש~.n%4KȴgJC2HSBI|ȵOMs*Nт09 ǠR!t,&Үԓ?Hȼo띘 'EۓQ3GIBۓοI|z2O,xѡ$!v;O4ֈEIz옪ޝ\秚  79d2k.`o,PaT 9%@5"]!7|! )4Cԍe@mލgzEhP@5ʶoO8CYp3yO8O_釰57K.WAO,-D6fx^PBڨưȸi䭆"b1DK%WTcyK!ĵZ_yi@C;3yؔdVuz*nF_!j=Ȕk/Dk"`Fj rŅR4!@ MA 44.qc%4Bͅfݔ&AL͔wf_Ik<Gs1O-,*1EOsG0hM)9/)f._݅kʇf&ҚOv*M صit `vjIj8P1  3 Ms>7gX9:ThߵVcƧm1S9` #zV8_"e&2PLÒ1ؚ'{c0į9<] &2gJ'sMJX nah_>waMl{"t JvqiuL=+"HNjPk >-BIܪG2xlX)$ϮR}r.dT թ{Ό?4hHl#i G⩕r)kWHK3J-*uƹ QJ&l͢b_"TGyωC}XBhLH3!QOcF5Mpogk rTy^Υi`'gR>Nu 5:5Yb-]rL:ZzBʵ\bq!;#4%>Mq-)j@SK>К GNTHU0z-r.IY^'Aȇ]Lv|7<gwjnXAFpg^NyIBxǏ{vJT<y%cgX0A;v 1. 
QZ!:gDRP4Ze\3Yz(r=[) Px>aDsĀ%s];od"DlUJ±kˏ7l5qvΣtGh+5ٓk}yK,J#fhd37Vr^p?v;Ž^nfw{ƣ-zqhQX)Vrsn M~v9n >]qAΉ߽jWoݛ"b5[Y1Vw5qܺ}1 [T]dw+ ~Fdɖ|FhԒfg[jWR5n@s܆PSЇ"J6AٻPlB(c5[{Q]7JYS1HV.as` qqXQ|\i 8KĈyz-Wp%xu{xx}68?'\6*l[L_)>'\-}(=t؁!낝P6zD=mpW-|K{"#&P>yWDޞ'쳐 w}T/[j8t)P6ʐyڦ )f3 Q,qej TYߔm *$܌6JC9jdۓNPr՚\`CdK ő=蒋%Z{eKq'Z rSdRm$Efg >"敤*%6br&v{N@D)Qi;oڼI y!Pg~5I3#s/M(>OvE}C8sjkeB@ЄF]F]倛+%c )I} ScY'J|DmU;( @vm<ӝ?V0`P P^09 Z̞+o26Z#E^c~辦geU_Mrv*h ^r~-DaȦ~#Ŷ^wZ&QeĀO1$g;'RWWz<6x5Wl`AQbʝ>SYZ <aƒDuw:|1up:; @فoG}aOq:h7剬9ȭ Gnw7ւL+FnesـɱyKlsȲ_1iy.z0=}_D>SZv,t:`LcTLJ֟!Pp żg1ovv8i ƽ%j':030S=}?ќAߘ'쓠s.}_xz:K8N?ލ;@t~Zߟeq|ul>tdYߠwdsAwNϷJ/Yw>[u|w!q<=[e9˜䕗\LqiG ף^M}鷚{SoXQbR*&&wR" zt~ ˬS}۵nP2aOtA%mg8MDT Fw{{Z'O%N^1_Y*mT TdΊO7fsy]kK{Z <4!U'>˞V.EAykb/CϗwyAaC(gQ9*:{iYGAOz/+nuQj9U/2d=ᗣx9\E|q{ˋ2m?G:٦͂lT4z\?f:pqoD3qk:Y 1TֺJ|<3Y5%F2\?د,c|}GM7+o-RlWC璾0*6IavIc`:-ҋ8 v$p赽|lv7a]ou)rFޒĶ_&hgivneFݖ~\&c(|g^X1 \0f?dQ-Y?+@kht=IuHF[gR*c5ʍz'I)l`ޕ~=4|:O.ժ|sJ>ІN㚀 Vc|KCJ7$?!-M|6uj5S%'1 Zr\D$wY=;=n5OmO1tY^x,:VPhhWw&c.P84)AUE{IXrQ5N 8ѪӇl: [?1qTb\0Y$D{j$4/LrB{{y~Z/"2 3^Hb$J AjYbVW'I!?`1!d2mTmެ_k;zY#Zr܆ Z/M(:e=Fo'hy>=+ov8{%;t_j^YpyDp1U=luz$Z- AN 3RR;B։jbU#RԢO wϠ]Z)I;Rϥ+TJ'-PٹTʫϩ`4Q]6l*&[W~bTn8Q-R>[J7l{ O,99<g4ej:xF~7ppt.ۚ*PLf;$jUel7NZHنRIY%(6ܒp&[̔*$4ԕ]rWlv/l|gMŲ.R1!ݺ1-g|glY֎s3>Y.fpt=B$sfH ,PAZ?Toq-^4J,-7EaMXeyR>bTS_%%xms&HzqYωJloo\w zu{M,e>}$`:cT5A.F׷A=*-pńV2~km14mAU.uBb,8:sKғEBgϦ*U]60B%14 t(Z%Wo[,E,կ3WrZ)7D %Y\i%Qbb-6qύ $jsҼip`j%da&ϟĨ=`Pr?  (l ]\lj@RL)$)ЁzB  [|ՙCSq9@ ѐoGz:x&8`nnTl}M1Q8CF;BYdp:Pg Mo:A){,AFӅ'nۼb#}B!4=MP3{DžC*>2&ГZlzL^iތ/2F#Z@v 4U=GTVpl!yNhx>YPvb 5B/^Y:nsfr%ѩ-74KB.x2S-R)H*_pDgˢCF9mj'N?Ku^D6JqY?%X~|L2 ZdR L_:][? 
Tug.C7k5C(߬6tU>\,ђk:*ɃJbf#ջgJ6_a˻[HsTͦv;Y Hä+v aiRNbQg2չgk5&z5beK-FX8L0/FY&τ!_2٧ZFg[4Y6wT"G[r-w TBM<}'V7XZcIzPh6d͜ F%9i!]x* g:] D淅9%p 5vP}BZGKZZy:E3jsvBȚ95Z0!9ufe&臣~4$AiNdb= scQ"76P,N3fAejf(u3DYPcN\2ˈf6yhe1$,$'΂ەA6_KZf+wK2dbXLlCA^$ !gLw'Y4:ƈT@4q*;MLuYக=7|(-T)Ku֤9FZacs6ϥVg`JE2+;$h-W|Rt> RXR6/5M'Rq\72pJ㙞yn!kzRA)42 ,fj5H[+kThp.Olr_NR-=ynӂ̕ve#s9"O^ &%<__z<^82h6/@5DbA$X,)ҵ -034 {PN ST.p ͵ krh󌝔k],'m8Az^;.+|ʨܺÇ娲w8P`$e"RuAreز9Kw8v4X ,%/"rK*mlJQ9IkyGtїMyJc'1y|fxbQƻ㏦h\2"wGO Ps.Er3bxUo`Vgogl"!ekea.xgm+\1O'ew]G' t)%n![JE0憓 v ڪD_Wrd ޛltr6*r[.mqt_=)7Lx(<wnG$[ʹ{bk*sVvm}\w<}uw0m%&fk(ekuQj<[3D5"6t MlRfN$%o/ZEiN6 JA*4dw%p^㕦\MiJɚh0h[HJU,C5xfr1}(Eoo JTz1NtKl|DФ082pxswyHMF(FxM B pm%uNhJV|Iot0t`/ыZ apI?\Nm4efLrwQb~׷}@ydV]]ԆEL%\\iK _ހХQb9ަXKO\>MTE/qRoawX,Q&~yKS-)䶚 V&9cLg!k_ðavߗ 7qSƘfNrHec ܒƲQ]<6y~lDǺo.Bo7n7ڍ5%qj+-QꒇF-1pYqϪqo> +;yХ3<oUTX!vǞC2'V҇x=GS8ëɗɫF$Dcj#iDz(,'v#J/y1uB> m2Px0X_,N̄f*#BvGԱ\tϏQ%UD+=azWY11΢1_F]!TNPB*e,HTY0H`8·Ƃz% հ1ɌSv7|t$[A9&mGV1RHc[-űxTtvcZ#I׌V,R`Ym]Jc3%K{?Gƞ:7Nm?a\cr޹Xa Ê{<Vnx<,$*0ckq\~~ʕءE~ RrVB7H+]ǧ >b}(ѿ;؉cy->~ v l1S|tk8m\$ S"DN)?+E Wipܚ{jV<*5QN>B8ôU\Qxx8x5n= [.C _E JG 8U洃 Q2;o`MI4,/zDsWM+9UFwT`ɩ:Q;HR.t2K[dJO iЉ9o62)2aP&pJqeԝA6ZFS֚8()I,9B浀;Ca"H) b=1U켉71P"vwxcA/ǿOM^>F+,% 'W89INXi|&#!uQmR\A 3;O ,OvA+F :3W=>>./'}O/OΛIw'e'/>n-7}>ZVZCp\:[Bsz+X J9Lj6VS %M+'7| SnUoى. ;N U8Bp$RX $ J3R#m;QRٓOaC=Zf@#YZKԟm\]=Q΀b64u hMUHA45"u>ϕ"ICjm3is2زY!zbU8n *Bir ap0:s7deI:~B5H+PFt"R M9Ky¼zM,_9I}t#hNKa?=?;#jg! CfT.˄ O=f3"Y65`8)X Bc2B.!wLuT^2fj d7zcSJ.5ʠ 1Q)*}!CugZحS(Po?\ n\fجStc./pTڊ YMXMN8%;i憕_?NRR)/N({q-eP?(o"*V})<1_O+3SΥ۩(QŽ$oeJpŶLDNmL&ݙAŶVz]Rktnqo<*rPVLuФrI D@ ge:XnG败A+36qthO ;vڗQ5tg.[˔5Rkϊa٭m q`5F\QFDO ?0;k) ·-PA_gCݥ=mD!$lk(;v*h8}N +VYx.i(8zbp>j]wt4[UiY5*՜`G;ߋ5#^w[yb1WғB4!VZRAӥ`+8Y>1zZ֍x9QȘɏI0C}/>/>/ ܰ8gi29@x? Y4Wg0)U6պ_\͛Bɟ.*bmT`a@S1mņs¾oeZZmy𢐀XlݺvP`Yw>-":_ߡ0@7֊I'g٠lx1Ni.4/O5w7F|odXx04XÉj0ڀft7>nqv6 .M<3Jۘ:/[E^=tIUXыofbn|V  +J,Σ )8&Zb]f! 
hͅ ݅ Յ Ɂ%6^ W2G?{ƭᧃ؋w@4g'E᠅!9v$;m [#kFݬ đB\7 Q]U`g"4UJZ`57ҷ/4HlEuV}'U5$dA*qQWxF U5U&/$oRS֪|P\BLm%H% n` (K "Nk̓SiC9\^Kr,C@OS8y )7}Aׅ4n_ 5(ӅJ)EZ=0c A r8`$BiQcu0hƩgI4&,i[(efݞl3e=沋S{xrQ}b1K_ɭ9yzżE$whQ=ےe{hx6n5[u Exf8ކ+ut;~r1>m^&R{]}2W7Z1}vW(C|;vA^Gw~l]}.okjnTkà8݆x< ;>:v#ֻ } 쳪~~拝N/_}B~WP8:6Ll)WL i]d-cVj5Lz:yTîoY)A+{ZW4y5i> lj+gUHΗuZ'wd6Cf}Ъ&i>*-^@~[v9swa @73.0WWOadg':]ݶ@TbwxqhEF {ʮ![DّYW`~V@ VBGe;80<[9zKR 6im*%l]^*§ܗ ۃez0jh _DZZFý.{7߽տAqU-=oSTO}WI2ݖ["t?Q:B+rZiR?y9+%mf"־DO3sg9PBz‰6+Aun~=ݝ@0dՒe!{n;*Ir?V=cuoZ-8Nb9n)0LE4_PeZ*S5nlȠ曪>:ؠ{V=Vmyюh0!a?d;A(/9k!i 72ϴ!X8-y[Rז\^rmnp 8i+KhU/9I2,T16̬ӼT&bKAro&shڱ&^O*)-x F-=6ߎWiB<83w-%" v r&^shdygIFdK!5mYͳ%Y־I՟IGi+e%Z@@v('}(9Ƀ~H71(e;f$֟J9'0%Ҡt}/zxPF~Fb ݽ8iA}1B3WctyIlNכ0[0ʫ>ٛssGlp "?,ӂL "?Te%xE#4vH ܫq;?@;~p?E5/ICA+J,=m=3~+>ƀ-GBQT}k}MCHr2b_B؀es_'!j3٦:+ N#ٕ*aUMpJV=yaLK\n*>I6;k1Y(ti(>ߧYrABf59VjK 'sOONx|y(ʟ~No5m﹞*(palgF>D|CvZeHm{J6<#T +J1vQEPEwEGͦq+@RKd'Ԯh (AAL>`̦ O_p" %/>F, u^(޲@?`a:jXGm먍bQZ@gEPRB4)Q\\Ϥh\<ĵ.a/o:xu)qTfΕK|*)ICg Z*as#PBN&gAc!&Ha:Eo573)0NI죏go cUsHB_Wm ~ D3A)L\n5aQ4H.'Pb vuf<43"[X. ˖86YYVQ@ _&Kp+h=\b()T`agݐ! 
1ց9!d%܏oP|_ ?p ; Qq)tI Oki-9=NdkT @!XoEaF T3 ʱɼ}I&s/X%'y~<@N]: P ٝp ;6-y KY+>?GuBEZH Tm!h'AH {'9RPD@S֝jk{55ylu4֓Y.J0M6W!iuٯNW<@}j_ӫyĈma+[ZS:4iJv7uKK\aBb?Cg~QVՃX:5Oɍ^hpޜ86%1.^N#_,8Y)]@8[O-E>gRhεKxWw>av&|ϫKe\LR"ȫdخlyR͓׬ڼ΋0~e&[CԪءdPVorĈ}*Z@ N>ėnVm)X6B D5qkҀYZ2MB)eM=֟$H%/0 5gPFe_Vc#yKȞThCg5cG, m:QP =T!)7{fֱ֭!F$Qb"蔉4J n)Jߑu3_!DT&@W'uTCB "%>sҒMǜˆêO!1ߟCYumDHM,ş #C;ؐj='fڏ~xӇDv]~|,ga(#)fdT4/2B(D܏+G`*+ddm.,2,3Y2+3$d-0Qة6RhL:o!\Nv1\1a׫J\_jBחNR \?-鸫5xÏãΧx܁R^&ը&sM?DBDX FS>JDM9@M؇pQn;AmU5L4z8Vb&QI51iIIJ1KR1N(60C8ghX\N@ FROAn+^oV0eKKj{] Y+*N!vՕ$ny :X`"4;<1|ZðI_m٥9!'v;fOf*z ͚BG:skprߟ@&Pѭ :GRO{N8Et1mV}N|9bZ7*% ;'B}wpڢoKBЉ!g DCNzj3&U,.>{ljcqwUr lzEz~f D.1<7uU]P9GTl8NY$%딯ˆ[f܋NcR-It*3y2AJ$MƩ$Q`p,l(ڝRMj.s)3~^҂Lxѯ2$+iϫ8c`U MIpV,܀Vi`я3&KJ5;YSN4M1֨!Zs05FUۜK]GF%㬲D%qT!a5j=kc65ޡbR6.l/qg*BC_ήrNlH$HA^R[Ky ;u屮,7ެ {@PiYwN rd|Y -4*|񳝇D:EGcl1!~Q3kTEfš(Y8Sb' Qv'0@VTjjF?M% '0蠕Ju, T΢^ZSc`Q责i:рlMO:d~ChdaBe[i ~*F{I7Hv\eFFҒ62* ?r!O]<1r:lRR[703_Qۓ>Ƚ(-"$ Is̥D>vVt3 #i4F`K#”Ѥ,N$.%+Vi,)S#ЖͫLo n].T}Q kn!ez=QV~yx<;8Uxy][yB'8?xt7uw z ‹aa yx=s[D>J=zЛ.B>x L3uV*yMXd*X7/ (/_3ւc3+1puOrWg^7{I讶AS,dZ\ufsjg|pWN}~}ۻ?K ޸{r8Ba~;6Wr{yҽx+Uo9ٽOqʿzv~vQd|ׯ{ŠMĝgm|CwAWIvix8/׿`3|>s}=XO@?Ѣܫ^MK5;͒o&ՔlفG;pE]Kųo7ohܙw޺Oc1_C_Avo[ |arx'c )(^{`)p6#ߏI>@OP2Ww8}8vQ!W-3{#egsϮiu5LF۟( o&>aWFWqڷWSl g_}f`i|ry\b99~z|x1y1Ȗ,Ӓ<{+g̤x=_vf9c)} .Yr~7𿐌MT ITEZ3fݦϙ3^5pG y BXK$\̶heLsm~ɜro2Cي0뙲}>o,q m~[F sY^0∁)iG(`c)GiƘ(Ĩq"`V.a"ZD a90=h{>`ElnXFQ/p^@-!(".cle'Ʃ#K !j$N$)v̊Ԃi .b\EuDm.{+sqfFҍyK7*Z!(V +HkPh:i&*a0p Ӎ9$ I(%c:NXLtB" a4!GÄl8>>:JԖ *A0q0,Ɩ8d <j+ we C `E 3`ZJ6&F 7V8x.MC`!8bQԁǻ i (ܿ9W5Li5N1iktG۷owv{]^Ktdv&C rj;sp9O#p6\ *0Q|5'_|5o5['d\n5KDP"eb_s!,*t5Mga ̗ jDRl Eӧg넁E4\^|IVL2^uQc+n?SϬ+K+ԃN̼7|V4O=`>ye2] >u1!J!ԛY8݈ͅs!Hue\U"O!}7(pxǷEPfAlǴyeFg> 8Dԅ=;W܇\R psY-ue=Pt'R ¶fD j|D ##Ҫd#5;0$Gg=rA[M G7^ywNpީ5j.GQVHiibz- xKo\zQ*) ̂yqR"',PGC \lbZw?)gNLh'mR)rQi$8+ QjJWhY~oF' 5PZ;NU}jR3NU⼨#Ckp_䓚"@űDWn:IdcwxJeb+j(@eއT/lVrgd?GUw} c\ٺ^0.hAaVbU4A(}HP4HYs䲱>Ő2lJ1CV" bdp@Ytە& AbYۈޒSIhBHNjD sR6qÂ2kZ/d4N ۧq[G2+Y'qѺBu;¡Uj (ODIђ*: .֑ &MqTkC=FȨx5]"k`AC Y(@Pڭ8ʶL]Q3%Y8'RIdD+w:,Ԝ 
k^A@+zDB~?缇ci8LIQ&42r\d`<,ܗ5( hĀ0k-f=ث5uȼΩ|Kv*.jwqX~˗xr]}stJ씚\%g $HS#R;Q/g*=y$c>+SleR,igfE.RB }b慠2xLQViRJiVbϻ}EAP~w'k f7B46g%ًP}p~xӅ!*:@;atJj?*1(傞|FRƴ!v;AE&?Y~O1Ql2H曇“9yosMΟ?hSˍrccr9BYn ˔uL皁Vr!XɂcZu(1sտƿH24lnuI|MoR?5MBԤ0SS'6.b MVy#֍R7169x$?DNY(B)N>Ugz . O΂:uYgڽu7˦Hj; Z) \}?v.h(;DΈQH'4 Tj0hhL񖇆*,x".6q2(M`XePe)"3JJA8Ԉi[%i*4x6=h(*@C[Ӣ*p!ZDRXV^\}^?͙Vr7\Su+J)ZqIZ[J^ZH_{,$)bN& &  Ղ't_Ê8Qz*6$ERrI2'C.9q:R`DQ ԙ^1ѰDKŘBQ3֌c!" @ȃ(5,F(~&4ȴolp@HR %rvm1zY=@bQHLcBdS>@6)GR#M.|@LVGZ}^΢n.fUgssv?!jr,rr!8 ):jJdΥ|Vْv+'- Yw$PKJi0g4I,S0rpmb-ʲ[*-$"OXh8B`ʯS(uBVҚĔTȑGaGoY]#yP*&ZKN""o\55j١F7s7KᗏV+g3q^|zo _#O`} ~38;hr1aY.OM~{t8Zǐd c?ϜB3Kl nm$OO>=2v%7dZ7/Ǽ昔iax3@U;k־VM(&ϙT"YNWf5Z*D? FMn1ZV*CX,WX|u}}7/#jdgX ej$–wʟu X*|keeX@sӧhs#2# 펋ۆ,'ZW/) 5zʺrd/Gh4w?|8VE>e+\|l"9/ʹ}n{0ݛ79xsX%Nl03MZ=׿eǥ"{ ߇o iw֧ bcFT%c\K}&zuS\R]~7?ne]yCb5Gj%Φϔ$]Ų $ @Q07UחfNoK!qw9珡ߝ_M^S$ֶHgwslqfd-)"6Ie gq*`j߿*ּgk_ce^EjJr۠^Z}.hS*xKp<9v=:ɋ:#T'M)/ eqŵsUޟ'Wʽo&W_,jTa{W/zԌv!ģL_] 5NF #-:񇈯a %UnQ\oCV~laJi>2 ńS)(?erx4&.3l&N,׭=Atj#&Rb 7JYc8[l-箤GD~#ki65ݐE+פulQC!f^5+TI8Hn\!o<niC2hϯP~|רzM.ڠh ʆD%M*cmdqʊ`sV:Y` *֞2$erRy߼ځ)>^6]b@1ɲD41^#(wg#MQV!綒i}/!Lsfz3vĭcĔ69gG92-5(ܬcVB10ՕE\~'4akD➷[4%hdΡUoަzUB𫦙-gUgEl Տ Tx#x앚/ʺ.@]+ zi$3(XRG ;FucaDpPF])Ar&dGqXOD58\BV?javThYy¬V;Zj=FhSɦw:@v9hA XZXk IDTnd'jUֆ$G9]nz߶fv(uO<(mkG`lʢX"gg sP%;V*&q$@hGXP<OlUK Ô E:p-)Zur9 ҐƔIɊ *АL(iP3ΘXʃ ![GlOlE u[ K\S),0 yKF H5)r*I*b}o%k>DAHqؘd0uCSGIVt 9sJukW= w4zm=B:uӊHxXcX([| ZoJR4:P#4 10j|d8G-Km%hs~S%;ʮȜ"ͺG66+IɶlhTg :58m3b%Q=Q-Nі/K$cYrAUEM~h\`:9F( QDL&$U\H_A@ ' eR0PDe2/.+'pQXZ H>S[?iۉ|}ȝhA K@ Ì|}lIK gs7&Q<&àAvX45C}8 n|5g;Ad+ecO(]`c!5XUv 6a:wXunma sG,/5vȧ )n]0͆ϑ1H úsXɾ%䋃Z]cÁIV_Ɉie V] c/m͇|,ٍq$=ӋoacV1*lM7vWq?z [*F`|ݵ5j4ʆ~?7IhtKZI fc\RG#XOV`K$74Hjuq:Z `S> cT3yu.L 5;X[.J(as)4&%ֈrlk헬ufrI<N}wd0|Ҁ2!@*Dj' BM*z5vH}E``${GD{6, /lKuj8J$3/cuMN&}F`R( ]ѐ&0&sgkP&e12QUhWY:i[셙Lێۏ'2< `{vUB0.= ֯Ce WZ/Tt=8kM$'e?/'J,9PPEf'{.u$w` E4UL[HIuOp)%\k!^ teEѯVp+)]̕9b6z܌'yVkx\-.1:#X3./c& /A v_*V ~Z-bkAlgv\6A.9h_'!\+f㧘y9|xNU,mX&@ ņlF$Fij$Dr3ugRRNBPpSF@T7E&@B#g 83bO,FbO,F|<˔w# $ TBxÉ , c,Bnl H)65.] j>y~6EӉ. 
2`-W`A(fݲs W8ƾ9U!Fs1SJ˽M%WC2D`XdߗUH­&Ӷ**fz/{y4`;g[]܃WR(XUujDpAQ XbjVrY0$:89 Qg1[` FSw~arΑ L~Ӊgu ]6TL(]o-S)_-CHzI;j,u)>0QP>MK?sKGTźCd 3sm>{ĨWK~WS"L0}HpJR`dwp=="G\!F,jm.֨VrӲ R?a5l0T2>9#[,LeSnr8)>l6~(b7̧w/V q gV:K=P9rAH('>xb4$8f;"fE-H N52Mx Z[Jn5$8 XDBPW 熂mIL)r#8Cf8EW2[ :kD KQ"  xE9A6Y.]9 LJޥfR: cPꀛ @2+b; ,JrZa8paL h=#&W)dj p cw{}c LiذTP ^Uuw#kBANJſOAM7s7^+2i>eK4սm/98%7[Una`gQ&y+0l.AlD/X'?d|Y`+0>F@ t8p@,^[6hrL V2'*=H@"O |,aoQ&llǞa ̇K&u"}l8TC`*P7 ; Tf5әBYvC\c57!g1#ob'ڹV^ ZíВvxHHpz;{;ɘ){Ipkfb>qvR1$}S9$Jt0ɕ1++93^\RMsk vG-/61FTKd~k}* :Z];cΕRZ-a*X;B>%q|琪ia!_)˹_[Bc1o"@>hHGCj6zf Lz0( pLGc.ia@@dAOLHk/R^H?b#``)ݳrƤv{y/ #t3"QK CS圻l~p*v9"ͦi띎/bBo.ƍ AEY5O*$Mċ X* +.HxG8{v&KfY]Uo 5Z/c=*i%k-HK.8[Ϣa#sMנdfͤv\3.7vTS-^4]=WqgDdaJuR[o/$RJIf*0^,caۏ ODs|T}6SQ)z~DSiQI~: W 7];C#gu2 j4vZ'ĨU'M1q<N> q>(蟏f qN`iEN"+4*女Q0z < L93 ?5Bbp5 uCӉ. &}r5%B1xd-9˛ƾٛSq?'nAty+}gŦOᶻ5z%7a;Ds< kcFSrȨnLmY,mX&XjzN͔@q7K{"uMK_SL2JDI -٧~1.~Z-|=뻻* vq_vg?{??do6פH}+ Br6^Sme{<8h0r"CD3"0?TשqZW+_Q 06؅-R  , Js4 FFij(۵̑輗bC{:[{~b1tMårfBš~}ŝ֙'ؘ{$HN5^ i8zǍ#+9p=S8p{ cX=?mUwud{Hn֏,b4@rv)t'SBӖkX.E73FbUWhS'pMA\+p6z&tnۂz{T]7 |j'I2\% L!FF]`672 )Hh4$FZ-N-hК ί\rLԙj&yE4s8*B=ZɁ]3 nJ#=5JI{k64q kf6"8x\n#"WSUm=Lm8goP5R$%-Z&k>BP~corM;.a&4_@sI˖:a V#v,>T8EU%/,L汧5A14& )g1L]3l̪3~8zҰs;Ӱ{A| I*dBp-Ubt~"cHRѨqM)sB9 1 -RAzF r)+s.e#) -ߧFWX=QT|vLC; H N)j C y5 T-'@8jhIB8jOOq6&m,թ CMdǹc6\t]vS9GFtE(>kJze&=#s3# Yg@@X),ʣb-ԖHn@UVU~0*v7^ZqE`w!d>>x(^h:y3Z dg3t})܋VKHi"nxy)>xL܎ؤ^L6ER/\ .p[Nέ~ڤ[e _&d;$;yp4y3zv.'c>U8:-zZ8sN.L7'`~ `fToUxS[{[:9J# BT(d4阂 \9v\:q٘/y8T3K\aRN)A~O dh~R@NvCHgOQooLXUKqT9)ZpT'HRf 9C[FCpr(3&"}O5VUk_dbon^LrY֧gQos/|wN1& kٖ z[\g>$!߸CgnN;h%Gn-ncH7.52%xPi/BUbP$:Eriډk$*3LU'ZJ1E5ʠ|o1 8ԃ.8c1qgqQ)Cĭ ZAjh\j7YAZ$&m]'*Hd#*ni)Kg!Z JK;g\VU{BhLIv;ɡv3jZ JD})hvCBqVnZڍV[-%S>^olZUOT!!߸))˙R4￞^-77W4[L).ce?W ºd('mRc LD$%ն^rpߖȜs%Քrb Y t>Rk Pp\?}04/5)ڐe 1r|wխ9Pl _x7Mm7!M)0 g Z5WX=w@zWu7WdC5Lȓ/ɗ˟ٱٻq0 kGpvَ;޾O {c?>7~@yXܣ4H'n;O䯻Zֺ>iyG9Rqy/:E^WluĺyN'Gwݶ:t^ӏ.o߿[]x# h)г8zShϗk8!i%'f(qvSb&`)a6JZbd$ɨkP1+e)aIX2].`&Q/jq D!D w^&`Oow>(_EazRWc=ʂ1ٍ#!ȉJ 2jx\SYI}y@Ʃ 
0N7\CO5\O٪ZUֹU'Rj86O}pGxvr ǟ'o !FTʰZ/ M3nH/ sZCk2ٜm#X#b&..cB=vԻQso{ܛ޴\S2msWmMJN2#!rΙ<)bŝzk_Tթ{A  JF [({[_Y[ڍȮ!uE' Qu8.tRJeFhpR'QMBALLWW#Pn-_cZ3Z'R̵~DsҰԲ,Yk(N9@ 4d U ۮS7keK?G{oA2(sxsπsj btc{L@=#^6&ʤdTւ"TW]q{4A*x| 5*&CHg>d=ernj5+9ҭYۃ7W橼z2lo4 BrlM_>>rlװ/f?>_~R"ܾRFp-OEƤ]e!"kp{God*EK{Q?1,/?bSd'- \ 5y ͦ gPU? \k*G4iטٖBC/˭E<8dqa,wlՙW2 :& ?;ל/1Sqsfkɵ{|Abn|J$e˝gZ5g 'GWk&qvS{ v盪s[Cbm@dnWg(އa{sd]1)%RP!־^$RD"ApcZ#R띉8ѱ(97ľX4_'$Dk]Bg;E#2Awʍ B1b+e \1'bLDwÓ%k'ˆ'@ D"wϲI\]:1Ib/NѡJ'wμgy0&a0R7v(W5J~޲ʆXFL0Bˇ(7FxEC"{W.P=Hh]L ׽0% 238L tN,IA0yxQAn+s0pΔnܙ]LXNc1av0dҗƭ>at0$˵|iqn=zv'cXrI2S+V7カ 2g Mfep9:(3%s2/l@ɤӇzO A8ףTRYeFl0F="`O#*,4%rAqVS!8Zr@͓↡}#{õn!VQ\U;2_Aԯ`*_@E5fcw>vyj5T{JP杗T9Ѩ#JY; U9~F:gS1YgJg--[]är 53`PokƥvIE>GCˇhE.qC]Q#zj )[U@k-\E<>-F^զu |wvYvqMUyq!.(~-olWUC\z bޕ$"ˋyi}Y,vd M[,it$,o7%KDI$yDa(vU=UU֖~}JK&?c[&ڼMt6|8t"\;AZQ!!vq;5$s8ٌS 4oj6ni<P2<" 2%{gA" 0R%!QF1=NP(Xȅ9y/ssxdGsc\"YO.2:-_0bR6g{kj_iP:u}4^xǴ] 'c# >PPZ8)#A(PƘ J{1<$\{:fRrHv޺[eԘm.~ad=xiU:Vu7hZ_gG33=L)ވRqcH( ԰RFp *\QI?DH!+eiVoہvM8}xݾDyЌDӇmk"[rК9e\҅5OGk!WJ$Q*j'ގolq9Mf&Z?3 V͙?`=A7 c(f>_Sy1QNѕ߁BAs8"`j5@2ɵiqLPJ7Adu1^#c8~/ܪ#tMKXL^ bx$2kXŦ+ZCX8J@+U:7HN+n-`IH+eIT]0"DYNMq^S|mT#\͉{rgX<_/o_ǹ!v uI7ߣ+nGz"]/ӛ>y{_ozWakwn9M_ ww;#y 8w&k T |r rdnj16;_ŘNA8Z('$UI+;Q*M/R|L^H`5 1A@ +FA8{Js*L#U\9tRķ_R]KIT k{ҿN<́-PE)ccy^2Py999WS׶GgabvQ~I/ 3\x0L`bm?(+-#vU"L#@sROR2Ee{^o`$T$EQXًYXJ)iv@d8ˈE h% @ZZ`}a`3 s͊b+x y_$R7FX2˪궳%s̭wiaJ@%_X!&>AbA9ﲄr"p ={! RXar._FÅ g'gyK,>Rg%ZS٭h?뼥zMF}vVcpڀ# _Z,\-UVK_ǀL 8?n>tT$#x+E`6@;-fKwM?ѻՀ`ԀJ2SZ YdV:P8hRj5 @r62K+Uwj3%+`N`'9o\1Gl5*׻{PPoJj(b8GJ xn:S{uۗX,z䖒ގL+v7<ξRzZolaD{eb%|pxOWwp[7OZg+M}7DCO5ZL–کQO~vLORӄQrDya>أ*!U/j?0o9 djXA<m?Szc%T.btgq ΍T#*)-7Yx1׽J;ڍUr[mz;k;Ս4"ud /6u&EnNt-q~51+A#)hYȍo^Ҽ~`P~ $ЁZIp Wk+jFBNȹrXEj4ɸ7DI YSf?~vBX ] J Aj*RmdQJBT cʋ*l-ݕNrk5[p.Z&+&UXaSi͕*D`r╄^JT~BV6[8NXͤV*--kKLTP" A y$U@@AE2SQŤ!)0ADX|/ QL2Cgz|R] BxQ,$8ujG|YtaA! ߪ&p~y\|StU{,uӽh^?Qڇ_޾CCrl={}~bV|~4/ӛ>y{k kJz?mg`-G#OpNg?sLu9 сHQ{QDW ~ﳿ3D}}9I\)u+޺vGiX?g/YrRqS\Պ&o>:yFzzaDPM/$" <#q<3zV{}kЛ\D@b1`<pbif `̩4)"!C,bBUB _1.PPYr_,cqԋ?xdW++ ntLQ("" 2聎爦! 
qȹC\GrBL޵>m#CuɹLś%56I+j.z(8Xt  9v$*c-P qdI%](I`@ :Wؼc6 _Aib-VL_LV'~j&V,ќ%w9_=U7z;] 7|70{2 Bã7[ U㔏{́tJ ƃs΁+.6fpA g>؇<߇89YE*Mw'1e H\T RUaAO?~(b4ldtA &5ʀkγsOP߿܌m}|:kC@.}цnn-v#n9#2 Wid ]\2ғ_wvV2/Q|""S[q?~T~u RHHLH; 1QII%N. eZ)3ZQjRN]9_lpqV -tǠA .Z>b%mj(ݎ1IX@ ÔĬDi91BC[)tC'p&/ֹ?-BBp=^j] Cb׉1q8rQoWh.Ճ%V9] j>9J_ 1Tu!ntQo!`qŀ\|ALVj.[*xXOsWP\KHQt1܏4v}E@h~AY; ޮLG IEiߘqΥZqV!rA $@Ԭ}*0@ 9S ImRG&=uS[աZ +b ;!ڎ4(b۩yi}\8(3ËSzC \4q#RxĖVsA#7_]rR=F~^k` ߅G"yW9Ϲ坄1˖? OIx %oO,ŒRޒ@ =!]^(Aw堒'û(g!r|]|XI$9 `"3`Ia;V߈’bJ%lOؓz%(x1X%5CPƎ%qT, $h+rn)1[F%4 ((^,tE27kz5ͤUI4N?e'jv{5 qo>tϋLVد_*sėyC8L#,RVo2 $HBA\Ql 2Ď cA1FK4 b33!QR)M/@zB5s8wȩ*¹t@ݍlg?:wvH$ *hel-Q3%ryGpFGtb‰%7Ɂ/7vQIC]@'FUnYIX:m1Ioْ>pw*0Lv)VHqu#0 "Ɍ~4ʽ&73%TِE()i(l5E M  & >ˁAN& sUz3RzLPš<)P=N4ǡvzX9Q(ַ!@ xj?Jj%Nh=APa)aMմ̀Pr B`@;0B u4 纷/%ܤ:⣘fn'ABh01H !nN0acZ1<7?UQyg&kd$ui81OwxhnJpsDae&1bC9%TlIǝy| = o/ouW$z>i-FwfS>#轇[Divrϯ" 0IE]doG-~qnBZsJUX@߫ig8ޛ\w@@^"4JHP t 0UZC,쳟],}ߚ,>T"A-Z673Vk) q&%;#մ)V# F7Q֩Iml׎$!`V) gU /(k-s1ba;U]q@^ugT,yxGTb-auul3újB^v[]HF\z+%yfR\  F08P;˚8kV~ tY]gXXWR.}񄿮9*ݞ>EeG6  qF@n2'XƱ6 && jtc'1T/*:PPQ (2=p_cdpNn.N/̌FY9ʼn&\ N4ODDHBBn'8aJ-30䌗f)NE/'[J߻^K&YG<2ܪ{FvlrpQtY/L,} t!@JJ^s=>T^:߽(_ӁMN\u>ɩ_0Cvpbsgfw R+nЫm2EmP9,cT!O4 ׉+rұF++4&pA;C !|NPpIX("Nq//㻼n4Q<9J㻼nK=]7lBP)+:sTO.`v2G&kfڿ뱱_ I/X˺W[ $?l|B>+/l+@gZL**c5Q[N&z0Ơiuc X%*hIYwA#K8vnU" 0mUH% s"J Nq1Fs=V$Tǁ5dEg6׃[rs).}I)ǷKСw_t}t QxNVy%e Cvo=MoB\44C(U[ u-L2 dI } [oR=TA-gRAWQ3!1ȋ} ~1*RZS>-b'[X^VM_mf _ώ?>4󲄦9[J׆,؍nwQ_]E _=E,dy'>'h9I#DwPc8z(ݗ#B787n46@Yk~pH>7v PP0Âb #x٬WY?e~:SYVʎ>uL>Κ^x̷֨ ۣrl)yh4%Yxs@CL܍..W8xnjԧyOH*fRR/kbxΕxek@h@kIx0̂,`}K\ 󰞵cNbˊxn-/xPZ& &+F=2 vhnsJ̼3ώU>Aqw/=:`WĽko|!#*~,NZ$R)4ŚJ7i#+5Bbsiu +o OJЋ"Lo4J`tRB.qzGf@ZYt8Xhw?j$I_i۝ ÒufݭF;N̪ni3JX5eFQ L2`ZK5ՖRa,8s 6#ߟ$9| 6ۚPA(on~"˯/ #઻KZulQKۡ.?k))ڡkZ!IxkjGM$-QS1%֪T *)5&*0-h`3c.[>DMԸP>>|0_M#wod2Fgf9[<|ztUL jlסg_4 >h$?MuWS=]p;-#p˛t`M# T]gS>M|4{wE{n)7VC7hL=z5`7by":D]~tknфj.$'(R}#Fcby":D]tk*yo-Pօ"%SZ|XqK[,BD';h+ƣ`M9MEݺ\DdJ~X)/C隶[$hd]/n k rMnlu !?FɔW{w!Gڍ)|i,hd]~ݢ :\De ̚ =P#LFiiii74mѰ!3qOJ8) ȫ\1o*Y1b{rBY`ӹMᳪUqu 
Ppz?;]-?a=:(6ӝQ!^Ψӌu8ɭZF=3ǐ%M<7W\|Ȗe\q+܇vO#6d?#,3+%3sLL3sLULL>i| 3P2p2CwB/{3^)T^utqk#ݮܽꎽ+"s WgpkŰˊU4h4+he@e'8S4خl=Rw3mbϸ"GDqEvV&@RSL\ D5y2ư<9.tYd;00]| n cbA-BX"TaLfu9[#f,ͨќ2K+B9rV.(2S n9/x~G3ך6#r7억5ݮ%=f{see`zw%(L9*"Di;|+xj(05"`(SR rV+)ŇEl:Wcjg6/e?N角vӫ jeQ4Jcmr.%,g0tF3T"g5tfˋ\tXFܤ5H23JYYPuɼ0,ǤVUCFaT QTdK٘Egf2_hM 9V2*+$O.|kW邔%&ڰ2}KMӢȭe%/,j1Kb`n^xv>>_Ƃiӄ~-E(,#BDA!SM2T[T!NNP(5LMH/iX7!k H[n'Lbk8"hF,>.*F0Ѽ9'%MI&ensk JulsdtQo6ſ?s @|zK7N)HhCro7 9؜]J6;ZMl|z{y%p+Ut_ [9;Իނbͅfu.ΥdLWjyĚ PCL.?Y}/V jrO({EwVnkW uaq5 ZON|ߊj\g<"Z/ cs79@GfNXw͔x"*:qncVj!/L0Z>nL~Qk뽞6amX ])app|vp7v-6Rf}X>N ?,G3W]gηYYU6 h 20qp@>IWȍg@4$KxPc!VzYJq`e:wF"T k{GY܄ɊyUYe`ٹ %` 2؀B |P2ʑιʬ=k?92|iگ"%#<\Y}maÍp.nzѲp)`d"OWi9a 1pNgD|/xrujZ5x}Ywy- i`湘qmiX{j$DЧj[:^Lus{C* @]vß6엑cu,F[VJ:E6䗊NMSɆfl\R$J|rd7@ Vj'^`o"( io$Rt^Idb9QYBC, °OzK"^T;j4?xLUNa/ M0yi|Cwc8`RFZ`1j=84K~s}xW3}V";u1ؘ`x1 ËwpuxO/+$q=QSpr`&#>{71xi;YܢM&H}' `kGz|;G3E{ĔX?&jư47&|W0GdL`TE$}?ׂIԿ9h@ Z.9lwTN!lmg?Gn{"f}6x~M/ -}kY5ԪH{wXy1)_u㝑u| kGű,;f8[ _xjܾLIЮCrGҷ!I8$""ͱeZ8 HЮꊞkjv0$ID /ZiPTރ` @yuBZ۞*$I$d!Ay/Tؚw}YYh98&R#Lr65FG S2sx_͇Wo{s3['fjbE~}0("r c<:> ɤ7 R3u/n,̫޽>O8p |bi/{{\/kœv=N P<$#|<#B&CDb N*2?/OP%"Ltxg+-< +ߢ5O'2Rx5df-}5 !BRA{pm3CL3܃.BX8Q9*lm4= I)[7K5PJAIƖ|)s8`$FIDj&&h@sCP$5VHkûGu<Eծ{,x&뗶l8tLdzӭ^Mnw`A))[ 5S}$ ]J⢺qQbCkyQs:*~MtMzIրN9}cF5+b >o | n:~C'}N":UčQ1Nɦw#D4[N{*RH7]rB*"RY+bMk5C9N75JJ4 ִp G@'+vxp Mb7]c$L&LJ"P2HZsܝ`[vnXRMٵR'B$$h [>(?o)\+v!^{%._Ib sJ^V6ɸ$(uPhp * {3 įH,$t^4 5U)U:=NH [sNڛ$|\u ƥdRiX!NUAe1%flsF8#R=z t&[ 4o|`r "Y }]~GPIoU Rյ&_ J]qGl[wCvqG7}T*H0,1Qح/'k$퓚K6!WOrng+_K[g(9a)UbEq%7n&?w-|DNb vR&a²3BzJ cUU7\%'@wr?NᖽG&H0 ]ūvց^jN;a*:ʶ޺w4:#^D߯ToSUQ];~4\YI$DjgOZw5,9WD+З!aGty(gv8(|2Jkv]}em7??FZutN[HYf>|s:2~j%9vgn3 @ 3ǝi_jV}%]*ӝ,eyWM G@#-烟7)I'gJjuzh~~pC3.Zq!Wsr1sp$I'y"H^$.r<^o܁@ d_cM>ǻ ʓKlb"V8c#peRc_μ-F%g}f40 (5ojKW4j ssӯow(yGE!X#+&2XE$og%YSDdDYإ z2[{? 
H*X "$Y<6?pBM,$qf _'lBoXȲ\\rvg.sY<>lHK x Xt+-a8$D"*P8L+kX8oHthX Di+%U0a%8c2.ˈK E .tioWL&?z$KĺO1a%&H&*$fYb yNdO12slLb,Ӛ3)W9{mBLSI iOLªd(ur|O>7IL:+I:`X,.b&tXjb} DHAu"C`tRRR-1r.ň$׸$uV%hKʚH \j*QXI CbbYBH kD+G{m3Fs VKX:.pZg!d&)ƚPɤ؀ޢDNq'7}Dj }wy[\-v)b>]sFG"f|PeUſZ5kkV`Xn W5 $8P|8_~vvC˿A}y lk>Ahq ,͟ޯٶ9Ï  =|Tyܝ*Iw@ njH+D̅He ⪗<˥ 9H/.<,!R%ƇH\lWt>Lt\ŊwvzZ|E5q=\~*IZ6~bToZv~xJ2>~#Q}Zw;j铇N ֤dwqoƍW;+ ʴT(GOKBRX !mdT'MQ„ IsnPymL4iCw!}?,ujMq hLؿd[%'kf '4ATS"rxu&kWba4FCB YJbi]4ݧf2yX+Z=M[eH XcYlT1#N";gR/bi5 ET9$ ,HA@3cO ;hʝݑW nǵm)H\wmKF$'!cIy4& U"њ0‰ȤRP c.3g@"E#9JǙTZ6j~Yc`b^%Vֿ?VD_UX&Jr;&AVH1JœBR-@::&鯟~Wy^GvYliFqˋn:C]w2`&Mg!7^k1Q"=ߙm2Qqw&~Ljk= GȭR׏؟5l2%糫F ]Cl#:OVދq6,nYAn&{d65NSx2r_<<<ʄ[N`ITTNfkvyUY=@Ɇ^1mqBO@$Ɨo9 ۞T!+쁭9ݓWxvA;0G4|;ޘ!*RTgQ#2K&{~tDw5K·n.!qȸ4iC+6lxS i?i<([>tv+?N@]{h3bbQglr,3;dÆ7%:W}kQKs cweR+D֨w֙~=z ׽sjY2 㨠NWx- ?E?h#{#/H!VyG17#OL?p#/8j }LF^-;B3:#-`O]QEM91*Lƺێ ͠P׭{%IJp/Nɞ7k%7;ab7WG=879/pS/hmT _c^p10em^؝%Wok[SWi#Zpe󳒍kz[zko=T}R뤮(k9 lW0|!4FNQ 8)kLNɠHKDG~pȣپqJu8b O,Y*)-+4Hܪw4έˆKn >JniJq sGnok]Qe kp*v0(CXc <=d޼71D-Sb1wsf;Xnum$(\Af|e0&Y,J>׻| @,Slȿ#c -'(\&gp_"a+ 0 ty bWIN5n=x) 9hjٕ@^K;%s"hCCuJ]Y{l,HuJv)Aj]Zz )qAP]A 9sGa).zc<*ŌFOQjF8S_=bt\FRsg#AѺq4;lc Zc܂pn#hHyPDYbw iΛ,5oȌ'i$DeXT0Bp&1F#Lr!F3z5C'$_i΋(]a? 
XV/>`bQaw%NW߼/^އVl#>59%-gA=~[vodgMz@NICK Ho`a Fݬp4e~VNݮ̗؝\,Fsmtm0Za&r8@ʳ3HQoTώ󑐂 ʶw<Ҁ*F8V$\%TV@+J(PTIku4qK绝H5Lζ5L"ҌG.w+0:Upǽ#,U[F? y-z&@ {磀F u@QZqJ=sduG0.H%GY ҏ, 4ٲ%]i WM}gb#oNFzzsuC׮CHHZ" o K6h댕c J:(Pz!")]ɤbP+]e621 }EKVoF؀BJRѧt P`bf־%~*R0W2iPAY)+CT2$ &3T%SZQs@E+}+~v_Y^tHQGb^=}R-ʞtM_ZxI`ڸ| bo_.b)ЅPEqA_4Y!?]^^\^RO~̦) R/gg`1-?[/VDZ|YV$3g-5XXd y8y-2SpG#M&'*"\yvytcP!\d novrCNW&FF""7o23͜t}^ :ky7wd>ږluϋko.)`6x^s{~J!? 7ŗ ޔx@ zݲVvgp*l򖻑0QLFc[_3Qpk5Mv?BZB^ߡݜ#ɼ{Hr~}Mʣ'6j7U= _0­p =aXoϮ[>=9׃1RX8mSmH^7ԭqXę&25XcÏYǾ'HƜېp%?[X3Aڽ3; AӏCM7]Ou \C<8?ԵތhDh_,^*YaC-f#HY80fEGNZA2:*`]GIdkQ+5SPfڔDZ)'M҄ ,r+%2鷞5eTQJ&eQ&m[V@c[GHŢ}@,]~0xZ<ƥ(hгyZְx,֩sKNX5,i5K1[bL ʌXcӅdV{n+QI'om)E#2^u?'Q*062aN %–%šJ u@KWo!Md0('TZSE O1O éRH p:V6)E䲬[,64̈́MtsXI+qYBhq9xEYLesò4^("+^~A@; V8*'A 1X:E# V|d_ZbKokq#7E ~X XgO^ xb$Huli4KKfw0L_ȯb#HS\n5H#5wZ F_q_qi4Jp00 l{`YhKXMm7VC+)v?F(^yuc.- nM񄃤znhKi"UfHw*xBQ;dh%D)T,3ej|F[j~,[x; 9ȃaj́ SA;`hQXwXis&S;.\ԎDm-ՑWBG-]ǛVR-B{*3T kŭGԐtf0,կ 80C #Amr6([Uאw)mE߂{>;/CPLmԥե! %/YT Jbǝbzv qxl0zcNig ݁@Vʏ7}0W㨩$rrT:겒ZHd7u~x1Gum+jA9aځǢ߯jؚpU4X%t{t:+^4vZ1Q )$P("R!I 7H (j\MRytj4>df'W5o5VOQR ᚷZY:  qonqjWZQv}~g/jd~^IkE[^MF5ųZ{]삔6Жُy`C;/i\\'b%>TdS9&H%"9 "aPzl(PR$J ri/B%K (E&,Q&~("KRJdϗvg"%R7|V_ƛ0blk=L|O(i'XU}ɫueC om9iE+Ud_[nEiyYZC*6(`~0G(0XD&|f3Oa|2 Tm? ?0|1{tώiMS][l!ʿUB o*QH'hnlF($DLju"? zS.J[toG_5F@ZJi5:v@{@f˓֏|uebn)8q\@+>=F!\AC.aZHSJtjs$GWCVSG6#`9ˈ0υ1q$E|g{=q~拧PgFH/**J4R:?sK)of.>qt沵3jLalͽhս x3b !Q9=2ЯjpW,xB@daBZԐĴtO1CM4l7܆ol2I9@Hqw9ۧ]B΋K~Fhee[Q.U2xmQiHSmTC. Hh]qkP֏5)8\фh m$% Ж3^ni])0 86F#/HH/*tΩʍLr`3682xǾ%ż6Zv-ռ01Gװ%!a:$=~n V3_'"r msc9Pb#7H Gj&r80żӃRvBY@X CW HI3GeNVK"b)2@9 ODuLSZ8jk~PPJ6VKJS*9̲6ªv𿍬6%&8@5蝱.c. snٵ˰/}loݴ|uqi,w+ 8gBk9"o ruǃV36OfOxEG2+G%~a=%)~߅ۙ[r,ǂL%8Ls$_ּp?ڵ wׅw_0kZ{֪[D\_;_u @s>ݲs,OGK9̎ G!sy=sWx? 
g({l}uߖ+f O zNw3Z,%R)0)A,SNjb,.O)؊sD xYЋ6y젱dx唑[]N9D% b:D]C+SC]PYPfSmCUU\EW(\ W.P kP?A^PLxtL DبPO]PL0*:r >fg9X zC,;EaQg GSV:W BWQFZlqFEX0ˉokrrݪ/C6cf,go`Y!$J *I7Dr#F@ $p9rk 9T*-W|.;ǻz/;{m6 {[!M06{"G XQ"7 *5E1h<ǐXN(7R` $s+ȡ9M C&G#X272^S_ީ/׏ [Q#lm,2: v٫^ak^?#59ѧ9!4^|dg/wNob`b&ghMPArj(9\.­.tݏއ{o󧅛p"#oSx5zo!`(oM0C +eL&OʁIhid씟4\#~Po rIkyFәe(ôᮒw_2](uvkPbEԠ|^h-~$P]Yo#9+_\,R]3Ӌj2dՖӵo`JSdb^Upe/qw LQ) Z+#U)*rJ7j7`nP]K]G.B r^bShY)aP3G19 VB j€0%5x&rR,rʑ)DI$)F' R j4y] 5"b:Qկㅬt;^0p³^Z Lq{ Ndب3RfS.dv_'8e.krfIN)s93&2ͥWpHMA,\J[TNie¡1y"sLuk:IcaT ^ 1.(= YoEN \UfSVPI<+x]Dǹߞa(3&X"4!{C%#TE6 ة#Ϝ&%0 1 Aجu|&rx0A4)c%5[67*Xˮ~[L8дE^"J J><}ϯrc5Q7?{4L uS¢ƅXWټr`<1N` ;{|-#1k]̶zGy12=#Ors1.=1`sk˞hT-jXmG(̿K0[g}bz{ȚZk{$o֮ڟpAfݖt*7(6 Hm".Y()iBg^J [ɢ/E}Lk1TjHm)|𼑄4tΛ;(u9WMv㭰 YA|Gi:c$0Bj#&1Z DQLC8r4%I 0}]=١C-; G2_4TsF΂ҐB߮qŁbFk$Q_ADσ="zfq`k 4s@*OCe89ϹUzo])y񼰥./[&MnԱ:JBO(U*@@J0OJfJ(Thw(F4 y3@cjjW)nH 0?*=>_3M _"܍xڭr2 za [P9q >x)+~.Z1IgАq9)9XAEv֔v 7@(!WdDQ#Ė3FGdڙj5@w$k~v:K۞]9MqT+)]N6"zv4PD'+蓈ӽ3s;DhècZDO" !ZqpO BDfe]͈:D_9xSp U$S5pfRT3^B D! 
M +rrF\*/ QzW1!RaJHeNC!O~!5 ]$#S2rƢ8lj*KUAJv$B%/N ^ }ԱI-ekA"Հa L2$uhCݷSj1^Wƫ C46F\ǟ9=BSAygZ*PQpH,ׂ6 kħ%S .)흛҉iFMT%?[26\G ̄YTkQEC]N!Jt"iEǺ4Kt)z=.ogĭK021DaZDBKx!a V倔Z 6N-LaЎw>(H,>b\:DNN aʈpq|ZJMEEBN,dɌ\)r/K 6L@\K)Gqn;" WϻI|B1il1|ok79Qo Fy=٢Ւy'ih0'2 f5ĀC^H16~\ևKg*ۼRaDbiR\t^xtȊW 9Q?~+Bi 0ae)(+{ =_2 ѱIaZߟ9vc;nd~tc 8rxh;^h~P"8y'hɫE??J?GygMIӾ1,O.} dzgïYE c11LGqJw1v>OQwIк>zIoiH%(yj'WS$KoQOo'7w_S[=8  xg'=ڀGь5 -@ pt$WO>+$x*3PVUJ\TJuQE+/T%d*s y Վ+֔`|ZJ Ӫ*@+Es@T5]u l%UxAxj7EY:*(ӞsJ8(T\:%:;z<hz)p;+s_Z+4 Ģ=Bh5ůtCsTe<̣dzbY*@-V`k]?!dg';f>?Nrw}RVZm12P@֕ZjU_ ] @dLU!T(ƕf-)qt,- 2r=h 6-V:_E ,Z9V,L=y(uq]~;_P0xY;TRHe/c\ N?|yI4C|E>e^3)f]˧I5/Vrv}qvi3!cΥ/2ͿfOסy pmhiۓ ݧyīK8nTS.A q1 mY;d ĘhRa;Eṷ#R5lA$KJ[E͘iTaY+^"$2T֦wJ9v("\m8KN?#4Qbȩź9PX$,O tSȦ do?`[j>LCJ<~lyIs@>s6uT@lN^Rf"$b N8BiSk2&@°h^J>ʐ 2tː(.DȺƹ^9s13mV;O&-N?^٣Z#pX[TPG01EZU^ˊv+Wc_cHwQ7e[kYͦ7?|𢐴re6 8*e*UIf+v€)' Nc@8ɂ\냄Wߦ_[e*2U%TZer+&XA)*$!*/TQR9'xEa<4J<&7Y5+EP0;% VkƺB k} =ύӖVĀ-9^C}~1-ou)+%p@UF g-&|UV3 R %s4Uioa4ozwԬIegN~zhqvH/n?N£DHȦ:%T'ǓEy;y}`y ˧?}Vb/)Pmzȏ`k7Դ/Q#89 e$AuX و!KrZӯ_"f1KT֮ ٜ;jv׉l!;3UTB3e̮]|W0s/)WfG8̲ܺă"`LM1kFTÈ7;$;=^ V3[ _(7w9BqxەP y%ٴ, T۲*%JWc]i73b{3&LMߪێw=1 ;eWR 4ouF渒r8EqmqelQz!j"6+/ϟjW_{!ctF9 ^e~Eih$PB}8QH (AZP"P=Ծ==`9U[1AuuE{0tMGkJ<[' 6Ä-[fa–3͠4|$s4lz*%m>B/gڞ۶~)m/C'N;o/f}i&]r%9m)Kԅ($ը-SKZKѶ&s`򶫹djujsy%R<1"Q9Mj.>oh3pc%L]rghZ"yWWףǮ}=z3y?Qُjd r`Zyj~sn[GI(_̔=8UiV35ڼQA<Nܟ1Au9`ϫ/(kZ>[},T)ӆǸ_\'GWQ&5'xGjYN҆|"$S^'h7N P |D't&ڭ}-|l޻S!!_n)t @Hsxo~:WMMa^?A"A$~s{}wXѧ>ܫuW6~#v2'LƮ)F/uk%;mh󰘩Ibix_;1Z q7 JfWo޽G㻇{7hse䨷 IxP%42=!AL!es{XQ9 @ٝxuSՆNcg4=wb2)'1ﲀU̼gJ>tiOgƚńxfz咨RK@2Yk%Ԗ!&*]0mYsN b?zkB6B+Dm@oj?+{3$=>Ms~od;M]n⫛e[I OlQ??X3hGHts%aqfҼm"1{S8:G>;\ ުlzpԙ-#qXssV~mʦO'+WnU!41HfR`햓+DHF 1ie|vnfd _8[=V O\x%/"N3svYYXTN҂Yo%z?f+FU \g# npb VZ ȑ,;pj,9qErk%"tolv~EFДpi=P.,2١sUR/ڢ\|rzm/_@Nj!5cS)5rYϏBذyAw w>ԍJ cmxU8';T/8'FsC{{nSH_0.QPKcA}7 FSMo\lCX}17@MVT~k7Hmy%/2\E!Yp2|-9<iI\0Qغ޸]'I.s%tND9,E"fV49 ѷV;]=*YMDžbݸAXկIj [!Z%?/na|p [ cۦI5(2%(FRrΕVȱ(JR)pѿ|vatEQ@/FJ.:nMeYblWgoa\Lฯ)IO)& !BC;j {5M֓&Dk s1 %5cvwTv"N98嗦~*{ 4?4>;.j&p]3 u]T 9(z 
6N-?93q+TZ(zN7B˲hP9|(:/Jy51YOc[bGٸq)GK˫Ojv37ˡof9v8 ۙ_!|֔P+o4 ]/_VѼ_?$\/;yuR_\eBfXSy.!EQ{@T0R3k@m&^[Qju}/Ylyo[{(_r4"L# t=}y{_Ow.hҵGvt qx¥kwuڀV{z&{{6kFR*yڍ`An3R~ߩPH,z.=EJQ&  8GÄ2{m'BHx"3n'L$6ysa(98l|&G MMs*prbU^"9Tcmx#ɦohcJM mLRv_Dҡ!|XGgN#(!F\qJ!ȍΌ_L3QPR]V`e|'rv4ZS3V7ICʴJ,E¡*"=\<[ar 6@."v6 cE DO$JKt T PُT.2O۔bbf7u S2s)P X,!T[so.$$va uTNtn)]1]|[TXu~deq Wp̹^l;hW~ tuYZn#Db_qdI7;%$7ֳq2P;_)=@YTX9/ů E4@xoĈڍyRͮXFhonoB?}Z 9dؑ*F~|P)K Xm}VOƊdl#V[{5+;оHH1}n-TW (z]j>=[aPjA On}뷖I*8Tڜ2>52t>\)f>W)Õ5SIh6f3 @ioVx\"3 f)\iVpw?^@NnJ8KW++cg3_-_^D/ތSZfήdlYLi3ÌqٚbvYU LMuWDOnh"Mpgro9 r]3SppJnCH]q>օdwhS?KܭsqxjPTBY+czȍ_%'gόY=,lm`/ֻS tٶbV#9 9gGUje]ª"Yo-;vw?*_==j/Vۏ?~Xf3z~9#./N0-W'.BS¦O˻WO ) r&`O\~G)#3 DO&UTxj3^Zg wғ ޟX~b!s.0 XAd-ᶋMacmP~ο '^ikgk PL=[8L6[H9liƅq{ -6Z/h Z1hw«^bzZ}Z/.` [!yG!z$Л( Gli΍Xe0`L%W{d\W翚3rao>$Z3tŭWw-_UƽZu6_\K*R+^ǻA$Or2%Q$1+kΓ -k#j:Um`Q$:xH%( ݻGh ~#&)(Q[ ~8,ٙ̓\$t^l [am`0Lr ȴujT)cst&If,rL$ljbG㩦B"SDU35XíUTZ _WbLlbjZwIUɴUsOakbjҒVb"[9r0Q$uGA`/=l5  jdxmO x=ME0pa-VS 8sFS-㨎NO5b=I=``/O5*AتEuf2M7llUrB;M[whb1wTn4V9M[@Vr)]HOޱn [,!*֭5%?hꐐW.12>޵ni,!*֭ +Hnw[E4J>K5r5hMP,>= cMRpcg?UkMQǓ=EIo W^0u|<Q\=76p>ۡӉsp~\'8G>[gO6:5t/}M m}WjٖM=l_?:[z1 1'vm6sع0d+&*Fu|9]VZY~{.WE_"MvDxHt3w-ƺQ, Mt.Gݷ"%\xbbuznpHpUnU:Y,?~,3@((en*dot t1w3@Ywl~i*Ĥ񝯕5w>˪YfUwoR_#lZNɛ -㌡ڭS}w_,vqvfDN](HZD|X)b]hv$O|-u66| Zkɳ?Po/N`fA6&u]~nXsi0w1BplNt$~{E<.䎬FۿRkk2! 
N̸\eܰ$e%r] ` ӣ.T{Q^=C`^:v@l#+^aHl/8A^# zc!&6aKwY):rd夬y΀xikJ¸YɐX{I| C yǠ"zҴ oGp ($u ф6=>]>ݺ+]NF]!9?_2f8J/.mkj7#cȷpI]tAдdz~yUg9VC5ߺcѯ~使E$y"-9&Tu%5@e4,&&EH1I4)|*y5|,[b&')~'d@%8 wi߃o*tV!d:IMRΩ0\B fy'ZYDRPkʕ[:F0%UmNq2/W [/V7H_Dl?.O,Lm 59hjcfgyNܖjiIdFU&g:V[FH-3#o'gߢk$Eo'-9[pi)wE<\>%ZIH_#o(2p?)>k A17:1-Sae:hN4`3rW HIK2VTLNΌA'T VdQ Ex jɁ2B eRsATF[9A"fZ0~-V̇suzm%~syH~I+J2.t՗~|쌞_?v~N\\/uO*Mw}M~݉[+pyV[.LySe2b937+~~v{hs":Jӧ'WQ G,}a5HeCJ)Bn;:"P+2`GיD@*4P#2 9ɜ9I1-@fM0UdO rGLO&OO!^7eӓcWWB E$ya9/+SIdTQ,|i`H>]YΘ)c X+מ R0h@PՆ $OS3iӔljuFRH2b ;)*,-':-Y{CAyވ ء|濅Lug4yg YO,w9#1{1daj@9@#ka֮SKP+ ǣTdYuB0-C}gc uA5мxD{ڴP2cx'q0l=(ZR1pæakUJE0$wzl3@uxgbk"kZ1*G'Oy9pD#]mIz+>幠uKxX _NZ+(Ÿ,mɲJFh2Ǖ x-޴d{u Ǯ`)Wz[ JNnD8HA#%HlX!LQ~0?3eGGR4'.ƫ. [Dd@½N*S˘RRR&lg<>Ͻx^`$yxn(Iyµ! 4J Όۜ&*}F/jXg޽|sLe/e(qJ*!2*ϭL ׾9|N\d)f qKN i)Pe#EQ-@3W7ޢOf:բD"0i|^Ĉ,p^dIJ RЌ3IRLDd,<;5+"@Thb*AK8K$QLY*eLiH(wb(X;wM0C"0HRN Q,ȘHF*N ,hta%$`fSpmE Щ0jlJQ!8)mѭ!WL(&d<?6Ƹg-F( C f^i4ɳVafrLSa'"A:)ZgH%0 PqF3XڷczDH=}@=?(3(J"5ejNj!w#B%A(y"Gq(=>bDɳE9]JUYz±dP$W %"d)+st8ˆ=>r8D9,?8 T;n"a#`A5‹f;Kәx;7UH#i2c8iY'?:0' mU  B P4AݡGh?ۃBj߈?{ܶ /gs*~Qvm%ujcհYator 3hhB] B'C3$ZJ7GiBin?Dv/y9K0=Ci) TVp@RWU/M [u]^њ<+ȟ/bL[9FsC0E-UAd5J Mz . 
u|4YFHp1Eo?ƱC>\Ɩ!C_C@ VHlυp(J2.{_?`xqăPv| >;CbQFJS#^y//ظzyk9+ud< uS;A:` ^}a3W_RHgӻnkca``1ּ$;l۩ \36T\2d8Tdބ80tUNǟ9TwmRYxLB܎5 _; Ȃn ^=?N2;ȝԃ]һ7:tAFF|RcA4cgboҜN8!ЮslqA >J1%WZ21,*` 7Ƒ2kn9wC3wUu+Ux.&SD1^ bW%8ŜUݼ]--3O+4rb@Z dj{Q(إBUKQt2_ٻ.Ú`[3g sBq4$0F`r#q6Zeoiw0ة7Dmg9(:[Zn].Fl;Ӗl-T [qZ^Ui<,o:7a$pB%/XJt֦pe6N*=B4CIb_-&- E4"V$ JH_ڭnKۖ7d &iN• `P/ uPPD$TE*K塄lUlDD`ZBIĠ0JhOry3*+LwviMu4Tk%X=j % uԡ&B p5Qs7Vr^gժ^4zZ%[* 4(Sq,F#8dP"H"iXST$I4KB/\ʓsqs}gvU mV>mtHRUXAI")MupbB' ˝} Jȍ(HsƉ>ͨVZ䯎)yD*+x:*Asy< cQwZ d*xX0(V>)O>=VZhsuC0_%/gޜhLnbEb*ycƏYMPK͍{jplܸ Ab& (z/iޖsTi xE"DԳQE _;[~QqdQMx(wX < (\Fz^_uQwd#|ߏ  x҇[ƭ>ȭskspQ!nq9ȹ[b18F^:l'{,[!Jh>%ph{W{Oϵ=ѹ`]٬`"QA:ZiK]g/.zN'&ku"b۹{<΅ub;3qYkq}UqufI;b;6nxlgGs6)ϵq5e&%uYzg\9Rģ_"d݄joXIBr)~^]8˵>vŠv;'x%kMח)ꐐ\D}dJpu=& R;*ng Rp<=;V|"%SJ\|˾vӗ|1(#:諸xDkͿ]IV|"zLqs)poZH=xuZp뷧T|@fXzcUm;!- u{χ?p䳽uFy.0 "F5:{AgJY H B+a7.je m&اm@!.˴۪ sw{E^d-s#NFJcjǡy (V&\hK[UkOD|P6tIg<ӛA?YU)3ݧO^ E%/p 7q̥cls}_#lPxL} 0r/mpR 2gC^4By*fe>K\*bBY1=^>.Uiu(̒pNSȞnd[ í3/"WY[{L<e۷v7viV`cPq5y=E(`tΫo/s%/_n^wAU|&Qa!n4?V"yT Ik7IJ*#T6/vFa(YD` F: 2s&&%6!Aԕ#Yڇj%+́4(U(wwGE$+ZZd11B+X4VBLTZs*1?R  jZTuqDw ^*0뾟L5Qjmpt@%5o~9 :i"}C)g5f͔J`j^/bAk&a8 z ?9SV\lL%ݼ}r.> 0.שIq۟Yq4,Bwjݶ2ȇ/pf˙R$.Ipl'}uT.g0b፝t PO&S<^o;O0e*go NaHhLAHcxb ,h!Į@`Ք:!ZYcUa|ݛlzf8`҂߽ ۄ1N};Hp9h˙pe E]2.EzuNQH ~~л0SJ]݃;0lVJF+7aDdB(ADyX128 qb亥=⢓C"#8W9u_;߭[ s)+*quRk+׿+㔀1?~_YQΊzpVԃVr8kBXk 6dahMEA`b0R)d}mg!3-,'Οn~EX Eh3uSDxFJ[F}S&2)U#3_t4=y%&\ǁi:9mG yDfMq0 PSYGf#-E@,CU&'!Qf &^8*͚L{02_|oqo"gZ,;, ,|-oVcݹ,St{rzn6ɹf^jj?6uM.ld`z;9H*Aw+qգ0E% Ϳ[v:2)bZJtbrK-F#Mh<@Br`<Xnü)= ޢAΘqtn+puJSbCu*L^1dݓ/vxt%F@T!~ce0_4_B)+]S[gf1uuM)Q)tWoucXλ8 ?P$ې()t p^m }e:;q8W*ʈ`9V8.RbpBwwT.uge:\)yo]#K* rߧ6c CI")MupbB@f<8TVQX)#7xb,f~Pb.ڄߺ]k+K,Zu#6c[ۗ]}Xb!@isl#DBb 91%\DŠ Q,JP9WB]Oֈcl8 s#ٻFn-W f.J!@?0%6b[ِ>$ۥ,RM;%#!\ηcFikъѢFһ9$`wsȯ# >6 YۋVC6y uwT`>:3F 9WNCf QcDiȃmP򰰰p/ȣQ~$Fm1s?ki5auŨکE~8|ؔ'b'ڸWFfJzurhD}?,9?}JrEӇ4[;xMx7p0jldN; YQt~JlZ?AnV2 (үZGPaT9ZRjx%fʢM}#gTӉ{j'2jVKUO(^Y ȕ4n*m+ LBnC(BU\h䅂 - Vss 9 e pC`(!%yi}Yb8'R,R3M@1)9j:>=  xV~.>g2|j.ϱo;;[|-hί1cM 9`eqέMSzhqTΖӯʧszN[?nF)ʭ6|aI 
/%󅼡)/Jxۮ>ђg>2YNfQ9#!Y7!rl39!n%Jwl~w9zć*;?^}mf_L ʛ *]7C"m],zn GMآ\48a WF AXY2TUe!\J4'y%9ƔBC]W.It}VQh>E‘r_g۷}J$oI^Ҵ>Yɺ?>do/{-^3jӣ"~_noKY%ʫ ''u0n=GnjZ,)^w)a~@wuR n%ȵhD,8Qzȉb㮞QDpFYjB E%p{Pp,ǡT^=Q}5Rz)R Vm~8|H)ZèHW[(\74w`lC50dz[ EZuW$i&bԨ -7޻3w` B&PҞ4cQĂ2b` Rk,HPtmywpDCv!NmmnVC#=G%ʥ*O&^DZ$UumUh,A~~:B'?,;. 4Z>`E};]N!$Dww]_z j{)!*a%/4riXsQd%QU4|ge.T樥?|"_mn(ͯ?Ga3FIdBj'kI$1Hc B]-KpO.LFŇx }iirR[TޞywJyfrYrhO[Ys gFENyj (vC5aq_~tW 1(Q5@lov1Z)dHhn'<[~u-W_7bA^zZا/3cgs[9"?׏╷xūzSwCN-Z(5ZRRac^$b%#9xo_o.&% ;b[uJYB2JPAu3?w+\,W>5іq0U,xPB!FnZJk5\PGIhAI;7 t‰IelHs., #TaL!MYZT[і%PSU핅J4GAt[a (徸dpJSt@JMB5QՐ#jʝ8 ,mǥS: J RX[R*0q+ *lyܾe$Z&"LN V%w   ǭC>pvB9ՙs@SZQDmn=Xr3{x,pSͭTw 7c(tcNpӽ{-v;alB0&G 㯺Mc~hv𸊟0z9^aD T\vdӂ@Prգr 6PONaoP%x a0D:S!DJW3p8݊~~iPֹ% hDc㜰rh<Amd҄#c=1CkHğGq{_bזO,VT zRAq +#*h.]+YFu|cJ$4eF"&$0ZOQmHCXE~ @&p/jm_Z'u:ڈkeT+2%g\U*!&x ȭD{εHWTq;w[oki㥈R' W8%w{VaŎ9CPRySk259t|'8@1:2H֓u ']WL@ INW5̽(|}ND.1z-Ӫ?cv+)R@d_B,_k0pn۝j3qt\0QaeBg&}e%I:;ma!\rյ{ y^ G j'77fz"όۂmmtBjCj][Wg e@-#%:Aوi!RkAAX5V4I5WZQrJM5b`BuL$G5㟂_I*m@>Z?on[C`r T(jS),W tB)Czavm޻Kk prN >1b%vPIC+&*BQ] Ven*k/Z5u[!*Ej~Ѓk~-hW \jN1( #pJTB"Cq"1ALdO;T~^}N2 A8 kd>E]ˊ\tIIu'}0uQ0.Y[XQ1̹d!-[RX ,(- ]hG7l^4[C5n e"=vtbQq"i%F"rU Ca7Fm3Pt4Nn )NiTTBL:h7-pX?M T}iKO󃹞KƣZ]ÞΝ_'\jΦwv- ͇ %Aj|/+usZ\^bL[H8.|@N@& 'rEfĪPTA1C>#6%9B3G=zYY=p~c*3KkJS hx yo$/;b3cfe2ͯgIfmcfEX"섗]o?ffNJl~w}v暝vM?A[@`-l`(֔`=Aw#*-+Oq9\e2abic^; Fv$RUgt3ۘЊ).q{*ݵCql;Ȍ:+7|v<`|EoOCuV"G֔<[KŇE5YV(> {h9F=vPCQNCƨ"Ƴ"= 5x c N$4 ]'P8 8툂WGDf Jk1_DY0ZM +'BZV}WHЌu|Uh4;Lȶ?\A xSSx>mgz -hW'5б6:^]8BS6hReo9pIZWҡ2FԢڍ* GSm(mR|J/͑R^ վT⇋or}=oˆ+|4!M+~sӗ$7$s_?0W~;̕a; wmJ_a˾ e܇Uɦ^eq0FFUkzqpN9,+%f"a6f׀nHIe&,KG%ԗD42LɝvpyL2xe[pp[B`6.S u *z!_  f340E6A~;JKǤBH(tF2ϙvf c(J(r*4ThwU'/IDx5Lu2MJA Ug9A`3c-l;䈊 m( L`\(t >+͔-3DY%s9B 3c\Y-)0*Ke1^ލC_[ijՆ.Dk~,u^T D}! _n~pssw1#DG[G:/ּxbţB\uH:35\o0>XE[cv Q!"ŵ NJr x3E#* fHR*ٍ2Z=2$aP,YC wj!v!8Ɂ4VV'!I>Ә/yq³a 1"VpYn[߀ofzր. 
[_yC O 1 8\/g~0].~1ua; s%zŇ_71׃(ʦE ȼUK7h`-]n!*!{5VD".PD#gQ(WƹV֕O^A./k `./h& eTcU]MS 625r`F=/NFS quf%=pۅ%q< @Keգw6#d[?xV}zFInF*rd ?OSZz^Q:v]mΟ8c9;RYZ ] ꑍ^v~*ړD?)C`Cn*acRxN˹>A33!_-3bՊq3|ƳQv1VV(cRD. k&WlLԊ&W9d4g/9b57q]'& #)azt(xK9n iVQkhh Z0Z>C+oO^" FuwYpBC P/mn(ٓuŁ\M0~B`ܳ]Nu[],] a P5+\q6<.lzy5]_VOv~6K-Dϸr~Dܺ)(ms 7 Gwj29#&r 7e?ƤD#8"9^q]{W{# ]c{ſ* +wJ*:g$EPURz5!31=ucH+vф=KA/T@yÞ0/ `rW}3 o'ֿ1_S43 D2" @ɸ,Up$O_Fڵ:bԧM(J_ϴֺ5WA=Ω1xdfNIýsG:GrkEn#:2I pj"ٽcD) Ii_% S}B0CV[poC]hykkXԷ v $~(qPlAˊm|2xvN&^~u%{= r[|d}F ܿ2J}RfRYFiF}+E[s]JOUO3B'wzby}FY*Ya+_ƊB; yVEdc~n񮓕?RgX՘:)w׬*qws_sV;j2*Sq/(GhWpdΙYIE *d;[v,?D4}}{U \*hZQJ]4ߡ8yV<#(.+ϧQ^IC \(Z(KEuKp[SF";.T=d7[o̭Z]nfmO-ucC#NON (M?OB\ 3@?7uVn瓫8$٢pl9.S3Tq l,hRC(MdI&RRS)-)S`8*팷Ӑ` h.$NʚP~̠%lNf?ƧCr:\/Ǒ_R(PafcdלF; &Q-xmI|a䔖Y2vbTyuJqSΫgC*RtYy!K1t*LMo*LA&r%3{Ġfxɩ֐y0%'I!ɪi,L}ِLgtEH^>ɛ^mMf)K"ǶJǤl6P&歶 &'uMVQ 4i*NW5.Mpؠtd8{5POfIgo-ҮӮW;j2OfBzAQ崛\~ul](,55*Ưn-wkI#>{WUUiU:|5 _:)Fۧ}Kj Ŝ).M5gH)u~ȹ(Ǻ-:+?$k̦+cNl&{MYㆎ2.i/ps c(kRp"Ep%)G4=bNӳ(C,Y4|?fGvY=Ug/7M=.'!yx6χI,pGL< ! \+ %XN^"2ke2"mTk&IknװB00"H2ai-" `$QJ"åKZ*Se8Fb~JzG 'Ќg;fNeW |eT{{O$r(SY,g(n5KZR ّU*GӋ1K^EK)VJSiJCVZ2 ZU8A{HR9Qhɹ}NRƘ)3Ja[-m\Uڷ!U˜![O#wj族C8EwJDAْ3*ڹ`4XLZ<39X`a})+Üu96Yk6.?_Bw*k%+ЫTZ:&P*# L:de32VCرVapBjB]/t_#~ͶlBĵWѠ / paQIMgÿ8@!21q9~M#I0gip+q̆7FkmƲE`"/}t; D5g%Y*IU6K[*s> fi g"1WR1-[_g9qhS!vP}Tn~-OO//pw1S5r T ec+d2;l\Ɠqd- hE] P$%rI+1gȅB:Cxد{q1kanSf5 ᥺9:Ib*g-;w(}1*3J;CxytHThMp{!I@ɲ@Vm% PZ(ѫZ dKT5h7z̊3(nasDPܯ@G(z֣@}3+= 2.DZjŔ'XKޮ'2uޯ3'c5=,p6|վ/fq5$WKPɋM1'V|Xkt[>s>;)";=~;߁o;p $!\ 9:Xf)GQ4Zd\Xk2A %L)"iZlCQNhf2CHJ&QoQqIDǵ5k aPT-5g g A9QC%XY,N04Td2! &JHb ̡NH8[bFÁK8|g&SZI,L Ȋ$ 721P.@ %Q/n@d"/Z\D*z5 |gfoss5B3qaZ<=JWk;CD~wO/!yb[+WO@x4sx1Ft\sbbBW)7--,&Oj6}KSG7&DK"aW.[K^ #_ÉUnuz*"x -1=VDqEHpS= ۆ1vv>v;;f}Ui=’*tm_|zftۤA3ZHVSA%c 7fHcC'H,sks%4Y$4eF BUiEH(<ΐ#%ZR ͇> E$+oh<>hɽe[;xLxQaCkn0n H>O'ez x_n :RgQ+`oIq3*AU)g}K&&?@LeYF @)\:,Fki$3GXfX1_Q-JÞoؘ. 
jg~Q1hબrB 7ka[04)80x;~2@m[?p^7(m< *Bd6+!DB`$1;vos,쇙Iz t~:ܛh4/l/jX\Z:xՃ7 n]h#ooޙ%s-0ugF[Z)`m23M7c6rVvd a>k)}X=^='q(' .VU2]n?u"W.&<\E!d~UZ|Fo`=ŇnV BωP>jz;87R@Qp~{䆨Rͼc:wVe+`{(.sq@r_KO~p@^}&|?yZߜ ar)a\qt!/ \?t0n<~zt۬TTl.Sf h/RZ] .j$<]~@[Hq$;?顷^zCozh阤*eq"˩T,lfq%bT yas;I0`؆c?yoB?+`<=+h>~R Ri TqV׃z^bUA -8lM x姅#Clvɫ}KY-X*u&N b.=0R$KVXt70oP ZMpP5ZWնΖ;QB{oPz*8|c$a IG@Mpx 83j1"\/8c_TSĘ$9N fFAӆ=#U{c}F;zP Oek$ xC"ƌ&3KZ[ѝ>8Npap1NxVFׂ ֍g B{iDcy T!n.vo9sҾP2#B"e̻,@w5R&S&Y9K\iQMI )&3iL8*SR+,K.(GîIZ;tK^Γ҈uҗvѣ_{=Eߏ~at~@g^ Fϣ;AG/tb$ ߀}zZ4oK4FOg&r+x2F` {Y]Ӑׯri!b+D{ӹ^ \+O PIۜ'V: Օ4Jm>avVڳ/70uscz|z;ku zf/9j/atwjP[ۯ*,ys"Kk$^.|-EO*Ӻ$mxl$!oZZkLއ*&K.;;U1[61aP[CAbX6wS$ƌo|)]+6$` 1H ;ePqiц5+;H9k ~jMKU-H %|ƻcd ܬ 7,,ظPk^%)I94}$l^?-Fi1b#ɒr,E&OEbPD6q!1(¨ǚT%%vNaIb(HEWVQ*~d)nYއMEL8B kkx`%)vҚ(@%DVJT\yAs мjs!0)#F_9@62hN'EυE;kL%ʬSR,2;3 Vhqa"10q .->탻m{pz%Ņ_5?B88a*e2? g5#339x,!ZTJ$Ib毄wn(: JrYxF8+.#*, dq1"Ӗa8lH{ 1ICqPb{!r-Fg }Zpt&>I={xd?0?5shSiXvl6s>+eDJ@XR _3c6|籏N+PNʄ ZPkiO` vV?[ ZvE12]0 U}{k `@h1uJ}x6WPn1`˛d76vI?rMQ,EywiEvX BNluveTvR[3 QsJv較ZaH7$J6H)z0Al=;y<s rk|6 =~RiJ"xa P6yt$LtjA!C.^6-aئ\pxJ~6ܰnA^QOɉB$v=(N:v@~s˻T] Fg=O Ɋ!HHl)iEiI:$pA\> S(g bqZ})E)"Em㭁#Te4[_dƍ>Kg:,ƒkog*b-IU1^7pS}5+Z#)Aݩ"ꚲbQЮ.+(;//ŨŜ\.64!t\' c,a[KX[ڇ)&[{KA;hhe{ լh 2ήHsE]\zu6Dl)nZr'rvNNj!"JVi=1VmAWwM7uC\-.V{/vMi"/N'zMc=XQEl?cmgsy7('tu >}9eO 7r\!N@bАWѓݳvԅ+S7^tL$FX=mD6;!-Nՠd8:xT5;U-Jhw'TkR8 ϛ*z*&,^GUDd]W^]:i)PPL_{_A4єPm-V!%3ekݕXJɥ9DXiv?$69qD,> ^c<aMɵ\3[9:y[eZd18p;~6-,9{lLW<'y vEg$[*#JǛl|*fQgxZ:P>Oq׏)c_%96m< i$(%8%T E9)#wHNS*K&Y\o'%g>Ѥbie %XM՛bbHN1I 7+ԫv?8H`PJ(jV@EKaKETibYh,f11ir" e$\vc@ [tzUe'vI{z`5a\ dOQت5]Ej0"(Իm#Y_l *=l9NV\dj ,n(JEO*^D8pR%KwTg^iゝX>&x:B6r$yc?c>p$$^*sI{y'qB#'v%N JqB'T0$sq%>N8tJtt*Q }PH *8* D rD8* >NJGqf8ґ^d} F>VmS>B̡r%K :U\Hl"fP(C1B3],%,L^Y,@X^+΀9O)FPa vCjZ iL7Aw2Fu F'W{v ļ]O~r+)58uLZB&S҆~fiaW+qlj/|G!Dql!TMH1¤.K;`ͺa md@ե4F6#;P3ۼYXګv`lKzp"]c#@MdakpK!>JBT)~qd۱5 q0}d\=8l݂G3^?x_^}Ҕ_pzw BN*EqPxSU5D%FF4bF(6R2FPʪrHx*^jtHxǼ_CP$>c)5Y1ȤDD(*8#\#.<P+aL4x_?#p3/xw^t $AE hxLݗd3T9PAWHUs%zP!0Av:s9Vr<^Kf("m11 E?BL7 
1@D|><MWqd2مd[^&R4.ҴF2b3LY1x3Vw*Th~+>UzmYTdY,d1:`'ȸ03m$pbJ4"P4f쵺Bk:su>7'!iSҭr)^8%^cQbS- "G~ rtE0XSY[ e! HHB`԰c ં I6L-UК"2rH>%A8B\]ir}#G!@oW t Z5]$~l |5o_Ve`h%W;Cg/Dᯮ3lwaONjׯ0zlQo3^ W\jq*n8JX3)/t_\"9>p7}@l +KP BX gVg [&, w@N"Ɩ+ $Jg6aqb+B%"8!B'q%!e, ;,8Ds~G3h(٧)C}]Е"y\ޛhss0Z_98}SWx.yMS"of:2 K]]فa ۺJ hH`ff{PƳ".yMZ0D(Q- 0\drrZ8I2dBLs'u)4-˟uSTǷ~fI1OfB$6{̝YO$4.Y]l"AXfcH)uT2XJ4sF6{>+}P A췬M|3<5YJ!fUZ-ox$oSruxfOLtbb^{x0?<_15Hpޑw%\>o\z;`Vgfŏ$G` )pV?#F)w^J Qq>~d(gw3>[^gT2 p[,PP Kul$3 $Zޠ R@rɿ C1H-hszݷG"{ Lzl(F?wg<7'Ļ Rͅ*Q 8kT Ŕ9˼5E^js%z5{Q9GI+Hg;H⊓.!P GL!cjgĚڰnr!-ۇG6wt}{J;e7ϨFRڳWlT!"We8+e)[Ţ{gelqi[ۓv=w;}hBɎoOpעdR+hV+84\aSՊ b5D35W ]8 ŰWˠ(i!H)# +>Y`Zc&xU_`w . Tq] {!B`,DIu["SKRvFO$ E8]B*.痖2V;ݔzQZ&2L;^N}!eف괥D1alMOt4Cda) ڸUxv 9pj%C07`ǣ}+KuK7oE!)!nWbG$Gfњ2{+DQUY2)F*Ai&Dl]yGcuRN/rJbd972bXHXG)RfDc?ukE/P󖒔Uafw꿤&CHj!LD:Q/ Yɾ;I-n.1 c>8 +2E"ΟO{_23i ֭wy#ʶ%ظ!jBQtvy?-}XR }K C/>8o> I\sGj'JR[^$nOY9Wm{G#݊@N9 ktN#LHՅz_E0BϊEXN A_2΅pMZ^\{B\[ 7JQ@0/ŏ[-^q'v6w}[&Hc~M~,d`Mbnͣ'Z0}3~z0 y̌Ƌ<,揋;;d7ט+7Jju}gx~ `<6ALl2_]_\aWBPHfԚa )T3G#~c![I0>'La*74aVRPUD%%Lha,5Y%VL!$FcUtJBr7c9bAiuwiG:Wu<fN¹s#j!0K+R7.8C)9|q~@,/1|9}ro ԰bƍs '@2|(D|'==xВyOx/!߷?4uŅ=zBBv`-%iVIyᒱc;?8_^,WT 7x!yJ2LT| X$Иbʬ X*p SFbS!A4tl5U Y0+|XU}YFd',5PBA$ 5DjM'B`'0r3t¸T C^k}CX_.W xe}q rڽr5fwm4b'X`.!#I9[ԒZvKl2cYjů*̑d$sjLZpn}tR*W6j% MZ7e T"`o*:yQd%벸 ;]!V/B,}/nv}vK5gSzkQ*8ՌBZNPDݓ|&$7&;Etk콠1E>8MA*c<,!ia]fGy. S"n<.k71"('@Yku{q,i*#5FMDB%#b-FPaU*r04a,Ѐ\G +mP IxEv;_먮Gw'^ZX-&j ,S F!x e$|ߩѵKv}~Vy` ѧ}gD;=?9cy0Mx ۹:OqW tY~ G?L6Q~?c'h5)TW09* 𘟍ݧw}±QbF"FKY(V)0@RvdI!̥ s rA֖)uRI%z5ٗ.zJIJ>Ȑ)7O:vsIfD+C3ki)d6! CmzkH;KўK6oH6?G'G[&{InHz:_nBDլλ?_o$9HjEh %"RBSi=ᆧӘSg:#7)ʦQ̅ؽ"mQ5 ?!_|$be4)K_RF/N?]ޥ3ϵHrLdO^q;$EɰTF):Fl:5bz)Ҕ =@nŊg"GB@ 3H@JfkC$ in ?e@GG 9̲?vRʱ_p[A8Xâ' Ąg\pR쓵Z\Ȕe;d)+|(Nز"l 1;g z9qfOn!g P{SfJ%lʂ#U>&`Rm(9 QSFLGqE$ 2al3nN"ͨ^2愽Ps][sAԼhT}FnJJrǿ+Ujkn\9D)[Dz !qزv7XjDƙog7>4. 
ݧ&S-by߆Kp4yE 9l{xf[ҲNmі31XovZ%_"qsH fGpOKf/ tLJƌdn9qmw?{ž]o+F|b]{'s%ƭ쉝skq5gKjc]ɹwz2V9 RoX+YsL]Q) 9$Z+UyWKf5ՓnO%v?A;(U׷z.?ILîYX8K|wgY疒x#®%s#8.%z{~ތ[9C=&luv({Y*84}nqnVA:o>%LDrɷ[‚#k'[օ|a3ÐiR~6]]^L.RvV#.8/ %G ġH'z%46Z ،Hͩ&JVC ^ZJR&%el|}3 ^bvU%D?a0:y1rSOVhCTZ95tKPg`go 19-A|Vdkd&><'"9!/e(_a 9aeOjzaűlz~pjM Ot -!r0і89U)gmO[}σN mi0uQx!bCQsBX&UsDZ9Ѫ,J93Q;SG!mQ@~| YX`L aG9y4BcC)HJBEl^ l4VMt,z=y-!"9>%DQDe oѧ}Z"hj;}Yh"3ۺ$t}l+z$49+pBt8 ڻԞ<XH.gE^*VKԞ]2 dŻƍ>[hB^# I־ Y֣WJĸ8ȬĸJmI[2|)naք龠TOG;aDЄu4H4h)gBŎa0I U# $9bg<(Ll 1cF$H p̹xP1]Q9 l-e/ÔyzG`0Z0wHB XԱA8B.ėvWNw2j#wB($!R(Rc $= 1 leJA{1=\U֫-NK\OE<>2ҁtH Zzjx׿]! 2B8ZEJӈ'3X؝Z#8 ʴ( k-^ l-_&YkuljX'x&8PB$PZƇnZԂڣh<`*,XG: K {x<#e{+D#p< IgVƙ93h#pй7Xcڪ@^9[YՕ-|~4'󇻽! X+U 1l<0D;Iaɔm>t&w$lN)b&fęj,{_p8PM` GƏa'u^^\`tN.gKh)8 ,oU _=)ꄧ$Ȟ#!Er == kWEZIWWKTv@yGώ Q G?OMZo2~2>i N6"KR!JdP PN2Axh Tـ}Kǒ:5[$;#e19ٛVMn} ]ys^!7WoN{I w7js^{??}-p`x̽B!XG`pY*;x:Pr:֋|ꋇAx3HbTФHy>y!Dyq0aScD@6wC%X:st1p¨~s]{xw#f[Mn4z" imcb̗VȇE2-ho~"D*v:[_Rش-ٔ(rƃA2/yH9f(dfhT= (bx1MA~}X^һ5D=:GM Ėƛ/V4ᦻiFa4< cG|7O 鷢-/p" l |ؕ@nXx75 |h&rO=,xzQh]䟷ۏ]>rv&^|5Y7Ss~h BNc3Do6*2)= hflb˃i^xގb_SjA"mnmlzuh7P$Q |BM*rBb"B_"^RR*3Ys@T ?:^YfN G18mօ49ة"nooGέ;[i9M~9QE+T !* RVqL \ Z`)"U-$(2rvB]Uu&%΢ 2sq+=I(1^ ^ @ɵZ(WXT2 m1JZfCiN6!}sIzXm>U%zwmo(v.Lڱ [ 吜2@BeD[ZRcympG` 1rbbB HQq}%>{Lc%\Y#|< f<=UGsf|=Յ".ZZvgECs"cwf7 O }FYФO{nCgE{)8A2Yo d-xJ e+M&'Ɉ, $%$ EBD-:;۫nM!j)t->rmX7oym&{G@"~7ϯ?sӱO6`"{O/DVNlZ┸sZy&N'l6p{oG04 {A~D$4 h"9rRkvK&=2)i`hMPl'^H!$R%LolWfMs`&~ό=6{|W/Uw]d_h O!@+=ryg 1ۼIRWHa ᮖ-2OK%h^ڸ~Vl/sQ4/P@yFE2$ 3GfU{5/ps}nDŽӅ3fY:foR{QҘt\ ޥ8W!iI`piOE:Af( ۏ'<;a9ߋg0Etu;ȦL  xւ5 N92hdHe%g_ k4r@}i8//p,ϫ.Dȃ#Du]G*DWy׃Pq^-=xXX3ʻ( y =NĆ&t{>JR >stX̵#gǾ;B6,t.p"G@o=CЭ6Tl琶=? 
bB "ᕡC\tL0vtY}zsY}˛YH(VCѥG,a1wg1!z3}Jb'6!ɰ k}_Yb^^=Q(\w\_;&Pm4ˮ՛.f^Տf23ܛkb+w{O?0a6_zo c.oaVNxYs_/]\߿[d2hzrȬ.+LWKn$@7s'ZwnnHKzwy->Q8-1x奻x!?0J@e)mh !eP{r}/D|^竟 ߪ5#"ʝ˝.NN`C;gD OT+?ӣi:"c;@a m!cN}T ̼(^M'aK ~x{xPj)j19)+khvCqbQ1dBIWQᵛ#5]2&DzO;f KSsvkd&;ԭ]=)pJ29M ֦=;>ns9`0 EŁMDG`Gpej,W@Ъb%6R)]Fr` PpS.4i28[jF G˪Rp'-4AfIOȨ :d#>i9`2@AJz)(Qtppl)P I~ʓHOkAW9ؗ_Z>/;}7IU-Gi,JEsYJ-0@Sv &`Kl朣RӼ 3wQKR!iBi9JUeY(JJb3T7 J(!fh(մZ%T:MzIZ: l'6l-7O+Y@[]ZUa:Yi[]/{3y+|eRw+s],NEI^Ȓ-qGHa 5\ds[ \ t3x=(/?<{`{~4=qb|̼3}Xen&gbC seČ'b~-/W?_o-kڗCPprТW2%PY5mVp@[S~dD8+܎慙1K 3>⅙5bavbfa@t[IK Ї)J#5 %W''&; ݰ&'Sa/[1je%M{cĖb Y0:]n9jLiD%kjȯ3*nzC9ǐF47V5ێ[%L^ghٺn) ?U@dˀ!DD:1g,8X[Ilqur"HI1ťJScׯC[%LOfH62_7\enJEa/] kօ-IBIڍ쵳pmzH"tَ lIQ!}QQ8(q˨gO[ $ECq |MC$Ⅺ{jA`\ rhn#JNJmKZq0(4ܩ B$^kE/ܺmRP/flVSp=/dqltiL~0͐JV*8Ht5"R~vF>ce@1i3)&$gZ8Nt kۇK6p/93޲%=6+PBeP:#(ϴʩe@SZsp}ry~ex"YՏSE$ez;7Л%uot43ar<]Q} jyB,e!{τ!A:iL:q^E|7WUhK-=u~(>qOo=-Yc5;I&XX);MYWp$JΓaFf˹?p%EH?ؘ!X#KuPJWgnH9CU$BN ?4P!hj!R5wԃYȁ8/+Q#LIl78gb:9!ХCOBՁ' "W9G1:qB!᧞+x|#1?эRDL' G#C⵽C9OfGwJwVY>azqrAnu*^oV!#*VK ?M†m SIj衪щ*$d='p "2s{C 78Mw<ߡRr'?'b-:TlP‹2L%q!XBq(8Yco'46YHIXr ׁ-D2GQhń vxAT\?H\NT=z&]DC ,)'Ԕ)y+X0%0%%՚ٻ8#WXyD^f_dXzBVw8"Mf,7]};lQŪ/"2#ځ45ҝB N֞Ri0r}h!ką&&bL;Fi XVPdbOyMy`d  GN)sEwv֬Nh3l-`7 m+{ øؐ NAaiO QS,$nTWk5n&.aDI3l\:,$»Ʉg=vֻ݄a_,[*N'{'^4{gx6ei+;$dh!k%NbܨyM!afXQAoJL(,-Ȟ~h-;Xz~ 3޴Ö_oC_4rwOiYf[V;ȮQi!Er ~%5}fG"kYRU-Uc,䅛hℱoU~_7i5k>+ׄTuto7-Ej)CV;E9YriP-~|KY­A|YU{e! KwlԔ]d{ܤ%-/^gu[oe,7tCj]"ZUN3Zr BKAIZA17?[?{gJij# dqקӯ(u<{Tݔ ӏo#߲ߑR͗1S?NW RtFڿ\$Ni϶|/ZSRŌy{u}A!!XS^@*:>UkVJs#8hk'.fDǫE_ѐ =+?jbT V- Q̤e}C"`X*C"t'꺶&b UW tg!w:V]6o1{(! # ?YBL:fCT^":$߬:(vv/[ۺjr'$d 5ևsw\Ws>Ba$ -P5osۖ} 4aec]{yIn˳E2NKRqrѩN2LBAe^StNOі$Lu)խNBLE%"%i0TхH'0g Sd s4 #Qd ##ލt(g_yLp! 
EY罣r:YNfGCE"[L\_E1jl'hlT@3i+$)Sҵ*sXQ@L(G?tQٟ5"$QӲJ52w3I/@Pn$ WY*%j 9C -WJou >q" I35d>8$ H.!&N9H/C7B3Y_BEK{|>}X|Nb:q*XqA@z9F}?.Yi-/.h03k hqll//hD^/ 1:/Ћ@#0@T) B//!= y.BoƸ,%FyMu@CPl60.q'ϑܡuHvq4c~m\0VVn4zoXtwkU힖XvMͺ7uo[Q'(,j 4HaAH\"J^Wmq[MS*FWW@pPؼ|XwkA$rDJJxFs8J"0:r/ qBhJ9:cƞ) DcRGaI* CXs)8AC,Y1pe3[2k0I@e[` @(D yv$'Vwwm/T.JqPmqP៘$wAS7!zO$E#4=٘Qdx.F!iP &$:"-(d\dw{`HN;M2/*#Ir`:nO$ >k'fߎĔޟܗepdW] D 8 !P`*O`&؄kV\rk` 1][O*YY19vRM=umTJ&[2ip$pq92$̛Xdŏ_ 66eTĘ`\r:½ Fc b BQpI_Ia,@D-.8<>OQig^]~j~wO/W6{uGj19_ )=+d3cNrG"N)0|zj 9%Uh Z^y Qcp:,JD.^,k:=&voÛwJSxz\^>|?[)D-B@ L[\-68'?ի-OCKwcmp$EkTtjee~TF1vV[p?p? [znRC{ ~DNmWDb؞tp~O(r69&4dH34OaXx,SN4@eC~,lEc zjF|WTE׮tڨckb _Q>]<5TbU~x>x$ʴuukP50-KsSc~rM}{ 3BzZ;߫5ӿSO,m=Kgݑ S+Txwĥ ST N]8$Zy~TcĚ$b(q0*&@L 79vG f=QWHӘt7{T48Fc}Dg*X\k]|°ceak!u+c9C&X[S#QhPhӎXm~hU_j ˗PjΓ?7iYo殮Cl:6S++j,\ bXs=o_xfL/ ̠|1.[2!Pz ʲ 7ܕ+p) H2A55wy[l- YδA.EB2aOD"b).ԦV5$F Ŵ+YEXD_v̥!8u'ZAo>(855\ x/)3$]G[: {y8P2Hbc 3uX `/DBz 5NISK)=]-B d{&Wͯs{&[ZCsA1 #q3;(dF_,e–e–qŽe6>+<s+PkscΙGhRWkӸs[cޅuo1͈7# Ze![Qun)̵NF|EJjZTr؆,H[KDFCQ>q Bu?e|^(r6OlhE|7h|q|q *U+6q>w~n~IbHazNm\[GϤ$i) [ [WOnʧu?EP+*%m)NeChI_Hԭ+ڑSS2p;:mƧ.\'Z!x;`3%.)9az'tts ^P@{ 5g9yHSMMR s,ejKq>Α(~Gk*| 1j *Ę+=c3~D7Om>*J ފ>ߊ2u#z"ߋt8?.#b PVXk,۔̰cRE9ƲRs[IRgAZku䖚cJPe>\ b4sG<_U\4vq~{dM4Ǧ0AG+ 8.bc:Hn3FARLQ"M4Ǧ*?N>nBB -}Fw1LnGym y&eS|Ȼ4}Fwsȝ̻Fz!,䅛hom+r]G5@>IpۛO;cI)i[)䌃82UY*փ`:vth ƒסDb@ E"AZy߼{7}cHu?/һvS0z>87?֥2?), #XϋrA|sWo?|pZnu-r$ՔPɋSD) 5J*]Ou׼` t'0$ ?dAAYѱ>f.ϥ: '/$qV IDq d$YgzZcQrX@}Y0ܮĬ_~ DuT&0r pfsb%ǙEcO7L@'OBV'm)BdYV2z)\5~&q>|lN)~#H= ^Z/t^lğ2S̈-}[=6>{3˔dk\Ϊo민Yo|1RD$2BN(P^Z)bekq._hNԄO]-<b 1G'F4< n9,nilyD#wƉ [?\~Ɔҗg0=rOo(xyNvI N8SF^5l3"A|9fOL݃Zti!-i8ҽiyNOVo aOK ^G}Z|,ן/\d>I5~?"m_߄Ym)kwDPh|޷ LC~t* /u;J %/ bB‼aG Y 2r'q܃-XVZzb8yr(WwX U?n—I32%RBdB 2B)WHZ `f C!W1*"CyN͈EZbCI+ȉ)MI"itF tuD`D=_Y%q!Q&icP # Ce*R pR)b3R"H  Ɣ8[ـ\/0gR(,J"zA\Uz D2Z,yAm7ڒ\U!dD)IYV\T*fH ^1ܒπ uW2gZ]o=qt+xPF<>|6qwe&5#sfJlWF{`ͺӷ_)\I_Ǯ7k٪譆f_ ͐W-W8n2"齮^}xgBR_]cnʙSD[ Q&quEXfbM @1$5RJ͍TsgA76I͉$[uਞV%K} 8j 8/>͒+k~w#a-b"8bl?-ƨ5vR*v͒Ym͓s?TA3&ؔCLM|dAwv.׎MIUG~ѤV~6g[[xg%J+? 
ZJ$U Hk],: |u;^寺:O^IQ"*RRmoTiC%Xc3}pDNҶ y5[fQ]ok.81#/CW(PV2pE,BUm'-xҝpvkםEKy Ǽ(6l^n|;cIiU=*Ke>[V/le`4|8g-|2eT,)aT*mՕ"BhLY4gߐBha.Ri mw䲨D0QMb 9g~E mˍw%z#W'Oxqf 9k?_F߶-C6~R"׶%LVXf8-sAҌœPeLr*f`l#ؐP,D{^qYrBSQRIaJ D܋ @LT1x{6]yPą8a+@xcI|jnk N6SN[h  vZ|6*4'Yg>fRB?տ4f[aY {IeՍ>7bNj[fo`lTU&9Gz%FIëv 𒠁%;õs!3(wAahp,Պ{}%*ok*qi 2RUvU.(ˇ~i,y_xXXkco q~l&l̈́~2:['uƖ-ϔ}UiS|nϼֻy4Qb5#wOX]Mڐ+nVf)omXBK19rjlj)(AHXB-$3;M`o]0h'hbucXnD< vSFrTSDZ r;BTwz2ApAgSЫQĂ^% _sC<=8> ѤnY q=ςa1r0D *4)6NLlzxx66Q14#9BLk/:rvu,@'ŭ0}ro>M[JhW]Nj{0cbBj>kPwoTpNv 1uL2B툊cK7;E^P{!u@:H]- YM<߽cѣ{ R(VHႃ@2AMSB%iuؙ#zVA/oP tazFyg69~jFL!^`,wtQhO8x+D֔DɇM(5拁}G69ˤu-S[_&B$4$pE[cm]FhhiwEjw[u7CA˚N}M{DXCzz0» kx.n~ ǻoHhuxѕA_%E|eǮ7T(`da+a sӑ+lmãA-! ~ V@cAHwO+:'N]yK}`X<+}`cgڼ%l'lR66Z>cOݽ4xb_ujַ;iEF|6FAbA$e2εũTSˢ@Q,>E*XH ? ti.DڼV± ǝжMm\?G;!IC'XmyT0E)c)nIĄJ?8نS`i7"֒l8}2n66AAlt`[D/_pU?=8UfÝ 1. P?낧w1-n(2$EBAI0TQ(JF‚Lal Xe6pQ, tt3Ex5H<BQ19^`&J'KT$̆;$*N>D.'N>蔋6uGEvE BK?TNRo5rB w:ZtP}]Bw<]ưnC ]S9xM5Il>瓹vzHӈqZdAN[Nڈ2:8_˸q) $Bަ"3.!n|8x E`XNZ1(qm2y cJ0blxyb{apz"dnY5fg]XXzy"jb=67__&VƯH^_H]nYnXh`W[Ϳ" wLr]fEv1 H B\[+Q`(YcЂe<F ,/H)) p\20dXT\JzjkDvK|_V/! L#4˓.Rvc*#Z1*iÉ\/;HKLr(iR94KSRHd2%8bR=ϋrQ$S>},IZnuI?dCAYRڛ"]"S8¬82LD#v3#/P<>G~+p:7'lĴ+Vpn/[0Ѽac)4R4T-x#G9̥IV"֋Cj:0;Cp+.{:uٝ^/FJ\CSO/Fѱe] / 8° :g1MXgFeHm"ijHp&9^5 xKaeNUVJ8 ̡ ~~hs6Ջ71hK'û7܇hNj^8-V~zkBVVy/cqyPW~}XZ(^ٝy52gT"< v8]TƌY\W-+o֛peLqLKՒ2AR`3i>LK 2X 5PZ 3 T D@'4j< ,Ɔu΁wC .'>4|<Gn;1g;5:SzA;1YOy8LuJJa8[k- jxxoja$ZPf Np.-c6B\B!DY&"!"u#G݀jT@l2x(9XI4Xm5B+/!Ӓ( ,u1"0z- sJKcd&3` Bm[vT(QRl) |$ ťeIwnxl3 aYA쀭/j{K 2ɇ$TR0Ϸm%yc<,`E}S:|yX*@ݭ),f8ĐyZO&B(jƳ(.ImZ>h6fy ,]8*F\O 4J%Jd)tIy.#* (: yވshĴr\-g%xIqYpmȔ+q(]E NyjJƼ%-faaY]O(sZc^6"u(\[!BM=A5>qh<[E@O㳖DDI(PRjduIIi)P,(PNWkc&]umUYzE$ ;~}} 6Z9!I}px2^u -jӏ}IU9ş޳s#L#M4F^`c‹j ~*lǻ+;6}'eh-%0hb@v1Kž}<9N8h6ѦYw &y<)0PbA~e&躆CH†,8'Ah k Ԇ0 w aýP/M!lڊ槏7\iQJ Ftїf2:@(@:=ZZ!. 
:7=n&MW_w Ĉ,Թ;(s@v(/{ad uy9t0H/ʚ,Nl4/t mlm0/ex6]Vk<&NHČ/ [ߪJ)C}5$OqfKx쉘wߗi1΃./)y&|(s!ߺbvV9r߯+Tvl汭}T/bގ-)U&vvW%b~|t)Xݼa=?OSbMjF=ͬP(P=PrsbbYd*g/Ĩ?^&U`zE~JIo _p+A _&s~N6h6οB ->"ׄ)j%xZla (4uicpz%1~v"\kOm&D{؜>RJ]J8k;^dpĮdHn`6Ay-큏PN#`$SzMԹ`vްH7 pHwD)i2-p8m~Y! `?y]bNt_^DGIDGIDGIDGUvJ(v4b4[`RB$ Th",PK| gviF]|o* $+!1z+'w{t5rP, N+!XZRS-]9S@KadSylHWJsk7'WO|g28~Jπ7w6&|NfR7bJ's: ezIݢ rAjWgVz(QAU4&Z ';3Hfsl|̣<_kԄzםf#vuJKN j1ew,7]yׂ?xvgCmM}nuMD<Թl_;JDᳮV;9f2ѭ(߃ukorãGr ^#nک*P2}3zLݭ .8(Ol=Antc+q|-.rqƳk49-^I yZ$jWIUTVi/6E+e{c"\331'0wq4QO!ogkM@sP\i}-;aȔ}H5T#jlZ{a[ZFm fHAiz. P/x:qwf$)Qۥ#7צ`&h`9mD4WWog}~dq=f'X( $zЂ,R. bZr3~̀ͪ66`\`Y %$Y .+9`ueVpyG@u}y??fl;^"t+O^^'5RCƀjuKק :o . [(C:;Y85x0IZWW }Pa3P y}Nq>gMy?8GRt뉴di~ho?>ƈf`sbT%=WUL.{N9N-sUjM5(9쎒;:ՆhRs9Pk-StDb BL9F@e7,oF ˛|oi6T(nTDwET4߽To~7JߨUZ#E${e\ y g5>(Q2E(4REj53*h5V!{zLQ2GɴUM'5 D+)cQX2@ 5H FYȗ/j& _"4SP1*=J5R}ҧElc%8_O p k"BKR,h$@"2U7(Z3O-e&HT6a`Z 9'Y $Q"JSRFi,Q,6R/)păr$I\^p|3OR[Дpl%K]y)r BTCJupki@*S&\}0?'K0'E%:Mɜ(ڋW(1ѐĦNh#&Ra#`Ov] _^.8bnzyTيV5Pj8?:JO&hvkSsE탛L\˧nq\A/ٜHR߳?<0Y>i|Ϡ@H3h8NgKqxwu~Jπ7wtx2~͒uLDmK\4>~{p}=C:4Ԩ/ԇAiPΈ6%UICi?rǠ8oj4Դd/U#w/^;|k5f)\t8Fhk #Ε63ڛKJIή3=`-';p!ۜzz֡]`#@R!3 (&I?SBl<>{׻q|7Ť"19;OH=ܼ% IW=櫗rv8׌w=!;=]^xOTpU0q(W-oMǡ|?}T};.n@SuQH Aqƽ',(b4hGyI~&,Z$qu[e6pIt&i!D$CG;0 \jp%YJϸ#iҕ+"mrYps=˱J72x(9XI4Xm5B+TspLK΢.QG\ZwOυ[8Lh0Jtl,.5 T;%uDQ҇gZܸ"˞o#s ;~q`=Vf% ٭ZR3`g4YUX,V%׊CfxD@["@k*E8~$1?8#Q7.biBTI#+ݯVGQאz%> M&Z}w-@bq&u{ơv>>}r-&Z5]jØeYu &;7QY@#GMT 蕦?댃 Q'zkX48+hBbaUó4ϣk-n0GMo;xhnHKqFk瀕`3^N;Ɲ1{uեW\u^mW>Q:Щs#8M_^PMKt~@r)SO3[|cq䂨zOB&^Rc|={%a蟓4 XFUwqpS.,1 ̈́L\-ԗr՛5S^J#1b6}HMbaNъ7`ULݝy"2=ཐlSVJ,l[Ȕ87|3`,v>h7W lDu&ڭPRE=/R!!qur*D_/ [i֡vsFt\hݺTܩmBo~zM\NngwyuWYP[͉'wR$I;gR< _0@]<YK)2=X/ߜS fѴ3')b.JO5{z?#^HEV,SvjmymDJGWp!J1^\tkat_O~"}+jƸ(@pIiň!>?7Ԋ^\'NFy3"oAu @\BA@po}Uyy| ;tHZMWUS`FB4'<1]ANs t͏X-W*⹪H>%,f}(/u"eD{)T\EcLb%'2bTT)TbJlHJq^ ^<mmׁN^Y[>I<|0Kfc"dO*ާh ˁD7SdjS jOT"x_ohZnG'rTP>99᤻̌J2M $F$l$XF+ iQV)4Kp'-729G^拑?8Ev|;c d'TdipV[]#M^Wؖ)(PMt hp5LN<{j&^j:izGRHS22<^1`aͳ q"Id'=(1„RqˤL Rͧ&53A`$p'ȵQ-I `%lI hS$AmH" & 5XjbA4ҳpb=q12=N5[ `D.EX 뭎) p  
C`-n5.\TA#2"BiS"""cx`BĒD(\`Ӥ59NW?Z~:tA6Eq GlDH)Gi&$QItDrc(/5&J'd„+ tA\Ґs; '1d%or*|L!u^d39k0PK( $NzW#,RRKmz/<ʁ‹]ZE!,B\&H D2$?""$Tb6i[][Eiڠ (fAHzTa.H&Hg4\\@&.ei[fJE4XK$$&Ħnѫ [[4&ˏFZ]dbk,-[7@]@@y]UԝdBW09"\ z9ˡ zD3<|NmEYw1߶m rK'!\< ILqlP`,b(9P@GIi+z@-CStywEuߖ[kγ;raqC"BEW9{z/=prb DA#HeuN%yn5NX}r6=?8a]8t~l?goe޳3dHQ/>nbqEK%Tz EH]3F!כk(}eG-ngWQVapUR'Wk<-Vv?fîGLn'⣚)_~5I{Sr]ӆupVqT({_pno-/+gd ǸX . r۶is B@)j\h{D梹;xpl:(Yv],B4IJ'gs+xw}!ԣV{$N'F+k3XIXa\FZ`zXvuHe}F[S9ڭAMU#7|6~5se~>޻`sgp Ψx:uZZ -`# &$O`S[̧ #)o[XMFx2dDP3=g1{2dR+VLjUTfD~ų.B.Biutf\Nֶh;սb"X^#שKWO`:C2`ڹf v͊M}.|.N|o- ]rrn:GgFR(!@R*!LM2(ϖ )()L"=HheKVϪjeA~ysM ,DZTEĝd(WL]\v,X"(D(6U>bQ2)1R)_EDp.)H1J+x.mMٝnImn;K܅jVlwVB:B`p.bF 2+dTB ˆb3e73f(Xi;0vY{z qsN ?jE]JtJ!S*1Vk@(3C#lo>r>r )pDDX'"l(ED5?j6'VPQ|~$K{ע5crh2ő́^"-XAc}ĚBޣKqH*!`.~zc_ +_k}*`׊XMF[k 8"vYO/ # %Wl#1 ="ڳBW8hhi-ǯ5!ku{.p"FyQNΫqpkf9uIYw `qwAd^-'OT fmI "[~vîBαg7lz{v%@-O.nmzp X0n7T (ljY1)A2(I4&HRF2HJÔ$iLADE$(F!lN۹LJؓgP"Zj8I5vT#0F\dѬ"0Jw>ݣtu;{1@s=CZ_Aõ. aj\:jiGU#RBa[:83hGϿTn,a@: A-/bkƆP!T&d2 .LU"bc HL(%QD&3<SP$ ._ oʗJ4StiSgUGo9)iJ V {=4 \Okϫlz9)"/_GWF*^R%HdP5;g<&I~;n wYĺoP֭Cqmz۾o_1rf9Mc!)"{zՏZd4MwCV0/lU~ҪEQ#L),G_FwV=83Gm>hpf>gުtY25Ÿ\}ib ʥ~ -O_銪ׯ{ ױ*_88+Han16jd#X)bY5FB2#M ~2<7`_:DDKS8\xD\G| e%+qDqzQP7Opp uJ8*)Qi= }L 8  һ ޵6r#K6i?3 @`'y VFG -[Բb4r7X*2)5YW&\ʭKjgK/087>撜LjI 6E6666HڨL*X{11dD%i e2Hd$T)$96or!Mw׆* "j~Q{oy)@7Ao߶껽cx;f]QH x*?V:|_ln*^\T{N8JJ/صw,fطS5--̈́y! ޼Airͦymid5« F;b߰8C'?RBwhoKЏ = HkjȻ1-ڧYژU֋ 4aՆ_BAkm! =K.f^~AVÉ+B %\tAOhr[ԕC/@>\>]_­3D-| SRӆ\>%TŪy'[_qvz F7SuG:B']a(xԛ i]ލԴ%{}=W p+fJIoL*RѴ+;b> I G:,noWEg6~cVvCQ' L LØp RZ(<^6VG=NH=RNpbㅖSgnS=N:q⥦TqqB %Qc'KXhNrC/6l; >hQզʻm6uԦT.hs\KY-f#!3jhW/y~>q:o'*"=Q!1/LIKRLq7w7/7܌vobm1q&T wl:2OjK9^}NxsJZzg$کwh|1p:bݺl!@+doVC|Jjk^e[#|H% KLfa+sh|zЏk?kXH8W!S[?4wqQ0 ӄ8 c(Ґa & dikpyv=DQڣx|:z>X$eM_[b @=L`ᕣ9su9(,ba&IP\ihC. 
qKKC\!-?B=[:֚*P(s /B0_*RԐ ^Be6$bz6S@ -a9/;|pu?jKosCj zg2yW a4/5r2Na+iVꯟ6o&5& Z'DXS3e)JaG,V*T#f`Q(aifpbWI}|_t[va͎WսM /?]_DtpZ_!8782ǭȮ]AD9^7d U+cSvΧaB b]l9 ymٳ?뫵 p뿊z337Ojvtv蟼5wO_|'#'naG ;n [̣ fěxKPr (L0!ɸ G\ɅLnOG+z`y8Df ${UxF!\\{P"Qe: (+ M#a$~^U" LEв5PT1o[R7(=@o@-|f SR؆fygW8Rf= :h+c@Z۩ϓOjdw=>B@JyY.+p@؝MܐLJ[,|ZD)^*>z !txwۧx>do<*=BÍ0~3 bZ{'m~Ǵs`@ 淃.MLB(I\}}y̤3;Hwy\%15~`eu =f8\P ]=0 dzRz+@*AC #)X+rF1 K[a[v^8:,&:rYt͔];/2f7R/)QL5bHJ,`KMU+w ?]ME܊"Mٯ!/}!QH{s ϰ0Y}SQK2/ n,t] f%4\x_6 yyň&A@skA ed7%P$V0Ja"F% Gk#tVʃ6-a<[Zr%ZUu ?}wVR)ˇ(Qڄ͖l|)*q6ݘa|Akpܓ ռ;.us{>HN[P 6# 5Y^bnsRA+O7ŧnԺ"î0j(Ӿ(hQOks̲϶Ne ^"rٷV$䣭sX2)dHMK ux*#\H]ݍV*Ҏ, ڏXń{k[VDZ-&+FVyL59"{K#{. #Hx)d\vwKC}7TaPZqþ8B*a-PզB \]Elt j7LWSBeyu)ejf4v8u>T72N˕Uz~28`44URJM#um:Y,T1NJW̢Alkss6Dp@+Mzy?~X/ZړJ ʯUփ- ;}>@jVꏳȸiq\|=Z3uS.eM&m.&˛]2oyDmg s: &;&\o((TB0nmsƾ\dw'yJhFRrfw&Eo]!lqDkNΑ:'vRM BtASaRI2T0XY'.>^Ca \C0 È(Tu瀧\DC-v\_'3;ګ;/WMVtpx9[lF<.Wi0MpRZNbXS5MOdYR/`q1GZf+]BCgzO\מ%)M3N8PJKYj9Y3K=1~NU1~g͚]!cbU1g`IM Z83GQɉ0?s"V_WNA%P9,?n}9g'A=m386y\յqKa-&!gK9:gkd_͚vDC9ٴ8W& G#<3y3MŠ73__ jmJp/CR{c^OXgl:Cni '>d]{}XߖWni訽F3з2Y5 ^A mՠnސZ vtכj`>ٰ!t"){g^_^&7 8$V'Q+U74;^4 SDD[zCL0% 1nhnhFBCk2?{WƑJ^v=6F!ҎyN#␭uoV@6ЍjJhTgyTfVV%;[КJqtw 5¬ɾ[64e7C%F' ~}p-q|%w=}0IR=R<\iIiXj@v_:rYvtww \) I.JR/:S^8h]V4tK'Llgͤr"Jb' r Dp!FNXƭxLJ``bp 8b Q )'+Ngzcn` B)b[LXDM8BDG Gq:Z'H Ur/ ̸Gj*QY/b A (GM# ! Q)z4KZY)^:(R##n%u:whu %S LtMj|JDIϓĖRmH>˟sگo~.MBc׭oE" :x'Jitxb;;Lv7Li6W"%}11I֦ڡ(a߶LrLBhҬ/ ]ނ}~a EA{l1V8);a1$U*ƌB `bS2Ƞ[Ӈ|?E 虇y.f}?? 
(;-D<B/hȈ C95L1j=X e] a^IЖj0%aeV8l +J yi9Dkd]?KluybE 5 DJbe4wP.`AE+tZՆ;ŊR,wYO\{W"'}=ZId8~ VtF".Ro?4e;0O /t,9;02W&=p@1>mR },cAP݃ vƆ ZĵV׫`ݥf*M[eY 6lJ'lMt%$ &4@}4 /eTrP#lVsS\""PFn``}x|sLw3M^> s|׏J{{n+#?(3DaCD˿ɪ-u'}9_IH=ֻxJBf)bK_Qnk)\)p 2+@;Krbzu Og"vC4J [[TR o)C I/0 CˋU6*^~\p x{~x nʘfȖLNq-w7{o&w O.cbz {skpl⊃skq#p+ ͭf׍zzlzKL>hbzcvȲWxsTrTTKRQ]9;*m8)<6lV<[!֏{OYsq-f$֬޸;Մ+LOJVRoË* <1'tB<T^P;rͪ$=za8j ]e0L+>dW ?̇`Y0NE-Pke RݳͿ_Ov3=LA/߻,%ܦMsn}Zٝ|e: bz⧌\~?bwU%S陊R* + j\DȔcF`ݪbPFtcvx [֭ ELjsO5jx#Zƺ/x*[R֭͒ EL DN>%j4W`ݪbPFtcv4Tw^Q]Һ!!_V)έ2>͙5GH.n~.%MFɨ`(aViWMɾԪ ŘL*>jh7z 'EKcpP^5Q] %1BI'4qB@88qެ6qB88E jB88NMdWM8fdM jK'x8VMЄ_ 8j pe8 ^9$Pӑ|w}x|@8I9mڪg{}z%+?QO(+8:89Kl 0*pp #\WLC1W(AM7<jH($26E+!Y»電i3+<06m  B`FņH㣏﮿?J)e `|m_°E~?Q1+mCfypg+g '2BtZJT3Ap..nЛe b n\+YJn[cI6!M~ZBr"h#1|"Hs" d u l$8&H{/QEi"3u]ٶa484N~$uInn6x~vP71,aX7H豋3e`$:6=!gJ!yݤJ.Y)ry}m7߇[Ļ͟.)¸Cfn~r&3U,d]1=Ri}tAQ TxĒVzI.`0&9+ aJ8s:b 屴VD\U_.r!w;2Gm 7`M>=مU?U^bvdJ푼AJ><IT87ADad1*Q iŴfā8PꩽցIA+bCK"G; ZEȢ4:. #H!XY4Q* y r$rC1'䌍 sBqmArBXÑ*0DᜍVCN;4.! ߴUk 8Pf\vHqn6UݗиO3+om83?>[.Y᧷O"FK2\?9`C[?w̎U "ڈŷ?i[5gBn[3a*Z|:{0%;@#Y+n;q<֑ aRqG8ǔ ~Jn<`9_tP0B3x*QT\8\g7Wm{c+c=Q,W?{R 4aY+ (F>H,}xpQw0w :Y%g `gM;(D?{fm=QLke;x yۀ>Jig>?8ZwyqL2I #(o;_\9 nB ȍkM]Wq1c^kh (&LcIdKHL;0%^*C7JvZi6.J(Wmru!Fcɹ,׆ \K$zfҿ؋{ QZB0 !P I-u['#">pkP p p M)sA*92o.&6BBRG+0hH ^ X~/&" kjLG % > sQ`mo3c :N៖^ [eJ`AG~!Km=z1BF@;M)LѠ| ZFspHBeJ@OEi թEgy\aF ĞFKkup[]*#1M%'i+O6IVw)D!Uͨ`mhۀ^)(1¦R1CƟ |4\(]<@ \'y7Z(>L/G/`6q~v~Hp9=OKdd8_aD~\^\`";Kٻƍ$Wz)}tD?85G}eơ)-4^Lo@E (\臖L@՗ʬ/o~Mկ#H< Fh>ڿ"cSXڶjXKc2NH\q|)f%yoUq,$dfgaL(zs޼Mldy'NFY+.7<>`{Q05=bd o6 R#Ylr\fܬ=j(/#nH)|,]\ n8I\k9x=` ӽNV7eW~G)kSz@ܜ=̈́^dzd}ۇj86 dj55Tۑe܍ռb+s Ѽʗ-\{mÕde20oHC^zED2غIJ`|1pQgTn=a̺38Z&4䅫hmT!bHuaq!= t`1À:=L΄L}b0 R8 R9Ae$*$:EHP#aRX(!t loɷ.o"XqcQ_֩aAu^E.}VO]OF窐+r0c|ЩI&n ƚCX!tkNY)ܩ$y|M?pp9dw9]ȨUTBCYzW E_[- Υ]+&ylJq͐º%K^Q~o4ZdOJُ.d%m؝..b?s)[+PQn9 78՝ xMKTK4yULx äʵNڊGp5HXlN]JW(q]~WT-li%v'q׺-[:**{$ZH{WeV(2JϡZI g rm{lw4úl 72KBrF!"M-9ZmJ}h;]\ek%iIN9J 9nTv>TVka-s0s)ajY*}w"s;SθKbͰ7:{ k1ɾ*:՟H J'0X\ܹ6kTRPl7چ*)n`qY1VGZAvk dyk&D5U 
J+QZ̫]MKe_I{Ϯ0TE* <ѡ05^QLd9L >D ;_L3%F+6.nKuiAecm=r,7z!(#Xi,`4 2@SEu'xD  J""Tx"XW>&`B,l&ZtThNrm:./lb#nuѠw {S,֡?}`Tiy&09O`J LB,iTRưi4Ȅ$P 3B: pL@]`Ci7D*:Bf]n`F9ꌴعvQ,g;C 3.$Ι 8~ Vqӭh\3fot3ٰ|5翮Ow&tgOw&tgǟl$`{ChXO!1Y`dhbB,|Tk uL -9i:ޭiyk~ *zs(^vv ō>㧧{þ|-}|+n# x51co} P],WջuFR8%0GF`r%Kv5zoOsԪkf߯SX ƔД $`^lO AC`!zaۣQ++GOfPO h=QPji:3w0ӕ y*Su早?jpTw+o+@OU. .*%ëp!"Pa#̓Ė6 Y=N4Xz[_,֔2_aR7{]Ml맜-sSkMg=Lq0!6lf:&e^՝]0/KhQ3$IW=5݉!ݩ_!#nբ Gm?9c9㡴$#\ >u{8jB 3NZmiMh WQ ޅ8nºb:Ϩbzt ASH6~Gքp)iq1=nB9拁>u12[UK|ȺIsOm3*yE@uo랩w]gڀp)--KŬDKiMPL;JŶ7&g1}~w}4eNZ^lPٿ_}vE βLK8Ah6r"%h{´ #ANJ(ul[%Ni *᭡y{ۏWlW`2VfhBۣ}O?=Z)Uh=cDE;64DO-K{OGЎ$ipӀ0ՇU`YXO2:Tq6aDYð(1 LSn"WX+a"p?cqx@'(Dɱ;2_#DIS×s 8[i섑MxJR,ZYJ&3~YcPͤT ,\i}DKQN;T\!%}H Z@kC DYh^/SL3LKdDAO~K(GQ0<X&Sw KKt3ve aF"q"r,&WsX=W[fV2C!+&cHdŌYRZB.7ȅ7"@_S ;ŝ$(Pz}yNl ?QJq^BQ%PNWyPXv'zZP۫PXktNrk./|z9)OSDͯ)_| nm5o5܌f~in5%k [[ bYkF䥹KՀm$qKxiC'UEk!{!Y}\>F5eV;flVT fb(m˖pEx б&M=.c-Ao yL*O"N$FNM6~￱)MlI)3AXQIǜF UJqvl)>NLc&?|G5w Yf)`! #z>-fas4|/;V,n&7&Щf+++ɻ=?7y;41yΩ!1}UJw#N>?h ᒴ[h5el q1 "iWZ|kLHQlGGZD3-tou-d^C5lnG˦PcuyVJ߽[MуS)i4mSXj! -&R?Xd1LY XKn4.ySHĆ}r2:Ѹ]QIU~bCMJǷ5"X`} Yhñp0fdFhD&zKK\,O5ʶ]//TuiH:y+Q%({4UETN֮/.yo98gy`Ĵ T\ϋEn"Cل^Ry1`VЋ(5b JspO B ߤS44T DDPEX& By(N"ණ0$!&1i B"2U <70IRX0֚Qh!g\10cxiPa% 1ȹVKMNT#*F KTGc\i&p{Q05Md <kGM`gdz-֫W7ͤOーͤb/Q&qJ r(a$8!\NXʃLs^.nf~HS¼.?|[JҐ[&{`2&/`@^3HXL8k sQL;9@6R"z1bt=}<%wjZ1# aaTH)lxſ cs;0l=s `x)P(|?TG)Lİ%):dH#$M@B<* juIr9G;؛Z0QJtb 4~RH(D"AXHPT&:JsKZ^K9R4(m ೔铒2BTőyy⓾ndoRX~([Z0H!~ R̖rKrq遥>0"-gx CgӇWc0fJ(|ç8):/seWFbwnW4|wws V VĤx;}cnP.` 8AtTiޝ@&;^D>Y7v9ɠ %%)p(hHGچa6ڌ!V?'Qɲ$R};lcEXae=ܻ"B|ڟb}(iZ)h|tfI(%s<GGץҲ3=w>JugϓUhr{ dSIh\ըmոC>/nO`jI.+4z?~tQ&9EKc @R߂pV%*_~ "rRhVk$lB+(:xs&z&Px?vwl.ie6@p;jxw +"BȰrPV V=?UHڠ{f><#2(8Qx*&3 ⫤PU r}tJ\{uRtJmqᠭӿ;K0Diҡq 2d/aΨ˪)J^2kDfgmƍAv3Pb(mjt_;N )'*W#3lFssz[҈N/Gna\n0ăGf-.U(.,;KD]KNo4_i˸y1K_ľa[;/~|1#F ROx~~`L؀/6S?ef6& + qL-gJ~c8YtwեvM7 ;Vmg:p 3 Y)JdOK]cwksc>-F+Sy8(8B9^ ɵ^>ɵGDϧnAR..6! 
h]Yv+eoXzk3`=Hܮ484JT@TD sm6Jn->2b +H00ު#%oqU8tHI^6RY}FB#RX+[*+$ü>eo_"jܦ>LTO|G/_X8C}ME]}J߾#!fۖʣ5ڒ@o;U2s ~>w~/PS_7R$6ͣ_q4Fgp p{I!B;&|D`$[LaЮڼ-j3)y2i3uN.y*>ɿL3/IU&<D}믢"nfGn8sųq&ďϛ_qV,ϊÌ NK1jfyϰZ^Z,v>XıSS3ve`LEH` sI4`ZR[-=Ir5Z!RZ):ѻG["ս]׽\>*J~=Zr9^6CT"S]Z0ʪIQ ;[^U/m!׼M&E/mo`A.YTѢ2]gFh҄=TE' B*]znFM V|^]ʙ3DX9` WӢ/^>*'0*C H>Z<5 JRk"VN|?& ` ӊ-bBWvg5G3a5Mds11x#ŧ9mgSb"Vs$P1GtJ\0 H2aܤIjILN v,7 \) 2~ûر[h(:t0LB@.u $% E$!<5 lfQhDŒuDE-$J)Dk##FZqك` < +O7'yCr_߯! ^5xLuZY@+;$ԠKzB`&WraLM=~VTt_ ~[FS=,Q(RnVӋ@jGc8Gs.^}xw"lϰ hĎJs0q"MKSy,J؄iUDH6SOu, ~^Oζ\:!x^+|v+Kd,1Rф))i lp6"ka4 aׄiU栵+*QO'STRY8TA,pڐECYؕ/bّ<4td~?q:ػA9+ C6q!RGWp;--kY48شt&NPp}UF>܅bCպNm!#>:xref|ᆴ,?}]AG;^顐ixt9M%ZsQGmpp'i_Nt̥D"x=ڎ\u%ŸFzȓ9(}CGx|Јʡxc +:ny(òudh7V#AhxS QM>3c :-| D`gI21+y BdGa,ԊB%C[\<R<Ѓ8Kţ`,Ɂ +);>Ŗ^o# z|EZlu:dC)n=~KM9}{kQ`G\8`Tf L 4R 1>VSCf%TLRU$&xꇳԫc?7ӫ%AA6M>*0_b R4fOn 7xR!MoF0q"y.XFe$18 )jtYɩc*Ed_sr6rӛ|e(E#n>b@l6`o;Gߕ^thG|gn\VT!$ø-̚qV߽4īJ*MʪRyIء9MA@)- )ʹ\!`ts@\b/EQ/s5{JuNZ,{s̉W0'^2'+L2lne"3 kΘ1.c4U$sVwZ^v]QMr#Ô0,N zW[. 8 @/hN qROe&6N,Ko02Fn2z, SE=Muҟ٠X8¢ ekTRz*&O`jgQ-[;)OX`JPXW·V fߚ;z ,򢻕?\b kI(fhגb *=a4cGb~̏%saNOh нR N./R"|4k/^C[aMF匤0~|_-Q $EWR|Oay j_?uolzws(&vxOwW dqb b ,14nZ5DW^(?>? 
mI?\_+dFZCa[B " \Dc-O 2;&4WY'ߥ}Š{s|)֣݁0XR'5BT9`F@FX R2J(& ollvxLVC+e*xCNH ,*NQ9>w1S?;FJXj Cd mR(1@X0^ͼaMyԹXyޑz w^AE£S5=y xuݿX sci^j_h2~>X9[S0"COfd-T \ r z4a A+nӺ\t*v]-̩&l󄭶9V[cՃma6a+y\dFr2?W "<:CZ?{vooHea5ç/֞*@Hǿj!MA{HoUy&˗n)w4n٪Sdzi#z)cʀh S}-_vOZDZUTN\\خ`lT/h3؞|ЋA*xSƙΓek"U/ZCUaqH{mꌘe dKz2\=]/i8|wս`?>75'kEO1[Kc'[':gkBO%w}4=%6gm~ ,%:4(TʓsٷdpKg$cE{>9RtM"rˍG'q[)s"NL+cgGupdR$_ھSGǚ PPcyNZŅǥ{GQNm ?]p*&T9W`Z_zHZ1dӲ/Gg`^ޱA['TY )|2Gz3qYPC+"HL,cq(;) ʘt–'4NM-.A%TK⌫5 7REԥeN9u>/*iŽ_LYe h!"]XcG@cx ͐$N&V!F\_j鋊T6> 0KHpn(DaJW8 yȸQ;p) ;C\AD‡[;pfGmc rϷXuϚB-We<d5dM͵^xV`E>_;޼ޛ?j:>6I鳋.xb_P\/xZeǫ#03ܓV-L+N[`v2c6INvIaH'XI'%}q.Lsi>Q+]_tfd)EĎw)!)ĺ" <;𫇡¾0Q,Gvhx1<]qhiIYpA!CKQBy0ILT -fGAHn( Qg `^ADh H3XPsۂb_lxIR0}7N`)$o&Hcih{;0.N[ 0o a&8l-tNq3hB6 7G5p0$~٦ c0_ 1T  Z1+UDg>g2esaD8  7 Juxi7]5Ntfghy=QFCuQh!{3U:` 0.!sc0:R" ܹ_?So7QACYE7Q@{6ل<6?`2Kfi~CXrN:fUj ?֬jpX!kFJ!HZ*:Vk?a l8h¤Dy^lBBi'%\RoIMf^1j(EB1 TE-V`2?Rf X8MƏ韃չx0 u+\<*U\z;/ `A1V+ Y iLqd6XAAP Wu!F%C=I|Y'3R3eN@j(ȯB[g/O#Pk!L!G8[tB@(#PPa^[K5c@鐼apJ(a% xKE#m9Z9ړ3F0v9i uFa,L'I՜~JsjbQ.! RjE.VR^Џ*ff|l:SaO6ɗŪx:2YU?}-qڕ/Gw_Td.7yxW40@,8W>ƛt4iߤ *HE'G[*nhBf{^Y!ox[Zhp +jSNY=f@ WGmF-jsusgc|ߔ r.$Gi؋f~ `N+R2˜p7Ԕix^~Sj̭3[:l r,[(piO6FR_X?˨&dOuF]bF!Ot+*!vI)B+Q ,&&qqO)o&⸀R厭˼iu3tfw"o;^93H.9v?RU&lU9Ր/pCū$4xY((6b'wYfj:eMm"LijjqqŰN9sD W3O%b!PD!pJdXRvC6YTN2Lȑ!2P--@H% `@B 5hE;_#0##~1s;;ޱ cξ,}t2]ag};-_;Ev_-KjoG!̛@ kHr*l˵~>$O]Vܴv{YRJrkw,2ge5\y &%9!= (g'I/$ݍ\K)zKm/!ޮ~i^OU;-)/y,$if TS2`5TrE* r~0+K7qK S  d2R\%:U \D3E_3@% /'02y盈lQMyO|8`۠ ^;n#? \Bp 9IIHwvr݇pa׻$_jnuXnAzTgvꌦb@,ûM9vы 3\}e7?YQ.[gɰǍ:'p/1YBm{I]W Cr <- Wvsz^1[H]%uaBǟc4#`7/}mߩ٣c=pd96ULh. 
VQQ+a(酠 IPBRKHE6Q;qNjd*zUFƯjxB9C`ϖHĠa| }%a:=wT@ Naפ$ OaѝԢ@1Eс0NJ`m@;-E vk^*xxF֊ hByloƠ0/8jmFE*_a`F6> WjE[Af߽ y_\_?I}x/~s#y`L5)ZYj^2SD[G6UA %dQj/j}߀.3xͼyL A@]JG PQQ,Nbu;j`X,B)9B&xui4e/* f~_ E@?֭ÖMU{><`+*(c\n2ha8v%szRT6|-VJpJgFx#t9FuLC@ifRt'"RaAysC*g *h(O--4A`ᢔ6첤/o.N$%.gR+˹!f1UqN)iRyOANGuZz.DJ`m0k*0G\E ~j)Vj@9#\P]e;Ww1Ţ`ހ`ɂq{:,¸ uI0pVfƠKۤ$B`LF=>KN4 HL 3-FDYT2 &tp^*i4+#% 6TEňG eTcW#PHM4u kɉw앹;b@S^Zo5ygOmqenv82&}U,~O'уUxQ~9uM^ Qr28ĥu# xi?/,"mj$TK6G/vCĝPC+Iڍ B(Je#^XF_ŭd+<|w5~w;Y~s<8sWշfͫ_r^m^}OɒB6Κļ;N(ex1V; fɧ?ygj?-Y]c,_:Nk.k4L 8osA#-W>^M-h1G) hС(ICNYS1ƣ6VHNHcH+9}0xlRֈOȞK#X?Ur /Ivw7 +Ǹr-0;Y0RryRrç*%Wo}DvOWIn)=6\{6IzxjD2\zOdtԾ G ni2GO11NFW! 8w'KԳ-Ycd_ XbR,*Jf@6%Hzm!7ru|/Ӏ[cI'ŭnN'L` 5{A[dq/] LzKLΡT$JjMV@ ovTpyHРGv)5jHPZC2YD1Bڜ&#^\_ BwRn9]tFs]CRMԌN)Dߑ&1iG,Rd1rL;-(A;$= k0P(/e fIݹTw) LL3 8g|OFŸx#'Hixxm1`s*gL?>"^yd+(&%3XJj V`-TnsA˧ρcJ(:.U}Z<)2f=ó{϶ROP\w`Mw7:{kO5r>W ,}w@eRہջm:/QC>ttUNztWXZ!džr߱kO;doǶD6ű_RTieֆ/ |jw}vxjp" ;HAXcJA$BI 0j9(rvqo֝@Y ( HkN8ZšCJi()`Nw8xK~ ly*,Q%CD)Yr[riD" 6 qZ5V ,Jk KVHg7 aN[aw4B bwXpf+ + edFb\v0Ӓ{Z6)I?y Ƃ _,tOQkjt<Ndzelk[i-f\/u0(Zu# 띐0dX} C=jqkg㼋'%=nD̟'ӫNۙ~|U WW~Gh@~[f-s7!TI>mqŒ_l&m|stJ E^,ρ?*ZG,%6a Q(Dy qe׏6xgP/ƺ:zB*ɇQdWQ[߽KnWOiܞߧy-/tF@sKr%%Ihxt!׈€zJٓn&B6yIʛ8౤- yMXSL 9t2qdۋYUB)pxǤ!S*鑨="Ym.U˲S?eҨ;ah U|LǪ`m tp_8GJam^U=FbO[[1?ɭ'ëG&ăFU^yl O/46~+H? B}+F8zՅO|~z=8@b9]$.cYʹrDiqh4k$ ?S߱Gjyxp.bhF Tϝ#9 {ـTشb2bz9| }l9%:a:Ot~5C8&e?%#vlMߛ (j ^ez+84CkO8ɽB۝{9rtǓi S yVeY)`!TPļ!qD 5jRyP O=NbkaEKlScΤu2 `BRDRzBoEf*sO^'w.KN|b qeٹHqVJT&u<*N>e.&/b#kj@b[+cl ߾};}#&\uYV7}Xe."dR̓|7쌰.6ses=!ZXfҟ_j"f߽a_\_U%Ɲ!x͕Ai hoZ.o.k8gG`=D!GeJ3Y!k7"N b@Y$rXepb)+69l${4C':)uAuSVk82{f8Ec&t./AiN! 
SZ1%sx"keY,ͥn,nD9;§6URcVquY0pw3Ghw3iLN=Lq4 edd*trTyrV%ev(QR>/!.\Ui]\!w>~| ׭n?\ٺw5|_bh;Uz}K}lŚTzC{1D{ꗊ?(W2&D;Gמj.0{w@5٤#y ^;y/xp}{P$yBB"%Sthڭ^qD햋A侣v/0uԷvoݺ?fT~^v|[.)&mS)nZڭ hLi4y@IZPQb":h<#s|8햟Tݺ?fɔ-3,匦ӕ[sqd/?JPLnE~]M~~wZ' ݭ &Och֟b~"?v9]AF F +aL19T.Z֔:$ErremP21 'R][s8+*ٕU~M2SS{$5ɔ $XDzekiPEI $%'ڝJb_7F_ZgJD Tt{+p`۰bw :TK bBE><_7l+*s2L:x v ?}^(LkXUvv\OqnYG@Brl_%g~iת9X-g`m~6ב-cloBKs8g|7 ҽwtyanô;yDp0g=]F4CL3+0go7ܲ\t)>e%MK塥X-dBষij/yBRކN R+%?}s_>LE$Uarxq{1mf-wfvӱI݅?=LF`.~6?ݢ ohP3OOl~ 4bO 靏2fָԩŞD#H1EHAAK;ͶL[lmKU.mnU<=*&q.:ӝĚ<= ܚC,XkХ^u}$|_ގFVTn x IoKg):ߙ C?*G̶(U:P[guJ8-z;[%J~PМ2r'7[$YbRH*uj" dHsmHf]m"WXPƝ%Ԡp )5$ETgFRm0(+='Bh¿rd"zCٔ%x3Ftjy:b24NtY"9C [.ŀXd[ȉ@P E.I&U搔6(AjIv1iO41$ # 9J|ȁ:$e)2f6\K8 Z`.QB)v1X@IdxzȚ_zMVg[ 1S夜o:}3qy(`=yc.{3Icє"f-ӄ;b1~v" =S^'RjrC2H'40$JR)S2mmXG:uI!'$ozuKݳbGaj~GpbM#¦$1T " %6!J29f I?@E)mt][4-?"XFm2j)Sp&BX4, Z8*[83 'V!"@Ljr>f~1%ʯηώ/{a Wd9ߐ7?|\a̼z$ODN/݌z?|:d6_Lx0U_~voCV)|uO aq x|H 8r_dcyªǓ%Tsy6xGx+) qS&p Шɹw5uKz4žԠӵmW?6Z79'u?5}ZhcNhOkT"\bփEŰ]Qk1]f\'[BcAK6 <| >e2spӝi4TGqp 7˧;_螣#`+XŎXhx5Nϻmpr;=D\ ] 6}\ RAK2vT\uU+ҷ9/e o ESCßKTb8zw\M_CP0֪GSC''1ǁtV+A)̀Er>Zv#wKT>kw^BkKJq¹d*1#t8jܶxfWo kvd,;Q*[ѴKڒ-EU->Njsj͜);*Mr6G%zhM٢M8o(1 Q|\kMU hP 8+AOi5xΆUmPx)o캗S[`ttH\vQB( ] G@q׃`.{J^R { o8Y4 ' ¢ܕCsgh">,zxW8υ?Q_[0AKj8\i'5,h*G5/1Fȩi %S"7\7'н^a 'tbԌAܲh|*MkªEV=U;x \v23CAg8ө˸Ilτ͐H(q8Q|,OSR-DH K),|+B_f]H1V}'/Ÿ`Oׂm,Y xLQg31bьTi̬ٕ]G|HK{?"&_Rr'j18aȥDi)Æb*TP45, hŔ&)ϒ$5uژ5Hk ?/vΤ l?5NT`)2r! 
V} '0@ʃ Ǿ(wjAc';ٜ,H#OnZrd<\Ȅ2^K}s?K6y3W_|'ZZ7Oal+}CrS}ܲx濈}*sn<X<ϊe*7v}G4O47kdO [X,ܐ|"#StMtvSr-щ}Gvi)_mMhvkBBp)ɟH6XNAb":﨣:[<%O-Pք|"!S4P h9/=lz;C{6fIY\/>x\ތ ݇/9""KUuH\fH+ *-fIR?#ibSDqHipR"d5M}R Hu)`]yڪ#8J#[|=.ߩl#;aBrA*]=j"1ZB F/qT%r@90VR=17AjDDok@ Bun= $CPn$H] (U{YQ?$r!ts3] \gylwzHk .pU^|i3标Ch*Gn=0=r+̪g;ge$cFH)׈G;af!#Z /.c.8iEħBtb [X, V18W[EE3e:=Y/RpaPS6#Lg eєf,#lRghi 8ZUcof-zگ[7[^LQC^]K6;Dcp*X"eIK%=J+1_BH{>%*f1X3DRvt;c1PԜ+B9[s4iRV0t BRT'i1ƬkknVs*Nٲ+NN^Ś`,)JKRNm Q!FĎFhtR &3 TP QLanQ6hu8B0s4cZ7ȘXn6VJٮן!g~.;SȚlpHJDgHB*rg#qvQFݥ뀌2]]Fn+&y7J6lUbOaꮲ2dYNLs1CI$:cPE% I,E{`̐[X̊_ S$Dm.f5MtKg?ŠISpGI,6eS9%֟S`!|<h@<9 .0]up(ܝ`!{%/+z_>?h?R)~ɚj^QcōZ1(jf?24V4iђ_bB1Xմ/>jktq9%J"^s[4f UbKoq #,T{GN8+`#MM &:#Vy|}%Ti;'.E[`Z`F1Xt&[s֙}+ZxA|0%8\R)B)q u2VrB Zc,7E%5syz yuOGCc?&l+g9Ι6pδL[L [R(31՘JO#6Ʀ*CiQe)ϧ9o_ozRPp?ߋmMfGPZ-m6o7^Q<%Ys|V-֊ uR1p1"]1|Q,& S為Ub(|G TR$GV KeAY.N`m(Jcߒ(O+C2#Ό83b̈oFAn8MG-1E! I ə5XJ+ kIFT\i3P|6.ܒIhfRxo, 7RުzplGջO wۦJH+SF( J) yJBjsk BEweTs3VFA5GOA ٟId. &D晥0H3L`~ɔbZApU P 2| (_V k|Top Js4pe2Zp\㔃al$^&[L`H`U!@Pi& ,汅5Zel&δ DHK`F4@ը<p՛걥`ւ5/8f6Z't義LJ˚06ny|Z>N8~wN}X; 6"h^?z{nQ- exW=\DDhoW`OT}|`qsZLz3Q:8sDw7G y۽Dx3)/]´*B^ FEʳt ^Z=p>2 Rgt˾K`*.deh㮗2XQKJlK/ɏటt,8m!VE5q@/G" ܖP&?9MF,VbNt2u$@Ctv{su-\ÇQrSbAx_F}h}{8pjǜII#E) s}4J])9FNigָ\۰n>,묏 368v>9]]T .{t` xTNn4SǨ)QT[f+tSb$3'1_ّ,#[Rٹ^"IK,Dm_;| SXkG=D[V8}5ՁFS&4oUr(m= EuqBYm9,qrq?ۻov}s绛+b\p tQf-5`&Wܪ\H#fάF`D?5?:v2"Wc4s3M+]tsUC>_}}~}Kͭci}5÷𣷓tz?SCQ/z&EwhڛعbwX6Ff炿We:~a1KZO *N)0+DI댫4wBoBH[|@G"C}¨<+gZssJPy y@%䮻YbрMؗuzʸ{Fp$RHX9ښ3e}r C=ӳITvl !EHaMM (UKO}T&\LV]p-egHEYC"%ׯV޵VFKѣ(kT/ Ai㴉˜zH4EܢK'g$хm-T2yxRI_! 
Uv)E,EB zJ9fԞ6Z R[2ju<+KQ_M򄋒+ՈX\fD(k=K@b!R8!%Z*gK#:p^X giH=9Ikz_҅j#WƒR]h&puX!k8_"}O>Љ%%,QLwJJ4;Uw2ysX$0:~ܿNsPmN0͙ `3Ʊ҆Ř3ri$\(.q>b [yNr|Ong|Y"тqbIi$LBL-t,TJ V'Kb'v&'q6J&UaNQP)KPhPi""9lLt3uc|?C3_G_g46$E2"/0dl)bGtCTFKH"I8%+Vɏe`2&]yׂwg;61Rn?ჟ #0e߅77~<4^Ǯjcߡn`at%4sA8Bܯįq}noP ̫0)S$G\?e(8?MآlmmW7y':LE^sDkj)eHAmgH _p,-rEYx؆6~rD׷#mc]$p*BaJѡrG ;jQ(*"RTp,!LkY 4,L1SsablCQS1[S!g`;DJ']2"+ X LSRL-f)6zli!SX d%RvLZx4.e&'P=}3oi;=“1lQLUZ^6O4֦ؒLPY7[G0{Z}8M_"iRWbO7NnDjKީ9?=<|օg$!Hs%vJRYًnl˳6O>4M!HnQH44[!z(v[˽pnxZIޏ{|h64c:>GpBZG/GߖV\ǥ\hL :kHiz#P N ˲qA1*0fLVK[d,`R\CȊ8Cջ^KtQ?Z+fxL2G .<}߭t"7cDZ gyxx`^ci3X&'HԛW&9;脷dA#L/y$'i]%[|wP@HɎ+QxiDڙ,QAmYOn7HNEݧlV@l2H팤"*YêVQw3 MɼU/{Jt66ѩ*ThDVSZhUID Of|8?ǠkLL)GMퟃkd\eZJqkͽ3?Qbq`e#X3 ?4a&V& f2:}ڋ+y\ĕUҪ9;߃y5(փJPMU::P_%9 .4iJ1"ŷWKG)O&ODŽXx*׉V/xvPAE%%MUԺ{&.KJ]-/ !1K\m[HK^ s=zGbe/  suzJ&f˦@zZϋsj+G wm_!e7C<[x@ ˗Lud[=sp,qS꺺ZXa ^0 KJ)&%YaϜS0)YD$r"t6q`Djl8"YNvךݦ465ɪv;ꄶK:1P3^5FcTgq}ߺ3+Zc_ni 5o~h?|pZм#Yj1%j_Ub!9nhܛnjAA[xBD *"2NTq 1Sg$xod~X A&>CO_USnX1z xuR36u/E1ճ8 CLxG$t[Em#zɍ6=W_PX)MXQrݐbcyDOsqwGctƝa\ @n""9ҡFD1PNc-yI=Uٸ!љFlDpK e3}BQ +ȹ BR1ۊeQ덡Z9dRčR,E>J"2)= DM7k u^a,e Y*}Jf DxVx:B,!mu,ƚ#BO/IqO0?R('k?.× 4MG^&0\6l<|' | *h_.}"`|9c`CaPbFkg a gb ?|@^.S`0(U}(=F 1P<h^zitm+`QVKNtȹ Ll,\abaUP< ~S%b55LjY7%9*4Ӗ 3v8ߕ&RoT;TBQ [2a`s)@z+­ՙp51Xi8 | ˔$R;i»pl>bl ""A4yk:?x7%3͗<϶a`-e?`XJr!x!U4NO‚Gq1D "\NfTG&He'0Ƞku0Ỻ@Q^sxnsRgO7>1i=M **濚/JR+DCKϢ$3r_ck,*YĤ3W V ZMJ_ ﷘QI ^KKքÝ: e^cL3(M!duV!\ Gy f9w6!Tc{_V cBiq".iNdiRGCԻЎ9!=%RKZVL^( rZ1o/*sh q@E(mШz K-Wzg8WɍR'"(qO\dr J\(X FA5Hq}z9xNk.OMѰJg=%4/ZFد54SI1n$Sneؤ,O@&3EVTj :kp{$=TσA!eOi ȹcpl:yA@#RX `}ۼIPo[ ?8-h6^"Q'%KSx) 9ØpBO~{'-MV5ZUQN<,_)봕ib`\v"(j(VEr:i=(-x qcH<N-A&4{x|(lі^+S.1ۻ y%]ZE5p  aoh8F(26&qRc2Di/dkr>aV:.+8Bj#_irf47{Q, =j/fF<Cc@$''SGuC s$Jqb,Vvݔ]c8oJgŌ {"_fjJSHD`)r9uz(|iȋڐ-R4vݲPeݚv~3uylDu_j}].[&JISՠ.wah!`VCAѪ-co>]w`b- m׉u9}dh17}|Ž[}UCa>2N_/}Ɏw5Jt?m&^Is1nu:rjwn-%cO;^ύmi/Gzyy爦RsW`\_PUJLtW [?, R5c{!=>ˡ,YC*"פ!􇲲M9iܡlL4هX&0T0Vk.̙3a*NiP+5Ā|kV?Z-]"ىH ЍCV6|j'5?Ux+Ԕ8S0ƺlP9yT&}(*;8-j?74j~_uvځ=]x:&8.t^|p7aR%kvpޟlu%5#>I".[ eOwT"C kKkY8$Y͂1dw` 
sF&p8 32C>a;Rd{.,4u_xeP`x8v1>TkO50LpR=lm생R(J8gT讴& h9늋xnȴB P9Fɦq Zxsqu{=} !b \Yȸ ȋ!pG |ž-r-:{M 仢n 71 ; ч;"y%8/ڼΣB>XqZ}C\mÃ=ugC+췻AC 7vGL SWHQM05%(gRҦNl0V_#WW)F 0u :uGor5]T jvQ9)kAe80}p' T{ޠEv`❩;8PbcבA# UI9n~cpA"OuKUAG>-sA©*xGFϡ>~9PA*:m6G *Zr^hޥ]0?RrsVj{S4r_|eU&4>\6l4UРWA^ zUԠEЊ9Dc%ҌA3IGp-Gb! q9q@VNhJQk90'mAI.+?+ IɄ^[;Eo/r.~=_?~|>~$Q}b\UV@N0Ene_s\ 6 xR1-褭 _D ~^KgH!i8q816sdJdR (b5 HK,:%?w*PfqJv_~{YE)AƠW{U:i:̹LYKSũQ,#$ ͍fAPn% `3E}QjjRI-Lc@ M  I-pKRB{/ .x>GSS$N⩕;ƥej{6_!ޞVʾ-szIϗ\:%C/i̒DKEjIrq,Rgfgg3,IldR X \XPܥ;ylͧ0syӵ8Xċ,U N7E_7O\>AN~M2!NmzG}Do >ۃa:/n1QBJoA@j,xd 4\Q066a-La,7 &;ŗnG@ )L2O VJPy;#4^4M@K'X "$,#[aF)ǒSasD!)'щg"D" @4JhJPbE'! ?sN`E \(a1Π01h t1[ ?& {MmVBne9 Ou[݈֗f:O.޼6KgvIЊ:;@@\ c=ӬE_#޸Ǹ)M ,?0N,?ˏqD) &)HlGjE`b FY~9xS~([ޒ"W[;nyU*.ҿ֞P"VS0Ꮼ,?O0C~D;sFY/Id+e<IG_^ ;M켨ڏ$$^PTQr^Ay1'D)`\fx?\fID@6xYDZƐ>J2wAXY]DZGĔ6 Nbfwr81'Xk(3o3hm81.N},nXKXeޮSePq@vs8 \"/ Oq]AASz A.-LSI 6=^ y?'[텑`D*j63gI\/aٮ{8jQG(q@kaP<\|QHTX笌J^S͑r SU^oҧ ^']h<`N'ϪS[~RYڽ+?o}`bfc͠!Lu!~E /Yo@d% Y/֬H۝ur|{2PՑ6UȆ-([JF,_&][3 q*?>mtL"S{+nyszjf>ue4\{-EuYRaI{\/dMٜ2d 3bɲaڳfOW5~r7 7x<)t?ygLhܮ VnR;eʊ'g'd''LzU{XkVfZ5IW.^2_CRngTnGt^%[򕋨L uj2N}׸" ? 
=ƕ@G!\21tƇn!8ݏh44T :B*$ 7O#M )5Q3IbPU=(̘^gTcR8nq 7K@󉺅:C=7W_>Av<㴘ZTL+sTFFj]E/uu\nsp c(W+ʄǼvi.ZTO/DZZR (uw ״=ȗp#['wn4(\pȆ.x &O7ELĚeFx|ḯଃ s L= s^7KFM {xMpCMsك+^ƀ?k}웥4 :FM*W;1iɗ kQ%ԃ[;XܞꬻU׳(ZRUfzz 9HY1O6?}z$b@(Qh~_"XV36jʜeuݬ?X9nj$X-!2(c0DYK MG8*16Y#2kc𮷨YveQ䚅et{ ff^z*j;91Zˀ{; g?P%7I63U$ jݖ[!xY8t ]?٧a2oܕ8.qb}RQtU|-U޼@:CW| T "vwd2FYAQB 6!HeU()Rƀ*JLUhqVmb r1vy [ȩȩq[3)8]Q(0F0[Pr51Xnʢ BDX=IPCufkS`<q!3+bQܹr7vkw@ˤA Sۗ4jB5zݐmo8w`!"n޿f8͗Bn1 UO]Ӳ/_1, F+);i[LaN(+);ǢAYDo .)8z@B"HP>3U_qɹ!T!TKbdB85V;OoIJmwĂ* Wƣ WD.GR 0on0';ͅ9Lf" a[ Pqo8:I؆ʴfBLA&hϝ88`Ojd 3JjJROӨ+  eԯʼ$%NsJs6T5q_]cU@shقz~Mz ߇6z̀TT+Q%gS }㍡PhƆߣtl*T3ZܔdCGvʎ=ە>H+CF(#%a09cBE~͓$alV{Lp'vz{6-9w:\i&v^UDj͓Z@ '( ,֤Pdqm7l=]; G>=Wv3"bVebJj (1!^ %!RgET\3rf2ݥs"lj1T+K"LESA1I$kmI`Y jPԊ.| Meɦ LLÒ(IHA6,&LbCN1X KX,T$i>wψIe0<摦\#9d牰P8!$ D׆;=+B°Ӓ'ǂ_ϭ ^Hڐ3l]=dJϓ񖡿}`;ޢt6oۄ FJ s\霉ܲ7[w*hOtxjJV> VMͩ*a*+qM_(KswL,{~璯#D/{i%I'B+,FP@{ph\YT6q`5Cu8U.}溰·C6^;]L!},\Ueۿ 1hq#2{{y@Y勯K\ l jHJ!ՋJjӆRe1͡]K9Zt[$d( DZ5g-b>jۃwg,s;z(3Xm81./RmH,@fϼ}&*\"[2ZoOF8|nj{?_F^ lJ`SO؝4 jmjAWw| r vCdWN]ѺȮ}ŔgW vx:j*$FMXNy+u+&%9\RRON>]oeO~IDTxwp uJ6ugEBRK+7daͭs ?|o۸}ϰ1C{#ˍ&APypXyhfj79{ ?VA\8b@C7puj# Ʒomߦ- *Ĉ-%J|KIAk L'J p]`.[B.QNM-Y ='@ n&Q +ItUgd ]㉝?yg]$=)؊e9e$rrs9VZ[zo2+^85IW.>2En-FBRuAbPFt|QE/ l|hXi_J.A5HnV_rJKcؾ_ aL|y7hXq?f43p?15H0T/ʕ 6->TT9q@ik)&9 niXRS;=^PQ -6ncbݹ2| :"(jƘ$ZvKʎ-S(mco䠟c&ox"E&SO#sY%E*uG1|uQp7O>t,b'?LLRǔ_]~4_lXNړWVHS!/< LkP3ʬ 9r Eey4[ҋfqwjl x!f9q a*~=n&h [ZlW& CZ(ZWcRN[}A'ݺ~y< D+#64_mj1RM2(=X)$(D)`*ϯ jvrJѡ&^)-Cu?i䊃!.1mB#=˦@z|{i*\ |toݲZ.c-oP|O4$DZB<;B ooi|ZXIzx_eJX,y0Ty2Noُ渪G)gmb+-,E/{_rA+tL]&itouI*ŨKr{Ɖ!CBA5$o0KP8,Vӽ߄)I ud8#U!Pi ^=^ŗb,;'֞l'1ᶴݍhլ/2$`UkV3; 'S;Aֵ7wހ&a2{]BٽC/htAbjVv鸶v[:w\Ru\EAAe:wNCAglġNbMy%Xɢx2_Bъo4ns 4{ߞN"0(l˓H#;F!+V74S2}9ph/𖰘Ȇx ({ZDA` DRݹ{ [{kycOhZJ;IYZ U\Q:s)@+B7_uzxE@!FtZ$W|q⊍&Novk5~ܑwq8Uqo~U{֗/t5[ -C R˔&)NujR*(q,'\0L˜7\!`qvW/ݕ(.V"{&xvl J.wc^DΙ뙥q"sƩ@8v/H'lXgVa(ೇjFƭ=Ӝʫӥ~ŒC,~aBa{q"Cb=pј|0 /Nc,k^/z%hgji.IJP8d *IȎcP% D!A;N`7]_$(&:&8]:1EP8I'@o7R &f`KaOt ~`']6 Ud80p'xb|5xܵgdz0uIce 
>,o{.qA6撻]'ciwTtu8Nn&nO{i_WitfGZ2'rO=o?G4~7߹os홫;d*գ!/\E+T_W7,jѺUGuQƺx`9e1V=Ӻա!/\E+<Ѻ$'BDԚqd6Кh!4pkZ"H RK`W.sJvG.OS2֭=oq*[Vnuh Wu cx ǯv#bVP ň﮿f8Ƴ6WC8ճX7EŪqn1/x}MI7.U ^..%,JϿޖBbY)(_@% BYP2w&Iy!9Z?8Wɝ ͵nzZ?pEYFWmix:vXg[k7ss ޼$*Fô>l":4.TdlA@л18Ǧ VR(+6PP~T(.x)yZ5ՋJ2%@ia2 m\M.#M^l&afWzo܊3X kI90g<9R I p~40ن+[2yVIOv9!`Ƒ^ ,%/{~~+ݞ?Ggw". 5^z2AK2DTCgXԑ`Ws\6;;E=9?"-;y=H3Ƙ6轭/.D0{~~ > @<^Pv@_@13¿H? }:,}hc ~'>XktnRG@ޞ,mS⽙~xܝY=qg80@֬={.n~uzZ]sҗj`硎)g!ة} xrY5MAs̺t8$BF~ Z]KA1}>KJ1u[bN(k)m[?K%Ű+˹ ʨ}תߡ-fL ja; ϨBHm?OۛU9b!23T Ԯc)щ&F$k$Iel$Hq#Vj^fshW3> 2z| 7+w/{j#9:oWYDt^De]y7T=U`aseXbq{͏.1R|=qx9hksX]=8Y==, =?kv M<煓M" qdB??!xrgKOGkd}ܒh75("CȚ@6,ҾG[^~C-?hucë`}ΰ]@Cz0_ā{2XJ]6Sw"t[/ AyQO_fS379%۲ g4~u1xՋ?>Y$~kӍ[C.QQp hUr/;n̝RrGD1"rˏO͓-C)Hi8H[54[T,v5~v<<{;hEF88JFʅϞy%n쵌$`mi;5l?X̫ʮHPTH ZVwa Ğ@Xـ'!x7<,^V"#?qQR~ZDhs{4}cU 3gf  2z=K#ÛGئ: $% &q l!M1p@`bbhq*U\I$N KX1ިu}nV *:ƔР-Êbf&tS-ę2PkFJ,:URJmkT< +]_POF؎|V37/}2Ӛ4䅫:8{ͻRY6]oHWP#@K^ >.eH߯DJM5IQ -z߁j3(.䨠ҚH⇷*6>!;cRIW'59]Stٟ㏅gfGՓ?}&';$- 'WCgS$S00Eh 78*/I##'Mc[7d `:>o;XGo/JQL#ŎތD5Wjͮ8r#mn?vӾD3YHӪ"r`]Qk墑Ic9\!EnDRڛ Mjy\o*e{)kh4 4/]biO1B{T[!0)m&dJ_Lm#J5 wpS-W}~k+ Af>xL@+tX|NFX] Ym/ #!X"+Mr4! 6bTm9p9Ll?[%%c*0Qz&.ou~l4y5̜ѤK.st ;f/J2ϵN(%]x]%ݳ'cPsy JJUTn2oj`l0u)X QcPjb΄‘ Kɺ<b;ɬ˓Y9j,]C)c+)RRm, vc".nh,5ۃr6VFԄm -j,\$yA7P6nɩI6Qs̝  ȷ5eu0O_cLaS8$6JSʒhDRg_vIZj{69QF딣n6Q!7Uuɵ[IYnGAA7#x{3+<n+ Y*XkR1Urelk(+"tյn4K0煍jv^9Z CWm!(g=P ŻբʎLQRiWN{bKN"F-1#;"bg): InCYͤ`j8r{%eƀ|)3k- UVNcbB8T(-t] t݆KbI@ 1 T}DZ*ڃ\&TIB1",YZ iaRZ3QD+EAu3BMRcw-apct䰉ԂOCXnR:5qe-u aJT3aN_e$U(8mzS,-Ltz{(3k7'˃Ye]G_$U` ]6Vּ]:]vQ0u4A\ UB"MEMcƪBZӣ-lMx04Mi7lء{nw3Ϩ]fv5dYskքs90_{B -Z??$Шk®eNRc#O"IXQwcEݍu7[,B8\s &981aROHƥ1y,Qؤ1E|,H j6)ZlB_jj@ldEV탖f/7ɂ_~4WsNZQ2b\(J,'L eД,u&VYQ.YK^:ê$zg-L৫Slݧ;0m7a"iA7xqkynr{C|๙eD_ϋv/?R8q3"fM&i$=޵گߦ?ILϨ&]rv< cLQcA@jDs+*uLi#JzNqK*t% Y1S0a$#ۗ9Pm31!ՙ=l6k!xq >E>Jէo82? pX䦃2E.I.YlSx$؉wYș(\Ê <5lkTb"`$HQӔ)41q YB0kt\>d1FE!ZDY2u(/>Gl^'݂ .:gI>I{#g Xr$M3)ySiX"EJbDMشB9iv%JW)k'D~4 xG }{|x_`OWхh z+H۹f׍h! Nmz|0MjyzFWbfgM ,-J v23$EϥdՖwzU`#'lmW/nf"2H:>}TD(0:5uewZi434@ȩ`ci'u__. 
AR`ҒV;[g/D?Glk 3>LAg.?Q9xOP))&aONi3*R$(j]f;0aAia^63Yէ@pc9!$QֿU_d䅒LWΉwSb>JxT1I}z<8 7DlNqu6r(>.ZrAMY|dKBOoI8HN6$;@L%NKCf/`q sE۵&B5Ч}`yǐ!3ȧϝp3U\c@-~~4y}ׯg?URKQk~wj_}jid{ťP,2{eٽW۫Y>;ѿWS cb{n}SvL)9AbP2oQKʇ{b,&@Ar)ROG'&œ k/{h~D[ojiD:{+bḜa%&_ovZt8'h 0N4^ gw"ruq#,PThf8A=Cgz0'g [0߅F6S&&Ĥ("qLh"yE %,qi@8:XpJLȬs5H'Hb*^\IXʶx>ag˒ Fk^lxBӗbĩw/iC#S|%]=nK;uX )(8)Zy&2#bLSLР<FE&5*(*<pr&Q1xu/ 8 (c`#:ɰ*!wˍdgVG:ؖ҆qݵUiI; ^_vx[9h@9-@׼yǠ!%Le<}H+>,/xwREv0g]}b 0nܥotU2֚%~7U-QfmQ* Ż z. sb2lB<-oz_֢Bom+G y$}iN:?=;*Jsnt{ GVPs j7Nw37̽3˧{i ؀k!sG:?k&)KU~|dpAEn&S>ng~ډOMo g\S"֑A_;FuE@G6zf9x C.IzmYruw|gVXL$s#ti&F:(Iz9(~\ $!TNEX@ n2H#.)`4B?v-DR)PFAYL@aǗ sͿ4UdMbCYbYI `-h!np DXb4kL9ޟӬ-\SV\jB0Ru>5ʅ {<DQ^ğ?4w!$u7B >)aՄX*9&%Nr;NrPӕm\-JweE)eWpG 39[!mdfFG;[H"Afe(N"` l?ѭ{[V%*~u+;x6S{X&ҔuEbis`[<>JBn?OEd%dpI(ٯY4hqM; [(>&t[k%S!8zv9Z6(o;5HywKag] *'p42Ͼ}Ghh y4!8惏ݕn¹8t uBD0Wkݵt /)oECxJ)un5 B`ko'X܊ODWfNb]|6|,wѫ% "/&TLJ"e0)AzNϰEta @ X $PE<=߼})dYēUJ΂hqD/*T?> 5s#no+Դ:z0TQT~:vI@] v:Ū.{`?8KX&I)nK Π u%M.[H_`Օr_A̻K~? }aי/bH$DE\#4˟rAOcD<|q;ÍFr ᾨAw 1pErbYhF3h/ ϯ2:B7%E|ћMh6-oױzpd;k6,wm#__]ϯzk[gջ/</q.WERN~'iYԖ ?W~5<`E80T+a"1FbL6"HeweqI42;;STއ>x%Y/6nh'mN&cB緁ogsHs<~<nFO/bdfgϳYNg6wp.<6xIfGdfk8%3ؚ/BfgiΖ?ĈCZ݈ /k%a6>2H،0cCDAFSz$Ѝ{8WO#-g(Ȫ~4R|fZ=ڧFPz$ƙC˚Z QbEeFpd:+Fqpݡ=4 R⽍x4;wx D{- (rPG>8PZѵDdfKt3zٹi$:Z?6-:GǶ7i)҉$36'= ܡad26"T#1Ӏ!'R4D_Z Z%flџwōfuESܴDfu<{{;OKx_:ŠuM|_03ۂo [Q'f&bdʎv+]u=/({|뒘h<(&Ţ(=rrܷ/T{ =9Q.IQƵā(\ Xad_IQ0 f$E`yHQpX'NQWQkc׫>ԧWE4KҗP֘L1CXSO G5A1_"R[#!gBQmo<hNI]s<}|mVTsqp*| 7 }f3ihw %&>N ,c֐r_=kk_ C0/3 5TM*tSaB`' ¼CKDNlh2RJas#<*yuĚ|  !H x,5fG !bc@ -WkiEiW ~8DIx_S-D3j,rTĩ>@5y@wOD;$F|sza.YKFw0I],/ w_LD6^6JV%-__w?!\g`F&|dO :=#+{Ѩ38H#iDaP\((ͧD6\.^Q +qZ}mJrGJBҍӰFEYʦj?Go&~ޝU3%ަBg[27#j͜9|͙ɰ8 <`2<&vTW*I i35sQ'*N2E4QFݏdF x<;$&cA;|%IL%9(OdO]zA2H'9%ne-?o%KpDDŽ9aWt]֫5x^MЫRNRbv5 Yרv5+6[ >^i> ԏ?>~>FB0#2`^׍aDCs5+EP5DP>J!9y M c؋8pZFZפ0?7LjRոX_1 !IaQӲ[5x9^/]a}"'|I]O&u=m nЂQk#鲂Fg9 ib;|9e J;χZoѿ._Aݢ.Rzf.Ƅm߹WߗKc-%fHlH5ؘT[Iv ȂF\{eu̲TkPtT6P"c=K$5VX‘Vj 5PPl`wFt4;;G )8&ie< (,whX e.uqւ`] ~s9/m9O8t Zӎ"S1F#${dp`ӵrt@Ix a`e9 
AIqR3aX.AwG[TiG)M<6FIa(W0"*&}D5GyTst)n5ú ~`-G+$LSQj(6qAIF(uw4-_+>m  ߭nxBj"%L}=GR6KsZ9 6Xrb#W 5sn6i\^Dj/"ۋ?!1ؑУMCCoqPp@Ĕ47_Փ2%KzN|*qM}u1{9͙oixyǙcv94y&?q`u3KGM";9#"w CNqF WѢ(6n0urcwz姰|( ŵ$du5ł(ZXGu8F`CcBjvr7XR kl D:w)`<8!*wCO?0oJ%YU;=F\Exgc"5i\qzL1ָ:xlűPJ4<{f ,.!c7nN9#a:ZBFN Y6hKw[Pc jX~`AQ*9X/cXZ~ ?cxwQqQ'rta7?K>x2Q>~1NkU ~֋U}Dew}tIOA>x]=9Z%21aU,Q)DyH[q{yomZ0,˜y eyP3o.?s~m8a8fG&Oa K| AZ1bb⃀D09 ui*4;t^c85 m\J|me``[ǏtWaeQ#ﺭd|1g| 0QbrW3$ !Br1C"Dj3oS@.ww޾(|f̻'ئӻgAW¹A9v'2 a+VBXƺhu>Cj)NWvLql9Pr$! 'j2[:ű4BQ%B3 a\F#ۈ? )OYlhN$#Uz{y)w>P/ʃ'T RG3FZ8UZ bTjS?6~VRI#6ڗfQ5kCK-0A 9BWMIAϕs`Ϟ-WHWzjĴl^w8mԥ[gֈn t{Vְ ݜC3N*H-'+2ÙYlDu!"MDTnXM<@(x;Ws/Z-^>1+W+ȇ{=8)4 95~6 ~݌: ^Mw?ͷAɯj6-}#\i':l|eGve-b3Kqleh~c%/Kvq1GYb Kj&5eTtH bj\uw-tӝj{%DtE ۘܮ5;ϢrtNND,{^nZ6pz8di89\lK n5O D2>snp$P@˃5dbG̈́G`[zf N ]zZIgАƒUEDB@n-*Ŕ(5?cD9a1CD AUD8C !7cL3&b 4i Y("$) JC T NRTb5@sz/jZTxKjaE"l̲??WyR7yc(C Rm)ϛ{&=yحL3B~K3Ftma?dOlӥ#8wX4eqJ*_?kgƙB/'#Tzqs6 e(bxaN1bn/RqpűRQ h,s>TB՘Fyk*+S /@z)4vl%gr~6|685Psu0ӗRʁMt8NХwg&;7==mb`Bf.I.Ɔlq-yõFهhK26?CXS#5+ }?^1xesxizkC.e{B>bJ3unm4znqJ׿hbW^ M_y/ *sɓ70k={*,ȾQn9꧐\05 Д<':h>s{ xҸ*耂cDO08`!w}q[wr2 v稴wDGZf0G-H8GEu׶+ p$BKXQ7.畇˷ [] (S  l.r»b!CNjy~gL BC4h Z!VQ" q">?QcA,LCdO&Iv[yCF[݃zҗ߶n!ݩq5"Ad!OױD}v>D;zKZdzd1owlr9u~'ZC2n޺sn R;|qZZvHzErkH+j}`:z qy/ذD 2f*qa*&RdD 1  (FfR*8yjI) (Ńj!Tu 0Xy?L OS&`<_f垜xgr눡&Iprrr>Ո],~oS$338Bg&% B5r))ӖMb*s.zI{6+Z49 eݽ asZBDh EՒdz[ ۡ^cjbPP: 3[F, !D=LG8zx>D6',vgBH=~|C1}bxf{΀LTD߼z]?'f]{}'攕}"jtWvu=I򫰱t7^_-cl/lO skxe.({ܕ^S^|#wlVN>?`&M:j[V.skFBr)G拁QGw5nxJ yͿnvkBBr)nqECnu[!4G mk7]Gք|"#Sp<ӬF $Cnu['mk7zGք|"%Sˈ7Tc g43FWnfLW7A|>W5乷} Ry~B+B/Oz{^ '\FW(d?[pX !xy~\J ~|Y)ՋqɊab*Rf3 !aH a$2>Q0\sQ;=r_'6K/)\z sEwۉGjD^'8gTsbBIًwϓvbDP/ Ff$Թ:3MT}RxDԞE)/J@_jƅfH{H_cWcr;"}9 L@LOo\mcSד %S*),ukz3/x<;3X_j)jypV8HoHF_qYC!T6f Ei}]Pr/s BV`6aڨ7,*]p(ӧxis ٽ*l|ߙ͹%*Yf~z(c_Z$TP )  &2#RdŬ|l&U[`j$z1 l:Ld\p;aϤ'_ld,[gOm6 (>;T𼂾ة:#nnNx7.>L7BB0![#QI:B&z 4Jj\Glˆށ [;%|ccUYZU&4??wVjUjӃ1MBEPaiX`&n(Ui@̎G IQS$r4I ;._Y$Nuw`նskwo6ѧz?btx~# 8tKj#Ώ X=q;(BRܻ\fd$b>ՁA2]bV~T8 }kWS7gO=4V;Y#dK 
i0B)ClԃZð]jwPн'4zgvJ6daCԶ!U50Gv}׻Lylk(lwNueu;\L= TJH^L"#qA=":RlD8Mmҵ?ǫ| 6*Wٻ]CU,js"<0fVHH I#bwq NLޡ&&KV()0+Fac%*# Cg%0ё1"$E2AIju|.=41yQvgcgF#\\z(Y5?5$ AJB܄,mJ9!G4⼳衏oНchS9<%os;95v Oh*<GػGn$Wx, î^,0YFg벤1F%%UuJ*CJ2⋃d#"|l^À!S |OGIutR!0,J:N aNIutE;⡉t&K.u`)Ov):u_)s(9`^IsBL ׸rl`$a[XmV(rKdw8>.gSUhUQNG7MNYi|g~QյqT _EǺPL$ѷ?;l6y{IYH"0nltdhȆhM:*T zh:"=cmy 0KnxIgKrܡCA-?~Fsvҟi(!Oe?_nR 1-,Nӵ~oݲfm݌LW3o ޖߖw(B*rCAoVs7&0GJ3')/Yo0u;KnM@;M;lc|s4- -c~BZܳMXҸpx՚Q'fՖ1֡ed2Ʈ;GTaqa<U& vY9ӞZ*ågխ}ξVXc;-8u`b08z1DU IJ]D\kdJk] LC2X︰( s`og0cnO)Je/ٟqz>=>6NތcNjGJv%hIsȑ颷׷S=Kۏ?ĿVw!"xwjG4lQ]E|ng&a^=rF@MAdhKD Rv+o^͗(eΠ xw/yö P!]JL8T ctr,pjXI%VRpōpB!<Bm  *<8lTź܉ZѢVl^ԔTPX,y8#5@X#Њh1Ñ.ZF 5&}^ވMt5gF54x R{FE࢐p΃̔nIWV}S9馼l$D\#3MԈ `!3Y ۸g k3 :RGN*bNlEH&*a}v4 `dڸX/91(64PNph"hdF(܌slvOz>?~xrb|-ycrmѿ&7isvs7/Np2WB븛ԏ[*DUO7s'3Jl!DJaCsx'Rot??w%6/G@rLf% 2^6 !R .p8S"qgIv峣":$`t aIX?uV_GyDZMX` &wcAMX3 09@Fc:Ӑ[Kx<{pU#o#ߑfRG6Q!EӝCy/0OÃ0-+.|w}>]RcچdP!ՓzϝVY{Hy+,}~Ylx_^5 0ۖ$VftCϿ6cBAN<̻E+d56ًcV)Qm| rɾjժn5|t, U ժG}bg L7,;v(B.g+>٤At~ 0,-re\d7 8g%y[ 9bpAW>c*zs_F?{oymf#/h!b_+gs"~TJ"zD*U]DWo]4R{&p'3= %gd4E{##Dϻ hY'Lnݓި@w+#X\?MS-WŘ_tQ2g&)pzvI.-gحKˌ+(FLőSN#.q 1q#625~ƉGB A1RȦ؄ V}ݦ~.W4AB[.Wȿ\w_NşB?l@;&EŮ\pp.IC^: }uf[N,9TBҪ)b-J(E,`MzPغeoV/(7:o}ǤYu/bdF}'" 'M=q[FUPlЊ1C*ôSQT1vyXۀ Rۀ#U(@=eɺ{Xze> go?zpPe$8OeXj=n@7FH;>۰TDW VwղlvsY"pr Iq\Ľq 'um8F;B Õ;̈ Ӣ :+uTsy qIDȿ"1Q]jhVNU^ kU*mu6M*UKfd*Bw_̱ 3]tlt`nF[E eh+ s[)  &uٸ KbWRX'6/vC\abv;eG-QkθT9k Y) Lsn,/@ *52ApD` !@H: Y%,TgBVѱ\,jMHjώl)c:h88gQ`܇"c5 'RPJsc-DБF,zl,9c8f1 F!.5 +MPzyT%= ֱz{y 328cV #l1 _ d9crBr,<^J S餫D| G= 1r&m8 I?o9"A BN +q[yM`]ꞟpIC7WQQ2kjweI%@0e@& (ɹYCo7璳2+F2sy:i)!Fs\'q:"oܟBÁ$8$.yPdm W|-[^3,'x299!';Abٺ?і%(.CVk%>fOƒ _!e0NR[eH{mW< QRqP#Qx Qbp|288`מ.]Ҩ p'N=[w!>' ' 9r(A ,p|EL"JcNw_PY,x8:t{dyH烏EF~1ZT1GqZhT [ ɂ VuԫzSRzU==.ebvs NcQD@19RxEB kjɖ5䦃/@)͛-`Z'Z~r%+W^XZ~_n|Y=8K|-R1y"D= &,sY`J-#Ќj% BAs@& ;-CaBt:HͭP'BhY]k@r ljn1bD) \IdR`X-RQBR *$zǭpcg(h\(cX`zˬB]4ڐW QXH GhׂƢ!3$zB|`+I4 Gg'()w>!DE+l3 Ѹz;s, @KF"%8VX4AQL3MRZ!JBTz*0qb8N&kPpٻqc%ٷKzNUl| A[]q^EɔM]S_n4Djen#[,K` b-qۛ"!LC"FD!L;# 
0m+\iaicŸK5Fs8O$Lc#$vWv&Sn̾t:4]\|gp4é"=1*C1z3 LBYC08ۜ/qߤ3cgջhf_]ZbV=փV}ݺ+=֏ߒo֏{k#thC==P<]ip8^lp 34+) { h80n'>3( )zDP,S THw}:A#quߏ7ZƕgN% jȏ4Щd #ȃ"j?"%4CSpsѢDQ*TH"EJJ0hu vRH]% Ŗ2# (\v8 з! 'M4a%kn#%!SY^4#I0ejkQC*F B`}U5$9;`؁:j.e"TM1%Tn4V3 JT H]IZ%&D_P HR 0APU ~ XFKKb,%+ޢNJF0#GK?U+&W@-m+Ɨ ,0-ƛLӇMh#ICi.}\Wy=X ֮XIșd-؇eFn36Rp-80'(sZT+pTb],cQbU{T9W <" /9怽/6?/ڋ^Ͻms/sv>FywS iVq=3;ؿ~$4_I|e6lUa _M޸Z?R"؄)2'D< HG\8s`(j>>yY(3>yI)v8MW [j|`adc>_UIU)9:/%|1sfUx&YP^DM$KO%+#هwWK?wtDbW+zntvf.lE7\ ?pNfw߾ 0v6 ޥ_ d00mKKVQc k[Qcj%%WwtH; y\z%'}UD#EO)C5!CD73'ԅbwh%OƉܜݙנ +4|W*yFI* ĉhoR~t<8%O1*v(X{wC?IúP3Rfo6.'X!,w3fDiٛN# p"HPr Aʺq:3[e/fgpryq# (ά7e1Ts004X3bjdXcC#c,(K:+%N|UÉFǧa 2?qNjE@W rՊ\!?l G1ݘ@wi&Rj[-4yVZ6? [mu* ~+X{AoZr=9ɠp28w ou`#H>Sآ Xp(R% \츠$'#Ft8 a;E=U n ^ :?̷4תowI,O8cjyYlᜳq9xy~ u ƣ7l|~+a>L 5.9G5grZ.G@)s5*f4,yZm)| :?g%"lgi:ou猩"5L\ 3[-3m+&7C&-푇痔Ç%Ҵ-OyvF3F[(y Aзnggfo&n ȑSg ښ%K\5m<;?t?x?\q`Ï3\@:ܒT3c>ȭmιR36#ru\,&1nPis6Gp2ps~ImCZsϊCa^=h=1 s_Cw;b!]3 LJFAiL|HC,]?ݲpyw6'`v%%n+ZŻUx4m8Ol8I51>&'q9WBU hZD#VV}Z}{]XiRd!"$/ A/)Q`[@3y?ǻ_NB\87o>}>;](cmΨO dgOC[jLx4|]݌h[)tG4nǣl?VǤ@=k) =d صo'ԱR܁bR&_rSwȗ9}'09+Ɩ ߌ|hD뜧wwkΞiuj5B<aWf7A(-d>3 ]© G|s d1/c6HfQ0|,,wJk M|&s0A_ӾU\h(!n.B8bLJ0^Xf {e6W.m3+#1 I81 ΘY&E9rue*7(JvljOgK1ϊ)DգRZR< Pvk4/D,Ja_UY,p:I&<9a!//(Ǎ$< ~'#Jhك5O G[ n aį>TCA<;%_S8e UtiaZdIe!ĘnWA~I55js_Z&9iTS4//8q%K*@La_C]y@ə'۟Y~y[΀ `2eX5 7&]|5 X }JTX} 'zE`MWp^sQu|LO3=p`0r% '!Vr7^nUnx`0Qch(.䎌 Aeac6J*0j_&~~ 7'q{N݁.EjC,yb12@[fcۿghp e o!WLn直ۃsK^vMUSj7^W!_qT WvC>CZ(濇Ls!:Ջ?z!%ig~kgNge f#7Yp&"N(d$A掄ʈ)'aak_K[']Fx0vEeXH 9-,ɒXBrϴ|P^1x,ҹ{i_)őeϗ+bNF!H=D1wn1ߨ*r!; zsqZ . *k0{8G-MGĺXH+fF`Ūo&$_'̖+|09hC02Nᯞ-Ͻ,Jݼ) 1BsElUa _M>RiHD\bXȜB "mLWw5_ARAFڨWSe+"¾Õչ$׳Aӳ9.OZN~ k@RWOheJK z 23‡KY^]J,TxYq.%/LѶ^=' 1?"(HDĎPGʐqVZlkq&(#,18RE NrAY/>քkt]LO ’SZox%J|W|% PL)8JI Hf-|K.8ezuUWw%YnǂI!+c2BFS}yd"X*b&1&F#(C\31(01! 
hcd1ז¤qELDHA!""ĀJW@Ξ;u\)Pews$ ; ME)I&Xk5NJ21 'JSད:HcB-w'#Q$ , (S` wm( ]4!2l{&{j|$mzRڬK=.FO7eE:]?B$U>X]jJe0H4zCBIb x>O V7,㱽Ig~Ys.U ^,ۣO֝\09<K޾Y?,} v?y0U "BD|==ft%!xW=x_e/ߙ'6 Vl'FP)-}w;]Xԩ$k_@p9a Yˌ*W b)"~l1NN/܂BLڵ17:6؆84d,"aL,/ `LK  '`\T{P "?v80! '4K f62[^ANJBN0O5UcI RkZ\L{`&]Lg%LHPu&̷srͿ jò5ڎƺ|8ki rt t3SھX^m4TCwWo~?}yu%n Etjš\qιXLcݠ(G)D66~tF+T+#ۉ}o F8Ν~z];:}L qwk-̇[lhK}cUV# NW dP4ul)` v2uHs Pj $)c&\XJI#.b7iBgZ㸑2^dmX;b]l6֮,)žEkz="؁[vWbyI"|Qp|OZ7˟MB+ ĔJ0%Px6)0ϵfTBiu%a9ʂe}q1pQ(>A&HnjStԐk((Z5²DFH|Q+Z@5B%e-?ebb9z<n.qc^in JP0%y(PH.*+$Ga#c9v51[%( ;dM1a7;U:E Zsw/B1TS`{rj%͖j\:>\xDۄnOIKƠry1E;֔WzQ ^e+<]-W18 vC){WT^U?os/vswNbfPj -D__5X|z\DdۄM7aOR1wh |a:ݒ'\DSd9ΡvSFAb":nZ?3#)KA~{lEuNTWr$gJM U/RmD$5`HAN̉#Y#s(g) %c=}nOŕ$#[w i) z []X$SI\sˈ#F@KȽ FբbN+ sL=\Ot!Q>^-䛍,$"zOuxv-5 '2ysDH$ѱ [i/6K5 鸪AV5#nba{, (3&$ᶬ=İGOSM!qI%QS°2#\GdQm6]{۱GgQ*NrG44# v6>:>i b^,|7830m)p:+nZ\;;B\T On0Q` z0 ۠p."݅ ck5(&2}!Ms2*BhsM՗T3:wG)ՒE˧'ÜcgOtýs TG-b#(P:4A-1Zp_:0e-ˋ< =yGM6dM5&vP]jq`qFt7 %LL!wc(ٙZdJw}is%#&un}i+Щsq`\hL5s4'nN;bۄ TѴ[z4~vCBrݓ-^4;6Trm-ɵW s xOw'+;(i#C~ў 3"{UwXk9G k$zB5z}`D E}w :RE5j-o.<(Dw| ?aHư|4GuJ"R!nO#Ҕ7G@U`&Ru>W=Cz`m0v=$+$h9nsJz"AwD'o\ vm4o{B+10R-7w93\Fr>1FQhD&T(h*g~ʌġDpQ 1Z]XzL-1^ϵ"eHTY#CrY%!aPLgTS7SֲI JՠC㉸ϐ!qa}F- ~r TDVF㹻kD [1kȥQOX%O]nLq{ւ$'<&/!l0~Ozxlp5B>lӣ38}e;DpTᡐ\u&̯vLmUgb&ʆu(O(`fϋ}E{)2Jk-v *sT-TgԓqG, ).u9cE\$6ń^5\8Iwa'$r}pp.*j4a_YՑ)[UqA1Hr?`QiK"5Q$ϻ%j߼Y$*~p&WRXn3*ȸb2 \Hie%,FSU7KGq8-)hTsRCmLD\Clj2ŸDQ0j˗lE8%YH(2;I% ؉z"*<éRR0.* -g.zU(ч߽zA>Hiu0욬N/Diw)yL|VKWّ-Mah)XyYmJrZ!VykoVRx lUR{b}9Kc&doja˫7N5#,O$ǜ%Ǝ? IdEdCȞYѰabDB٬$4*9do}NW*f09>} GXBl*ju€ftͽ$)35\%fa{dSn=GgZd\)L*&$sL#2Ԙl84#Y,9%!f4z!$dCTsIo+_DҩYS H~]}*\- tׯt %59驡 ןuFd2%gYGH.\{- C9%9LL3W4`pʥHLOZKrYfq\M sŗkwuHEL򠻻^Iurb2Θ̴"ag|[9=r .$0|*t6 K"\Vw5|ДJ&p%__ ,jѯkTX^i,˕/O2ľ$]%*}הn!)ÓɼtB e$ dZ VhPJ+ *[wR_b9uqlJNWEyU$X4B'.ֻ]vAfml}YGZ[qI($TIhQ^r&9JJf|y#A#+4S%3MLCwUz/x{wmW}~WSsHc^+em^k _r ϛˈ/iM%%dM`%\z}۫ǻπ˚W"Uoz~0Cd~Jςwr&#;~z+V1Hۋ/EEJl> *@ XD0 ."7}nZ/-n1Fihkz~K|ۇ? M݋).(^/ x^]aWzrwO_,.AlﯯP\?m,_? . 
=> &~o @ixφ{ RRgy~Zo \~eŵm'W ՎpN?󦪗xQ !{SsBFZ";@8^,8տ+>\gun_<{pw-7o'Hx1 ^(0bb^ztE@O޺)yRPz]HMGO]Re%ENQk׆V6j=PS!} nz>:3}{y͒gx DB)ʹ*uUJh9}SVqLgmQquX1C=W YC 9S9P2Nj֣]I"2WxO>]!hF\W9;9/wr_NRi j9qThJRyS0k\Sk| O],(G{Im^P(вTs nrJl8j imɘ6 Ux0 6 CiMUJjO_"۷׏l(_:Ī:gX:cf0F.b;LlO0K4Ԡ2R;djGA t@ 4 t)RLABT(EB䥗9Sؒ>l*6uQ垨wL<#Z4o̦dPl/2@#1nQHkBGKw [1g 3K(G[C )65Nq -Am;ghmn%} G327}V˺̲W|t9ͅrm7}be*옾2p[-aSY_Qg?k+Hk#UgSMקRWOЍ_~ n~zt.'1Uվ:Fq2磳ˋ{!NAϋCF};k$?Idpō@;{(cKfv4F\h|L/8L/]06c `$Iݤ(QՔ4Vwem$0^+Pl۳ax_4SkPٍx4q}bB1$fu֗YyTefł5y?O~-Ч|=鼼wCDiukCUݶہ8pՐwlkBILW?=q:{H}Ub#_e-j=/G-#CTOfs oE{c?L1͛ :N-+hfcoӯyar{SO_Oձӕ5)UD{vZ-dWo;F ?4ͲXX|**57l lisPlH^4M(dš>.ǁ!áCikڑɮef`sÒqbELXdHk矿gG1b;))5е'"}]~^U+ b-Sĵ x{S,j j =A}^"m(Ka-C_J%@Z+^|q Zk7ta|tɛ^@+Ssy>9ᖂʮe!tFN}ec id\sSX4IDt+o,VANPtXҺZ0+B[m:ݯHLf~BHdϢ'P7|(QkJ?,Zy׎ e">񚵹ILDa]7ș(Ɋ\y9 lTe%l0;WҤ>mgߞ_|xxE.}bL+N`.&/ Or#8*)oA1+Od.&LǓ/6O^ LS$jm0 5 iy0[EIdUBPI2 O,^mxclT2F-f*{|ap3ȍ<{A k>o4BxdV]k?*gWNew5;aN!b wWvS?֒ўZx=igw5b}mL]`b{XgO.l~}}]_SB%:>i_р[Q{ԛ!AG]574Aw LP7L4e庼gWzkXikH7mJR!z%z <7O'[O]yfI6hߞO!N=o*1~lB?y$2F?XLxM<#4"&_0_ϋ4{v?ɧsF[\uXZ[ۅ,D)kOqVct!~H?.S.e FݩPo]wR@V~Y5I{agN2}4<4OܐWڄ+S;h0ZZ4A}d۠ojux媻{]e@ebܥUٻRH?}Uގd֊hEUDϷq'.ժf­'>}| %ۖFwR_70˗.?N("+l%s*A" Pmxc2khQJ^WNqq_؉Ghov⍐18r?D8SK|BcIea󢓐`(TBqYY\E fٞ]0@=Mn+8QN^;A`%Wg C 8h_7ltY;}RRTQRQ!!U ֢2T> F{d*yfDE~g/"!G>Xu̵TN,lH؁1re|Cn)a (:w4dQ=[oDvPڱ ^qѤbAzLj}#j&yoS7@2U gliEcliPi!*tDڕ*[E"lJJXAN`Tx3kIyZ(ژC?k81D*K&%ZIk wL0剗*B=3<1LE#GV[E0F# !kף5h-Sr&V%T(GC[2 ZJf < E,M&IIҠ$7lSЀl "C+TZKtbPRMJtt V޼t!!"Z.S])9}6JOOCUF} 98YIWG~\bB7 ~̙}92g]3F@iEgGs%1Qr 1ЅAy-FQ'8C&ѭj=kχyC~CEh.#/*OھكD,3A.g,Cs Qµ趰 뗬X%"+̢Q}kCb$Eg4N>dF$!I !@Mh%[_n'_dѨbQM9~OV_}㧍MD;_$&v9߬CJymJgpM4˞|v蹓7՗/ ]zjx;'EmeRVp@mC ]l䥻{! 
9wYض%#kt4zEc['_0¦ɳ3?9Aoe\_E] (5Gy0Vڀ% hhHH CA:MtRd &6D!rbD P:ٓ.6֣3P,bGrM(8Y-pЏ**I7@vck8 AEoa22qP "ƘC[JySHpVIjַ^ _ߞE|kҋ2/ӹxw{oEE~pyb?<}G xY?xt$ $c?>>vK\jٽW22~Ɋ~'g?:t ;BwLJ"yϮ)#@=?ٟ* 2?xPhMС}onpI% `UF(<-&m^|; #{bq2!l]esT2!9~7}ߦK\<9 9ȶ;Ele[ Z֗&XFxǛt6V6Q v*+X2 46nߤ+a&-44v6ɞ Ychn"QBay] T+ds62A5be<:\nkEz>5JBTKJPypZͩdTCa+W_Fwt(UpFMXoQ0 2pʻ< 5{hr*Y'4PhAΔ ͓KhY, 5 )f3k<xEiuS@kD6&'n52]P+}B-Thp1/`iA5H5Ftq'dP( 㽤#PF C="A%N,S]}a^QFTpvw$Gc+|ZG{>)C)&N<:WGn<㤹PIQU=kIB2N&j%/XدQ H " )⇛ې5@wLZ(. 1B^ Q#F.PLRRA941^i6BMWJOh(Gu+2@a  ?jWZO|Fkǯ,EXLDt/gښJ_QecgPeU~ݍ阞ٗ2|HAVC旕SY! '{#)!'9r &p} ';dk3l%>S&+H8l /e;PcMYu=ZBMKx܁t#SZlUQ-3Ie[n5zBh^ӵFR+\uOw쇳/?ٓBۢT`M+//Za@4d@JZǀjȀIiAA{'d,G9=L[J{07O|%9|F_38 ?{n|9&_m y3Kh9+B)nՁ?RT{RM!ruu{N/Nӵ`W7tٛ[nq8YW؆mǺ1k%;*?G4|ҎZ+;j~(p6_bp_6\ots ןzWG_jVܜmFq'v2ZiDߙmxgeyɥI5Av؃W=_28':$˞ 螶n ?6>ji7gpc܁{}8@W~RRrTEe=lM]AiH|VI"]Da$ z151lcN5:сmrHlJ^WX1ۯJ:5R>Ar8SgeD@AM!* <2?a6LPHZuCɴyVH ra`DrWvHsLĤga57}@x]m(ox- C_@hc/I ]Ԡa0J g4 wa0WiaQIFWSdy!կz0:jՖa0:4>XZRa1*VYfLːe! ʹܥFdC V+Ly2 Z";śy)?- Jiz ͬPja3AJz w( `G?ˏز_SɚLM(C}an9 K;\).姏@Ҽ+.~ws1MQo7LdRԉ2efmD^!:יw~X>G< 9!҇<eV%,ZO_ǣ).$ױ[22QsԶj78`de;mȾqr8:i֫dTK_FJP2L2@ dJ|[ 2Y = b  ӽQH΢H ?"V,I:2;Zd<ݒGO'bP 8Pd\, Z_LlvPQ J !-0R+PA[ң 9z#w~]q39{E+t()(:2Hw w=YOa%h4OWy!Suy9+!k@KgOGPQ`y21֣)N!rgt2dDgw/|CQ|P):>~%IYvL$5dXum);q)ѤDO} bdV7ش*]kOM @2[a>`"gD!?i-3I"Ʒ,{$gäE( yP -[V w %m3|BH֙>+1ST"o' GT,;݇u'LZ9xrM;"'%(#rIR(RDPGP] l >)% EPMQHg2@Ifֆ2"N#n7)*@rQ%Z2l);,9R#VbM?r]0`bb_ ["qHF dı F]H=Z JR(~:6oJhK}(YJb\H#B,E$tJxW9J}1:kϥɻ@ @Aީ#"12*nVuF@ ʤM6BI=EI."-) γ, 0j4(WԳǢ(DcMS ^o'~y;EiRc0 2!<3i%4WL56ATtCFɈ~Aȷ9H6A[CWN%g} =2=I&Zj*Q=UBH`zMyԱp.  =4ovC\e7Q&^(E <& VD|{$Qe뇲Άk٤i' 2ѧt<#-fh=&l0-⃽G.kT'n?6}w:~6`r!Q͞N|_W*31 J ARJ< V)\BmowA∕QWX|f_" F4p%oZpK>э두='zo|ʾuh875=^$sYњ0~YP w8kN׻pn]\4.~6inKޟ'U}<.u= t_[آ*Vh{ [F >ӧd{XaphIAF3vk5+guMix !FJ =Uj($S2fiK*ݍ akBF {f䱅k He^LJ)1+,)pXdAz9s2.9v ی}a{D;\b 3 jmw]L"8z< HCP,S^¯oXQ -т#S|Hh&fr)}'}<3)E)QB37;u r?0mGȒSWJ0#j*'6kZZYŶ[mZ6|@+P:Ǖv? 
ŸOx,\զE/JnM͞%x *{d+P\*g^x?2OOW_pDlDPRT2CY:+x;=ʃ&߰u3{wft4Ζbo\,F97g׮öF"z;2ULmƷlЫvJwTACgMVq}eЕ~Rjr5;{(={ˁ,]iME?L(•Q. zRqkszn$>Ozؤ>3vrp>Dr2sREeFUCmV_6n(~NT [^,>KwP`Ss\~=qnc/Eej-6-C[7qVc߿r9?~:'2Ζn8.}<#g7*=]k8bg觳S= CnhvļKx F#UkԊ[nOI0g3 tߔV]c) LBʔg?_.*K+~Z[oS3TBA9ILISa^CaclV(vVNX~EdQ:/=޼Hs7Ouwi^c~yiYP`֤Gv2:DtFprKqHa,;xaz Ɯv}s|P&L8z}0ֆ.ys;@7<J4Ҭjȟgoj|ր)9ټ-:p?4՝R^DgiUV\6AԹV4uiͯ($Z,O?T1<[Vyy;A>{O,bo1J{VTy^DZHCqSd7{' [_tQźbe^u4pG6|*K:uOu˃.X"Ѻ?Ѻ !߸nTǂP J`ݦ_*H $Ŏ-N,*{6NΚ/r-IcE,,kKN+CHbˆdT; aO$|A`D1EC}T hzR&68dId{PRDԂ.hcDsm ! TtX~ꎇ jl黥6Ij]Ӹ<$Ed\W)Ũ,ZTSf*Ր%lhO&,OtGϋu6 :6+ahk @< */?z?l|ĉ`(`yx{gP'mݒ|ÕT}.Öl |D qW/2gxLpg%pǝݛEbrH˓61 (9x)D{QE5? *B$~'('X "B27v<Ըqܗ:)AZ|3?O jt}?yrYBs9^T1<~Y:s0]  /DaB" Z y_S@VE-2Myb&ꔂ 4\ڋIyԎɈd)f֓I@Qg)ubm0Z cم]&匌T TE4ɜ0piqqoXbKxqw2?'#*k9T|99%b0ĔI{{ظ}sRbgL֞ JjZXW -қSB +nwTV>- Zȿ :C3 s'_61 2⽟\g8F(|s9Ѡ>V3^ĵ)Ɍ?/v⇐LJi}(6D(R{$`6/ h&hSWkr@!~,YtM&\4x󑓲PXaN!,& r.duZ$œ)(@avP#4K@N-}!Ie`ApGWV{G⸸]./Y^.ͮ3 r$:KGz+:+jX^ϝG!:Bfj]uX^7-AFVyMi C.|Qόuq>X])%Ǥ1S@/z5ʞ(Y=%jN *\ޤvС-jUy :F{tl„I΅S”('JP#j:/n[#@sSPV&&""Z7ZJLyГ69|Nx-fIgAeʣP*wZW w8͞zTOHs%N"*FrfA"Wȹw|er]'q/ '>=X9x9%>HQHgcaFX27hS> jc \''ǔ Et)C@k ^?7TN:h2Щ}G~JO6flt RL L/=c؎2_$)򃿞5e WIa7&bEW|`Q`0˝df.xJR1Qp"uN9'EUŎ((=ikJWïm]1 SbPcՙol+k/b%Goy'"@Y (JPD?`W$fL%}4^,RD[O*wBJGJ Q2xkr *hރj! HP30Ip ) Iq,^LFw1'y2YZ{RM/J,5?E(Tp E `!sgQY@TQ)ZVT+%[e;QO2HH+_B,~*_Ŧ/9"1M.0012c(Ye M{"#G347IoQ;mOyM0O}v< ]\Z^ۯ:WyqS&k)>~~ʵ٪}ϳZgvf˛3/~{WW>ޏB5W k݊4gxsuˇ(=realVX/?`2ߵK9 R4ͷP&V$f% G  (drt;bī ($wM[wS=S3X>~Zu]X\w>jAwKXǻa^گon'w+M[/KYY$ͣP=9ϵ~)ZiFl=V}ُ? 
Mar 09 18:24:38 crc kubenswrapper[4821]: I0309 18:24:38.622092 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:47518->192.168.126.11:17697: read: connection reset by peer"
Mar 09 18:24:39 crc
kubenswrapper[4821]: I0309 18:24:39.482723 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:39Z is after 2026-02-23T05:33:13Z
Mar 09 18:24:39 crc kubenswrapper[4821]: I0309 18:24:39.666393 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Mar 09 18:24:39 crc kubenswrapper[4821]: I0309 18:24:39.668140 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Mar 09 18:24:39 crc kubenswrapper[4821]: I0309 18:24:39.672515 4821 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="98792320bbc9da0e5b7ecc13b9fe653ef2af8d731658821f3ed2421d7f4a6cbe" exitCode=255
Mar 09 18:24:39 crc kubenswrapper[4821]: I0309 18:24:39.672582 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"98792320bbc9da0e5b7ecc13b9fe653ef2af8d731658821f3ed2421d7f4a6cbe"}
Mar 09 18:24:39 crc kubenswrapper[4821]: I0309 18:24:39.672648 4821 scope.go:117] "RemoveContainer" containerID="aa9619e9b01836c844fc131ca7cf9f1af8404d66341dddc47cc95b2ff23f211d"
Mar 09 18:24:39 crc kubenswrapper[4821]: I0309 18:24:39.672837 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 18:24:39 crc kubenswrapper[4821]: I0309 18:24:39.674401 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:24:39 crc kubenswrapper[4821]: I0309 18:24:39.674447 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:24:39 crc kubenswrapper[4821]: I0309 18:24:39.674461 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:24:39 crc kubenswrapper[4821]: I0309 18:24:39.675036 4821 scope.go:117] "RemoveContainer" containerID="98792320bbc9da0e5b7ecc13b9fe653ef2af8d731658821f3ed2421d7f4a6cbe"
Mar 09 18:24:39 crc kubenswrapper[4821]: E0309 18:24:39.675233 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 09 18:24:40 crc kubenswrapper[4821]: I0309 18:24:40.482510 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:40Z is after 2026-02-23T05:33:13Z
Mar 09 18:24:40 crc kubenswrapper[4821]: I0309 18:24:40.676654 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Mar 09 18:24:40 crc kubenswrapper[4821]: W0309 18:24:40.808783 4821 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:40Z is after 2026-02-23T05:33:13Z
Mar 09 18:24:40 crc kubenswrapper[4821]: E0309 18:24:40.808873 4821 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:40Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 09 18:24:41 crc kubenswrapper[4821]: I0309 18:24:41.483746 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:41Z is after 2026-02-23T05:33:13Z
Mar 09 18:24:41 crc kubenswrapper[4821]: I0309 18:24:41.572212 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 09 18:24:41 crc kubenswrapper[4821]: I0309 18:24:41.572728 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 18:24:41 crc kubenswrapper[4821]: I0309 18:24:41.574610 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:24:41 crc kubenswrapper[4821]: I0309 18:24:41.574655 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:24:41 crc kubenswrapper[4821]: I0309 18:24:41.574677 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:24:41 crc kubenswrapper[4821]: I0309 18:24:41.575562 4821 scope.go:117] "RemoveContainer" containerID="98792320bbc9da0e5b7ecc13b9fe653ef2af8d731658821f3ed2421d7f4a6cbe"
Mar 09 18:24:41 crc kubenswrapper[4821]: E0309 18:24:41.575870 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 09 18:24:41 crc kubenswrapper[4821]: I0309 18:24:41.577236 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 09 18:24:41 crc kubenswrapper[4821]: I0309 18:24:41.681346 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 18:24:41 crc kubenswrapper[4821]: I0309 18:24:41.682302 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:24:41 crc kubenswrapper[4821]: I0309 18:24:41.682409 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:24:41 crc kubenswrapper[4821]: I0309 18:24:41.682441 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:24:41 crc kubenswrapper[4821]: I0309 18:24:41.683365 4821 scope.go:117] "RemoveContainer" containerID="98792320bbc9da0e5b7ecc13b9fe653ef2af8d731658821f3ed2421d7f4a6cbe"
Mar 09 18:24:41 crc kubenswrapper[4821]: E0309 18:24:41.683707 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 09 18:24:42 crc kubenswrapper[4821]: I0309 18:24:42.482677 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:42Z is after 2026-02-23T05:33:13Z
Mar 09 18:24:43 crc kubenswrapper[4821]: I0309 18:24:43.484036 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:43Z is after 2026-02-23T05:33:13Z
Mar 09 18:24:43 crc kubenswrapper[4821]: E0309 18:24:43.622936 4821 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 09 18:24:43 crc kubenswrapper[4821]: I0309 18:24:43.680355 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Mar 09 18:24:43 crc kubenswrapper[4821]: I0309 18:24:43.680671 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 18:24:43 crc kubenswrapper[4821]: I0309 18:24:43.682165 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:24:43 crc kubenswrapper[4821]: I0309 18:24:43.682237 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:24:43 crc kubenswrapper[4821]: I0309 18:24:43.682272 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:24:43 crc kubenswrapper[4821]: I0309 18:24:43.695521 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Mar 09 18:24:43 crc kubenswrapper[4821]: I0309 18:24:43.695674 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 18:24:43 crc kubenswrapper[4821]: I0309 18:24:43.696966 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:24:43 crc kubenswrapper[4821]: I0309 18:24:43.697027 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:24:43 crc kubenswrapper[4821]: I0309 18:24:43.697051 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:24:44 crc kubenswrapper[4821]: I0309 18:24:44.484804 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:44Z is after 2026-02-23T05:33:13Z
Mar 09 18:24:44 crc kubenswrapper[4821]: I0309 18:24:44.534719 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 18:24:44 crc kubenswrapper[4821]: I0309 18:24:44.536183 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:24:44 crc kubenswrapper[4821]: I0309 18:24:44.536227 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:24:44 crc kubenswrapper[4821]: I0309 18:24:44.536239 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:24:44 crc kubenswrapper[4821]: I0309 18:24:44.536278 4821 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 09 18:24:44 crc kubenswrapper[4821]: E0309 18:24:44.540512 4821 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:44Z is after 2026-02-23T05:33:13Z" node="crc"
Mar 09 18:24:44 crc kubenswrapper[4821]: E0309 18:24:44.542688 4821 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:44Z is after 2026-02-23T05:33:13Z" interval="7s"
Mar 09 18:24:44 crc kubenswrapper[4821]: W0309 18:24:44.599148 4821 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:44Z is after 2026-02-23T05:33:13Z
Mar 09 18:24:44 crc kubenswrapper[4821]: E0309 18:24:44.599252 4821 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:44Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 09 18:24:44 crc kubenswrapper[4821]: W0309 18:24:44.665434 4821 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:44Z is after 2026-02-23T05:33:13Z
Mar 09 18:24:44 crc kubenswrapper[4821]: E0309 18:24:44.665544 4821 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:44Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 09 18:24:44 crc kubenswrapper[4821]: W0309 18:24:44.765787 4821 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:44Z is after 2026-02-23T05:33:13Z
Mar 09 18:24:44 crc kubenswrapper[4821]: E0309 18:24:44.765923 4821 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:44Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 09 18:24:45 crc kubenswrapper[4821]: I0309 18:24:45.484791 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:45Z is after 2026-02-23T05:33:13Z
Mar 09 18:24:46 crc kubenswrapper[4821]: I0309 18:24:46.482645 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:46Z is after 2026-02-23T05:33:13Z
Mar 09 18:24:46 crc kubenswrapper[4821]: I0309 18:24:46.533545 4821 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 09 18:24:46 crc kubenswrapper[4821]: E0309 18:24:46.537584 4821 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:46Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 09 18:24:46 crc kubenswrapper[4821]: I0309 18:24:46.678667 4821 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 09 18:24:46 crc kubenswrapper[4821]: I0309 18:24:46.678781 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 09 18:24:47 crc kubenswrapper[4821]: W0309 18:24:47.253435 4821 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:47Z is after 2026-02-23T05:33:13Z
Mar 09 18:24:47 crc kubenswrapper[4821]: E0309 18:24:47.253535 4821 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:47Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 09 18:24:47 crc kubenswrapper[4821]: I0309 18:24:47.484198 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:47Z is after 2026-02-23T05:33:13Z
Mar 09 18:24:47 crc kubenswrapper[4821]: I0309 18:24:47.838532 4821 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 09 18:24:47 crc kubenswrapper[4821]: I0309 18:24:47.838914 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 18:24:47 crc kubenswrapper[4821]: I0309 18:24:47.840666 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:24:47 crc kubenswrapper[4821]: I0309 18:24:47.840778 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:24:47 crc kubenswrapper[4821]: I0309 18:24:47.840836 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:24:47 crc kubenswrapper[4821]: I0309 18:24:47.841784 4821 scope.go:117] "RemoveContainer" containerID="98792320bbc9da0e5b7ecc13b9fe653ef2af8d731658821f3ed2421d7f4a6cbe"
Mar 09 18:24:47 crc kubenswrapper[4821]: E0309 18:24:47.842313 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 09 18:24:48 crc kubenswrapper[4821]: E0309 18:24:48.139060 4821 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:48Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189b3f782081b981 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:23.474420097 +0000 UTC m=+0.635796023,LastTimestamp:2026-03-09 18:24:23.474420097 +0000 UTC m=+0.635796023,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 09 18:24:48 crc kubenswrapper[4821]: I0309 18:24:48.484531 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:48Z is after 2026-02-23T05:33:13Z
Mar 09 18:24:49 crc kubenswrapper[4821]: I0309 18:24:49.484302 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:49Z is after 2026-02-23T05:33:13Z
Mar 09 18:24:50 crc kubenswrapper[4821]: I0309 18:24:50.484594 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:50Z is after 2026-02-23T05:33:13Z
Mar 09 18:24:51 crc kubenswrapper[4821]: I0309 18:24:51.484548 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:51Z is after 2026-02-23T05:33:13Z
Mar 09 18:24:51 crc kubenswrapper[4821]: I0309 18:24:51.541309 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 18:24:51 crc kubenswrapper[4821]: I0309 18:24:51.543263 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:24:51 crc kubenswrapper[4821]: I0309 18:24:51.543335 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:24:51 crc kubenswrapper[4821]: I0309 18:24:51.543352 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:24:51 crc kubenswrapper[4821]: I0309 18:24:51.543458 4821 kubelet_node_status.go:76] 
"Attempting to register node" node="crc"
Mar 09 18:24:51 crc kubenswrapper[4821]: E0309 18:24:51.549517 4821 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:51Z is after 2026-02-23T05:33:13Z" interval="7s"
Mar 09 18:24:51 crc kubenswrapper[4821]: E0309 18:24:51.549917 4821 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:51Z is after 2026-02-23T05:33:13Z" node="crc"
Mar 09 18:24:52 crc kubenswrapper[4821]: I0309 18:24:52.483330 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:52Z is after 2026-02-23T05:33:13Z
Mar 09 18:24:53 crc kubenswrapper[4821]: I0309 18:24:53.484139 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:53Z is after 2026-02-23T05:33:13Z
Mar 09 18:24:53 crc kubenswrapper[4821]: E0309 18:24:53.623100 4821 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 09 18:24:54 crc kubenswrapper[4821]: I0309 18:24:54.484096 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:54Z is after 2026-02-23T05:33:13Z
Mar 09 18:24:55 crc kubenswrapper[4821]: I0309 18:24:55.483777 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:55Z is after 2026-02-23T05:33:13Z
Mar 09 18:24:56 crc kubenswrapper[4821]: I0309 18:24:56.224199 4821 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:33520->192.168.126.11:10357: read: connection reset by peer" start-of-body=
Mar 09 18:24:56 crc kubenswrapper[4821]: I0309 18:24:56.224280 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:33520->192.168.126.11:10357: read: connection reset by peer"
Mar 09 18:24:56 crc kubenswrapper[4821]: I0309 18:24:56.224385 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 09 18:24:56 crc kubenswrapper[4821]: I0309 18:24:56.224575 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 18:24:56 crc kubenswrapper[4821]: I0309 18:24:56.226370 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:24:56 crc kubenswrapper[4821]: I0309 18:24:56.226425 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:24:56 crc kubenswrapper[4821]: I0309 18:24:56.226442 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:24:56 crc kubenswrapper[4821]: I0309 18:24:56.227123 4821 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"339c14ffd3fdc5b3377a73069f4ada2bbb0470002cb2adbe540e5c52449e7f5e"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Mar 09 18:24:56 crc kubenswrapper[4821]: I0309 18:24:56.227392 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://339c14ffd3fdc5b3377a73069f4ada2bbb0470002cb2adbe540e5c52449e7f5e" gracePeriod=30
Mar 09 18:24:56 crc kubenswrapper[4821]: I0309 18:24:56.482554 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:56Z is after 2026-02-23T05:33:13Z
Mar 09 18:24:56 crc kubenswrapper[4821]: I0309 18:24:56.719621 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log"
Mar 09 18:24:56 crc kubenswrapper[4821]: I0309 18:24:56.720111 4821 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="339c14ffd3fdc5b3377a73069f4ada2bbb0470002cb2adbe540e5c52449e7f5e" exitCode=255
Mar 09 18:24:56 crc kubenswrapper[4821]: I0309 18:24:56.720170 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"339c14ffd3fdc5b3377a73069f4ada2bbb0470002cb2adbe540e5c52449e7f5e"}
Mar 09 18:24:57 crc kubenswrapper[4821]: I0309 18:24:57.484450 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:57Z is after 2026-02-23T05:33:13Z
Mar 09 18:24:57 crc kubenswrapper[4821]: I0309 18:24:57.726823 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log"
Mar 09 18:24:57 crc kubenswrapper[4821]: I0309 18:24:57.727557 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ee773d19ae0091661f56157410437678fcb5f7213b187831146af99d8d76b555"}
Mar 09 18:24:57 crc kubenswrapper[4821]: I0309 18:24:57.727710 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 18:24:57 crc kubenswrapper[4821]: I0309 18:24:57.728935 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:24:57 crc kubenswrapper[4821]: I0309 18:24:57.728988 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:24:57 crc kubenswrapper[4821]: I0309 18:24:57.729004 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:24:58 crc kubenswrapper[4821]: E0309 18:24:58.143189 4821 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:58Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189b3f782081b981 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:23.474420097 +0000 UTC m=+0.635796023,LastTimestamp:2026-03-09 18:24:23.474420097 +0000 UTC m=+0.635796023,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 09 18:24:58 crc kubenswrapper[4821]: I0309 18:24:58.482362 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:58Z is after 2026-02-23T05:33:13Z
Mar 09 18:24:58 crc kubenswrapper[4821]: I0309 18:24:58.550643 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 18:24:58 crc kubenswrapper[4821]: I0309 18:24:58.552345 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:24:58 crc kubenswrapper[4821]: I0309 18:24:58.552386 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:24:58 crc kubenswrapper[4821]: I0309 18:24:58.552399 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:24:58 crc kubenswrapper[4821]: I0309 18:24:58.552429 4821 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 09 18:24:58 crc kubenswrapper[4821]: E0309 18:24:58.555306 4821 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:58Z is after 2026-02-23T05:33:13Z" interval="7s"
Mar 09 18:24:58 crc kubenswrapper[4821]: E0309 18:24:58.558585 4821 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:58Z is after 2026-02-23T05:33:13Z" node="crc"
Mar 09 18:24:58 crc kubenswrapper[4821]: I0309 18:24:58.730926 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 18:24:58 crc kubenswrapper[4821]: I0309 18:24:58.731982 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:24:58 crc kubenswrapper[4821]: I0309 18:24:58.732090 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:24:58 crc kubenswrapper[4821]: I0309 18:24:58.732119 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:24:59 crc kubenswrapper[4821]: I0309 18:24:59.481913 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:59Z is after 2026-02-23T05:33:13Z Mar 09 18:25:00 crc kubenswrapper[4821]: I0309 18:25:00.482949 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:25:00Z is after 2026-02-23T05:33:13Z Mar 09 18:25:01 crc kubenswrapper[4821]: I0309 18:25:01.484414 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:25:01Z is after 2026-02-23T05:33:13Z Mar 09 18:25:01 crc kubenswrapper[4821]: I0309 18:25:01.551646 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 18:25:01 crc kubenswrapper[4821]: I0309 18:25:01.553966 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:01 crc kubenswrapper[4821]: I0309 18:25:01.554015 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:01 crc kubenswrapper[4821]: I0309 18:25:01.554027 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:01 crc kubenswrapper[4821]: I0309 18:25:01.554611 4821 scope.go:117] "RemoveContainer" containerID="98792320bbc9da0e5b7ecc13b9fe653ef2af8d731658821f3ed2421d7f4a6cbe" Mar 09 18:25:02 crc kubenswrapper[4821]: I0309 18:25:02.483504 4821 csi_plugin.go:884] Failed to contact 
API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:25:02Z is after 2026-02-23T05:33:13Z Mar 09 18:25:02 crc kubenswrapper[4821]: I0309 18:25:02.748513 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 09 18:25:02 crc kubenswrapper[4821]: I0309 18:25:02.752068 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"275cc49368dee0e778a85ae0827282b922601defe95bfa266bd9fc3e611a881b"} Mar 09 18:25:02 crc kubenswrapper[4821]: I0309 18:25:02.752358 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 18:25:02 crc kubenswrapper[4821]: I0309 18:25:02.753658 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:02 crc kubenswrapper[4821]: I0309 18:25:02.753732 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:02 crc kubenswrapper[4821]: I0309 18:25:02.753751 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:03 crc kubenswrapper[4821]: I0309 18:25:03.423070 4821 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 09 18:25:03 crc kubenswrapper[4821]: E0309 18:25:03.427380 4821 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:25:03Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 09 18:25:03 crc kubenswrapper[4821]: E0309 18:25:03.428636 4821 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition" logger="UnhandledError" Mar 09 18:25:03 crc kubenswrapper[4821]: I0309 18:25:03.482236 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:25:03Z is after 2026-02-23T05:33:13Z Mar 09 18:25:03 crc kubenswrapper[4821]: E0309 18:25:03.623262 4821 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 09 18:25:03 crc kubenswrapper[4821]: I0309 18:25:03.677994 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 09 18:25:03 crc kubenswrapper[4821]: I0309 18:25:03.678210 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 18:25:03 crc kubenswrapper[4821]: I0309 18:25:03.679633 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:03 crc kubenswrapper[4821]: I0309 18:25:03.679681 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:03 crc kubenswrapper[4821]: I0309 18:25:03.679693 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Mar 09 18:25:03 crc kubenswrapper[4821]: W0309 18:25:03.739147 4821 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:25:03Z is after 2026-02-23T05:33:13Z Mar 09 18:25:03 crc kubenswrapper[4821]: E0309 18:25:03.739298 4821 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:25:03Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 09 18:25:03 crc kubenswrapper[4821]: I0309 18:25:03.758002 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 09 18:25:03 crc kubenswrapper[4821]: I0309 18:25:03.758598 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 09 18:25:03 crc kubenswrapper[4821]: I0309 18:25:03.760842 4821 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="275cc49368dee0e778a85ae0827282b922601defe95bfa266bd9fc3e611a881b" exitCode=255 Mar 09 18:25:03 crc kubenswrapper[4821]: I0309 18:25:03.760883 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"275cc49368dee0e778a85ae0827282b922601defe95bfa266bd9fc3e611a881b"} Mar 
09 18:25:03 crc kubenswrapper[4821]: I0309 18:25:03.760933 4821 scope.go:117] "RemoveContainer" containerID="98792320bbc9da0e5b7ecc13b9fe653ef2af8d731658821f3ed2421d7f4a6cbe" Mar 09 18:25:03 crc kubenswrapper[4821]: I0309 18:25:03.761091 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 18:25:03 crc kubenswrapper[4821]: I0309 18:25:03.761931 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:03 crc kubenswrapper[4821]: I0309 18:25:03.761958 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:03 crc kubenswrapper[4821]: I0309 18:25:03.761966 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:03 crc kubenswrapper[4821]: I0309 18:25:03.762438 4821 scope.go:117] "RemoveContainer" containerID="275cc49368dee0e778a85ae0827282b922601defe95bfa266bd9fc3e611a881b" Mar 09 18:25:03 crc kubenswrapper[4821]: E0309 18:25:03.762595 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 09 18:25:04 crc kubenswrapper[4821]: I0309 18:25:04.484890 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:25:04Z is after 2026-02-23T05:33:13Z Mar 09 18:25:04 crc kubenswrapper[4821]: W0309 18:25:04.721945 4821 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:25:04Z is after 2026-02-23T05:33:13Z Mar 09 18:25:04 crc kubenswrapper[4821]: E0309 18:25:04.722050 4821 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:25:04Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 09 18:25:04 crc kubenswrapper[4821]: I0309 18:25:04.766748 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 09 18:25:05 crc kubenswrapper[4821]: I0309 18:25:05.482016 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:25:05Z is after 2026-02-23T05:33:13Z Mar 09 18:25:05 crc kubenswrapper[4821]: W0309 18:25:05.502352 4821 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:25:05Z is after 2026-02-23T05:33:13Z Mar 09 18:25:05 crc kubenswrapper[4821]: E0309 18:25:05.502452 4821 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:25:05Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 09 18:25:05 crc kubenswrapper[4821]: I0309 18:25:05.559614 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 18:25:05 crc kubenswrapper[4821]: E0309 18:25:05.559633 4821 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:25:05Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 09 18:25:05 crc kubenswrapper[4821]: I0309 18:25:05.560909 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:05 crc kubenswrapper[4821]: I0309 18:25:05.560942 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:05 crc kubenswrapper[4821]: I0309 18:25:05.560956 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:05 crc kubenswrapper[4821]: I0309 18:25:05.560982 4821 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 09 18:25:05 crc kubenswrapper[4821]: E0309 18:25:05.565686 4821 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:25:05Z is after 2026-02-23T05:33:13Z" node="crc" Mar 09 
18:25:06 crc kubenswrapper[4821]: I0309 18:25:06.484548 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:06 crc kubenswrapper[4821]: W0309 18:25:06.544417 4821 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 09 18:25:06 crc kubenswrapper[4821]: E0309 18:25:06.544485 4821 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 09 18:25:06 crc kubenswrapper[4821]: I0309 18:25:06.678537 4821 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 09 18:25:06 crc kubenswrapper[4821]: I0309 18:25:06.678629 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 09 18:25:07 crc kubenswrapper[4821]: I0309 18:25:07.096445 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 
09 18:25:07 crc kubenswrapper[4821]: I0309 18:25:07.096600 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 18:25:07 crc kubenswrapper[4821]: I0309 18:25:07.097653 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:07 crc kubenswrapper[4821]: I0309 18:25:07.097706 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:07 crc kubenswrapper[4821]: I0309 18:25:07.097718 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:07 crc kubenswrapper[4821]: I0309 18:25:07.483914 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:07 crc kubenswrapper[4821]: I0309 18:25:07.838445 4821 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 09 18:25:07 crc kubenswrapper[4821]: I0309 18:25:07.838804 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 18:25:07 crc kubenswrapper[4821]: I0309 18:25:07.840297 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:07 crc kubenswrapper[4821]: I0309 18:25:07.840347 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:07 crc kubenswrapper[4821]: I0309 18:25:07.840358 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:07 crc kubenswrapper[4821]: I0309 18:25:07.840880 4821 scope.go:117] "RemoveContainer" 
containerID="275cc49368dee0e778a85ae0827282b922601defe95bfa266bd9fc3e611a881b" Mar 09 18:25:07 crc kubenswrapper[4821]: E0309 18:25:07.841051 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.148547 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b3f782081b981 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:23.474420097 +0000 UTC m=+0.635796023,LastTimestamp:2026-03-09 18:24:23.474420097 +0000 UTC m=+0.635796023,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.152942 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b3f78245a5726 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: 
NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:23.538947878 +0000 UTC m=+0.700323734,LastTimestamp:2026-03-09 18:24:23.538947878 +0000 UTC m=+0.700323734,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.157284 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b3f78245ac044 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:23.538974788 +0000 UTC m=+0.700350644,LastTimestamp:2026-03-09 18:24:23.538974788 +0000 UTC m=+0.700350644,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.160670 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b3f78245aee3f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:23.538986559 +0000 UTC m=+0.700362415,LastTimestamp:2026-03-09 18:24:23.538986559 +0000 UTC 
m=+0.700362415,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.164152 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b3f7828e67b93 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:23.615241107 +0000 UTC m=+0.776616983,LastTimestamp:2026-03-09 18:24:23.615241107 +0000 UTC m=+0.776616983,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.168585 4821 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b3f78245a5726\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b3f78245a5726 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:23.538947878 +0000 UTC m=+0.700323734,LastTimestamp:2026-03-09 18:24:23.652240161 +0000 UTC m=+0.813616027,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc 
kubenswrapper[4821]: E0309 18:25:08.173043 4821 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b3f78245ac044\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b3f78245ac044 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:23.538974788 +0000 UTC m=+0.700350644,LastTimestamp:2026-03-09 18:24:23.652264821 +0000 UTC m=+0.813640687,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.177205 4821 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b3f78245aee3f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b3f78245aee3f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:23.538986559 +0000 UTC m=+0.700362415,LastTimestamp:2026-03-09 18:24:23.652278582 +0000 UTC m=+0.813654448,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.181035 4821 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b3f78245a5726\" is forbidden: User 
\"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b3f78245a5726 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:23.538947878 +0000 UTC m=+0.700323734,LastTimestamp:2026-03-09 18:24:23.653363182 +0000 UTC m=+0.814739048,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.187535 4821 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b3f78245ac044\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b3f78245ac044 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:23.538974788 +0000 UTC m=+0.700350644,LastTimestamp:2026-03-09 18:24:23.653386432 +0000 UTC m=+0.814762298,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.192693 4821 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b3f78245aee3f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b3f78245aee3f default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:23.538986559 +0000 UTC m=+0.700362415,LastTimestamp:2026-03-09 18:24:23.653398082 +0000 UTC m=+0.814773948,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.197891 4821 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b3f78245a5726\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b3f78245a5726 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:23.538947878 +0000 UTC m=+0.700323734,LastTimestamp:2026-03-09 18:24:23.654870098 +0000 UTC m=+0.816245964,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.202941 4821 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b3f78245ac044\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b3f78245ac044 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:23.538974788 +0000 UTC m=+0.700350644,LastTimestamp:2026-03-09 18:24:23.654887258 +0000 UTC m=+0.816263124,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.207997 4821 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b3f78245aee3f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b3f78245aee3f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:23.538986559 +0000 UTC m=+0.700362415,LastTimestamp:2026-03-09 18:24:23.654900368 +0000 UTC m=+0.816276234,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.212612 4821 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b3f78245a5726\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b3f78245a5726 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status 
is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:23.538947878 +0000 UTC m=+0.700323734,LastTimestamp:2026-03-09 18:24:23.655278785 +0000 UTC m=+0.816654651,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.217651 4821 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b3f78245ac044\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b3f78245ac044 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:23.538974788 +0000 UTC m=+0.700350644,LastTimestamp:2026-03-09 18:24:23.655295225 +0000 UTC m=+0.816671101,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.222964 4821 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b3f78245aee3f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b3f78245aee3f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:23.538986559 +0000 UTC 
m=+0.700362415,LastTimestamp:2026-03-09 18:24:23.655312005 +0000 UTC m=+0.816687871,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.228973 4821 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b3f78245a5726\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b3f78245a5726 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:23.538947878 +0000 UTC m=+0.700323734,LastTimestamp:2026-03-09 18:24:23.655644602 +0000 UTC m=+0.817020488,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.233951 4821 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b3f78245ac044\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b3f78245ac044 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:23.538974788 +0000 UTC m=+0.700350644,LastTimestamp:2026-03-09 18:24:23.655670663 +0000 UTC m=+0.817046559,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.238597 4821 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b3f78245aee3f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b3f78245aee3f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:23.538986559 +0000 UTC m=+0.700362415,LastTimestamp:2026-03-09 18:24:23.655687623 +0000 UTC m=+0.817063519,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.244987 4821 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b3f78245a5726\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b3f78245a5726 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:23.538947878 +0000 UTC m=+0.700323734,LastTimestamp:2026-03-09 18:24:23.656471736 +0000 UTC m=+0.817847632,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.251385 4821 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b3f78245ac044\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b3f78245ac044 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:23.538974788 +0000 UTC m=+0.700350644,LastTimestamp:2026-03-09 18:24:23.656493297 +0000 UTC m=+0.817869193,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.256570 4821 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b3f78245aee3f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b3f78245aee3f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:23.538986559 +0000 UTC m=+0.700362415,LastTimestamp:2026-03-09 18:24:23.656511267 +0000 UTC m=+0.817887163,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.261185 4821 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b3f78245a5726\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in 
API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b3f78245a5726 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:23.538947878 +0000 UTC m=+0.700323734,LastTimestamp:2026-03-09 18:24:23.656968795 +0000 UTC m=+0.818344661,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.268122 4821 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189b3f78245ac044\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189b3f78245ac044 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:23.538974788 +0000 UTC m=+0.700350644,LastTimestamp:2026-03-09 18:24:23.656992465 +0000 UTC m=+0.818368341,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.276530 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b3f78432e87c3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:24.056170435 +0000 UTC m=+1.217546301,LastTimestamp:2026-03-09 18:24:24.056170435 +0000 UTC m=+1.217546301,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.284616 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b3f7843578da1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:24.058858913 +0000 UTC m=+1.220234789,LastTimestamp:2026-03-09 18:24:24.058858913 +0000 UTC m=+1.220234789,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 
18:25:08.291223 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189b3f78435c9ec2 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:24.059190978 +0000 UTC m=+1.220566874,LastTimestamp:2026-03-09 18:24:24.059190978 +0000 UTC m=+1.220566874,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.296727 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189b3f784430b992 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:24.073091474 +0000 UTC m=+1.234467340,LastTimestamp:2026-03-09 18:24:24.073091474 +0000 UTC m=+1.234467340,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.305187 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b3f78444863af openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:24.074642351 +0000 UTC m=+1.236018207,LastTimestamp:2026-03-09 18:24:24.074642351 +0000 UTC m=+1.236018207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.312944 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b3f7867499555 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:24.661923157 +0000 UTC m=+1.823299023,LastTimestamp:2026-03-09 18:24:24.661923157 +0000 UTC m=+1.823299023,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.319454 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b3f78677ad374 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:24.665150324 +0000 UTC m=+1.826526180,LastTimestamp:2026-03-09 18:24:24.665150324 +0000 UTC m=+1.826526180,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.324755 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189b3f786963b936 openshift-machine-config-operator 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:24.69719071 +0000 UTC m=+1.858566576,LastTimestamp:2026-03-09 18:24:24.69719071 +0000 UTC m=+1.858566576,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.330140 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b3f7869c80ae1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:24.703765217 +0000 UTC m=+1.865141073,LastTimestamp:2026-03-09 18:24:24.703765217 +0000 UTC m=+1.865141073,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.335420 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.189b3f7869cc7a2c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:24.704055852 +0000 UTC m=+1.865431708,LastTimestamp:2026-03-09 18:24:24.704055852 +0000 UTC m=+1.865431708,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.340750 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b3f7869cd361b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:24.704103963 +0000 UTC m=+1.865479859,LastTimestamp:2026-03-09 18:24:24.704103963 +0000 UTC m=+1.865479859,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.347216 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189b3f7869cdb100 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:24.704135424 +0000 UTC m=+1.865511280,LastTimestamp:2026-03-09 18:24:24.704135424 +0000 UTC m=+1.865511280,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.354437 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b3f7869e152e8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:24.705422056 +0000 UTC m=+1.866797912,LastTimestamp:2026-03-09 18:24:24.705422056 +0000 UTC m=+1.866797912,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc 
kubenswrapper[4821]: E0309 18:25:08.360487 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189b3f786a52d3de openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:24.712860638 +0000 UTC m=+1.874236494,LastTimestamp:2026-03-09 18:24:24.712860638 +0000 UTC m=+1.874236494,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.365344 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189b3f786a64fef7 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:24.714051319 +0000 UTC m=+1.875427175,LastTimestamp:2026-03-09 18:24:24.714051319 +0000 UTC m=+1.875427175,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.369892 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b3f786b1cea2e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:24.726104622 +0000 UTC m=+1.887480468,LastTimestamp:2026-03-09 18:24:24.726104622 +0000 UTC m=+1.887480468,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.374836 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b3f787f17a85a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:25.06130441 +0000 UTC m=+2.222680276,LastTimestamp:2026-03-09 18:24:25.06130441 
+0000 UTC m=+2.222680276,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.381542 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b3f78800c5a29 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:25.077340713 +0000 UTC m=+2.238716579,LastTimestamp:2026-03-09 18:24:25.077340713 +0000 UTC m=+2.238716579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.389067 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b3f788022103b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:25.078763579 +0000 UTC m=+2.240139445,LastTimestamp:2026-03-09 18:24:25.078763579 +0000 UTC m=+2.240139445,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.394157 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b3f788be75c6b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:25.276243051 +0000 UTC m=+2.437618907,LastTimestamp:2026-03-09 18:24:25.276243051 +0000 UTC m=+2.437618907,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.398580 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b3f788cc1e3fc openshift-kube-controller-manager 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:25.290564604 +0000 UTC m=+2.451940460,LastTimestamp:2026-03-09 18:24:25.290564604 +0000 UTC m=+2.451940460,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.405689 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b3f788cd4dda7 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:25.291808167 +0000 UTC m=+2.453184013,LastTimestamp:2026-03-09 18:24:25.291808167 +0000 UTC m=+2.453184013,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 
18:25:08.411916 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b3f78981bc617 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:25.481004567 +0000 UTC m=+2.642380433,LastTimestamp:2026-03-09 18:24:25.481004567 +0000 UTC m=+2.642380433,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.418171 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b3f7898cf6869 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:25.492777065 +0000 UTC m=+2.654152931,LastTimestamp:2026-03-09 
18:24:25.492777065 +0000 UTC m=+2.654152931,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.424836 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b3f789d52dd06 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:25.568500998 +0000 UTC m=+2.729876854,LastTimestamp:2026-03-09 18:24:25.568500998 +0000 UTC m=+2.729876854,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.430637 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189b3f789d6218f6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container 
image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:25.569499382 +0000 UTC m=+2.730875248,LastTimestamp:2026-03-09 18:24:25.569499382 +0000 UTC m=+2.730875248,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.434198 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189b3f789d92d1bf openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:25.572692415 +0000 UTC m=+2.734068271,LastTimestamp:2026-03-09 18:24:25.572692415 +0000 UTC m=+2.734068271,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.438779 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b3f789dde61b8 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:25.577644472 +0000 UTC m=+2.739020328,LastTimestamp:2026-03-09 18:24:25.577644472 +0000 UTC m=+2.739020328,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.440880 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189b3f78a962dde4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:25.770876388 +0000 UTC m=+2.932252244,LastTimestamp:2026-03-09 18:24:25.770876388 +0000 UTC m=+2.932252244,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.447015 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b3f78aa5734d8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:25.786889432 +0000 UTC m=+2.948265278,LastTimestamp:2026-03-09 18:24:25.786889432 +0000 UTC m=+2.948265278,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.453005 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b3f78aa70c7cf openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:25.788565455 +0000 UTC m=+2.949941311,LastTimestamp:2026-03-09 18:24:25.788565455 +0000 UTC m=+2.949941311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.459438 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189b3f78aa70ec41 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:25.788574785 +0000 UTC m=+2.949950631,LastTimestamp:2026-03-09 18:24:25.788574785 +0000 UTC m=+2.949950631,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.466142 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189b3f78aa7dd6dc openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:25.789421276 +0000 UTC m=+2.950797132,LastTimestamp:2026-03-09 18:24:25.789421276 +0000 UTC m=+2.950797132,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 
18:25:08.472680 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189b3f78aa8037e4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:25.789577188 +0000 UTC m=+2.950953044,LastTimestamp:2026-03-09 18:24:25.789577188 +0000 UTC m=+2.950953044,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.479713 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b3f78ab71ab3a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:25.80540089 +0000 UTC m=+2.966776746,LastTimestamp:2026-03-09 18:24:25.80540089 +0000 UTC 
m=+2.966776746,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: I0309 18:25:08.485590 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.485712 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b3f78ab855fe7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:25.806692327 +0000 UTC m=+2.968068183,LastTimestamp:2026-03-09 18:24:25.806692327 +0000 UTC m=+2.968068183,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.489023 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189b3f78abbf60d1 
openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:25.810493649 +0000 UTC m=+2.971869505,LastTimestamp:2026-03-09 18:24:25.810493649 +0000 UTC m=+2.971869505,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.492414 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189b3f78bd3991d7 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:26.103714263 +0000 UTC m=+3.265090139,LastTimestamp:2026-03-09 18:24:26.103714263 +0000 UTC m=+3.265090139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.496397 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" 
in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b3f78bd590ba5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:26.105777061 +0000 UTC m=+3.267152957,LastTimestamp:2026-03-09 18:24:26.105777061 +0000 UTC m=+3.267152957,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.497902 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189b3f78be5c2bc5 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:26.122759109 +0000 UTC m=+3.284134975,LastTimestamp:2026-03-09 18:24:26.122759109 +0000 UTC m=+3.284134975,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.503228 4821 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189b3f78be77dd83 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:26.124574083 +0000 UTC m=+3.285949949,LastTimestamp:2026-03-09 18:24:26.124574083 +0000 UTC m=+3.285949949,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.511202 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b3f78beb56a0d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:26.128607757 +0000 UTC m=+3.289983613,LastTimestamp:2026-03-09 18:24:26.128607757 +0000 UTC m=+3.289983613,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.520153 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b3f78bec6c759 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:26.129745753 +0000 UTC m=+3.291121609,LastTimestamp:2026-03-09 18:24:26.129745753 +0000 UTC m=+3.291121609,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.526077 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189b3f78ca8a170e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container 
kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:26.327095054 +0000 UTC m=+3.488470930,LastTimestamp:2026-03-09 18:24:26.327095054 +0000 UTC m=+3.488470930,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.534592 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b3f78caa5ca35 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:26.328910389 +0000 UTC m=+3.490286255,LastTimestamp:2026-03-09 18:24:26.328910389 +0000 UTC m=+3.490286255,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.540373 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b3f78cab0585a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:26.329602138 +0000 UTC m=+3.490978014,LastTimestamp:2026-03-09 18:24:26.329602138 +0000 UTC m=+3.490978014,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.545290 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189b3f78cbfd5c5a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:26.35142665 +0000 UTC m=+3.512802506,LastTimestamp:2026-03-09 18:24:26.35142665 +0000 UTC m=+3.512802506,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.553690 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b3f78cc1dbf85 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:26.353549189 +0000 UTC m=+3.514925045,LastTimestamp:2026-03-09 18:24:26.353549189 +0000 UTC m=+3.514925045,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.559582 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b3f78cc2e599d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:26.354637213 +0000 UTC m=+3.516013069,LastTimestamp:2026-03-09 18:24:26.354637213 +0000 UTC m=+3.516013069,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.567008 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b3f78d6ef4aae openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:26.535053998 +0000 UTC m=+3.696429864,LastTimestamp:2026-03-09 18:24:26.535053998 +0000 UTC m=+3.696429864,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.573926 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b3f78d7c45ce1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:26.549017825 +0000 UTC m=+3.710393721,LastTimestamp:2026-03-09 
18:24:26.549017825 +0000 UTC m=+3.710393721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.580776 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b3f78d7da7ba7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:26.550467495 +0000 UTC m=+3.711843391,LastTimestamp:2026-03-09 18:24:26.550467495 +0000 UTC m=+3.711843391,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.586891 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b3f78da97990f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:26.596415759 +0000 UTC m=+3.757791615,LastTimestamp:2026-03-09 18:24:26.596415759 +0000 UTC m=+3.757791615,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.593135 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b3f78e37ea082 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:26.74577421 +0000 UTC m=+3.907150066,LastTimestamp:2026-03-09 18:24:26.74577421 +0000 UTC m=+3.907150066,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: I0309 18:25:08.598900 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.599496 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.189b3f78e499eacd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:26.764339917 +0000 UTC m=+3.925715783,LastTimestamp:2026-03-09 18:24:26.764339917 +0000 UTC m=+3.925715783,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.604449 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b3f78e5cbf26e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:26.784395886 +0000 UTC m=+3.945771742,LastTimestamp:2026-03-09 18:24:26.784395886 +0000 UTC m=+3.945771742,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.609307 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b3f78e6a64cb8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:26.798705848 +0000 UTC m=+3.960081704,LastTimestamp:2026-03-09 18:24:26.798705848 +0000 UTC m=+3.960081704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.614389 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b3f7916a6b0e3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:27.604037859 +0000 UTC m=+4.765413755,LastTimestamp:2026-03-09 18:24:27.604037859 +0000 UTC m=+4.765413755,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.619994 4821 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189b3f78d7da7ba7\" 
is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b3f78d7da7ba7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:26.550467495 +0000 UTC m=+3.711843391,LastTimestamp:2026-03-09 18:24:27.607918831 +0000 UTC m=+4.769294727,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.624209 4821 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189b3f78e37ea082\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b3f78e37ea082 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:26.74577421 +0000 UTC m=+3.907150066,LastTimestamp:2026-03-09 18:24:27.814482526 +0000 UTC 
m=+4.975858402,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.628662 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b3f79234a9354 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:27.816104788 +0000 UTC m=+4.977480654,LastTimestamp:2026-03-09 18:24:27.816104788 +0000 UTC m=+4.977480654,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.633138 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b3f7923e44fad openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:27.826180013 +0000 UTC m=+4.987555879,LastTimestamp:2026-03-09 18:24:27.826180013 +0000 UTC m=+4.987555879,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.638042 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b3f7923fd8583 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:27.827832195 +0000 UTC m=+4.989208061,LastTimestamp:2026-03-09 18:24:27.827832195 +0000 UTC m=+4.989208061,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.639748 4821 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189b3f78e499eacd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b3f78e499eacd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container 
kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:26.764339917 +0000 UTC m=+3.925715783,LastTimestamp:2026-03-09 18:24:27.830632763 +0000 UTC m=+4.992008629,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.645935 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b3f792fda3a35 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:28.026845749 +0000 UTC m=+5.188221615,LastTimestamp:2026-03-09 18:24:28.026845749 +0000 UTC m=+5.188221615,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.650957 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b3f7931082598 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 
18:24:28.046632344 +0000 UTC m=+5.208008240,LastTimestamp:2026-03-09 18:24:28.046632344 +0000 UTC m=+5.208008240,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.656049 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b3f79311d1b73 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:28.048006003 +0000 UTC m=+5.209381899,LastTimestamp:2026-03-09 18:24:28.048006003 +0000 UTC m=+5.209381899,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.663199 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b3f793f70462a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container 
etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:28.28833745 +0000 UTC m=+5.449713336,LastTimestamp:2026-03-09 18:24:28.28833745 +0000 UTC m=+5.449713336,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.668487 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b3f79400c6cb9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:28.298570937 +0000 UTC m=+5.459946793,LastTimestamp:2026-03-09 18:24:28.298570937 +0000 UTC m=+5.459946793,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.672676 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b3f79401b50ec openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:28.29954686 +0000 UTC m=+5.460922716,LastTimestamp:2026-03-09 18:24:28.29954686 +0000 UTC m=+5.460922716,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.681661 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b3f794e5447ee openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:28.538161134 +0000 UTC m=+5.699537000,LastTimestamp:2026-03-09 18:24:28.538161134 +0000 UTC m=+5.699537000,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.686853 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b3f794f031b7e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:28.549618558 +0000 UTC m=+5.710994454,LastTimestamp:2026-03-09 18:24:28.549618558 +0000 UTC m=+5.710994454,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.692852 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b3f794f172f5d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:28.550934365 +0000 UTC m=+5.712310241,LastTimestamp:2026-03-09 18:24:28.550934365 +0000 UTC m=+5.712310241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.697016 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b3f795a5fb876 openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:28.74023743 +0000 UTC m=+5.901613286,LastTimestamp:2026-03-09 18:24:28.74023743 +0000 UTC m=+5.901613286,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.701777 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189b3f795b415f2c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:28.755025708 +0000 UTC m=+5.916401574,LastTimestamp:2026-03-09 18:24:28.755025708 +0000 UTC m=+5.916401574,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.711937 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 09 18:25:08 crc kubenswrapper[4821]: &Event{ObjectMeta:{kube-controller-manager-crc.189b3f7b338b73f2 openshift-kube-controller-manager 
0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Mar 09 18:25:08 crc kubenswrapper[4821]: body: Mar 09 18:25:08 crc kubenswrapper[4821]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:36.678726642 +0000 UTC m=+13.840102508,LastTimestamp:2026-03-09 18:24:36.678726642 +0000 UTC m=+13.840102508,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 09 18:25:08 crc kubenswrapper[4821]: > Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.716129 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b3f7b338c7cae openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:36.678794414 +0000 UTC m=+13.840170290,LastTimestamp:2026-03-09 18:24:36.678794414 +0000 UTC 
m=+13.840170290,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.720936 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 09 18:25:08 crc kubenswrapper[4821]: &Event{ObjectMeta:{kube-apiserver-crc.189b3f7b78b38dc6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Mar 09 18:25:08 crc kubenswrapper[4821]: body: Mar 09 18:25:08 crc kubenswrapper[4821]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:37.838982598 +0000 UTC m=+15.000358484,LastTimestamp:2026-03-09 18:24:37.838982598 +0000 UTC m=+15.000358484,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 09 18:25:08 crc kubenswrapper[4821]: > Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.727030 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b3f7b78b460b8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:37.8390366 +0000 UTC m=+15.000412486,LastTimestamp:2026-03-09 18:24:37.8390366 +0000 UTC m=+15.000412486,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.731654 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 09 18:25:08 crc kubenswrapper[4821]: &Event{ObjectMeta:{kube-apiserver-crc.189b3f7b8aac6fe9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 09 18:25:08 crc kubenswrapper[4821]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 09 18:25:08 crc kubenswrapper[4821]: Mar 09 18:25:08 crc kubenswrapper[4821]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:38.140506089 +0000 UTC m=+15.301881985,LastTimestamp:2026-03-09 18:24:38.140506089 +0000 UTC 
m=+15.301881985,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 09 18:25:08 crc kubenswrapper[4821]: > Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.736136 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189b3f7b8aad18aa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:38.14054929 +0000 UTC m=+15.301925186,LastTimestamp:2026-03-09 18:24:38.14054929 +0000 UTC m=+15.301925186,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.744481 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 09 18:25:08 crc kubenswrapper[4821]: &Event{ObjectMeta:{kube-controller-manager-crc.189b3f7d8797a0f8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 09 18:25:08 crc kubenswrapper[4821]: body: Mar 09 18:25:08 crc kubenswrapper[4821]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:46.678745336 +0000 UTC m=+23.840121222,LastTimestamp:2026-03-09 18:24:46.678745336 +0000 UTC m=+23.840121222,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 09 18:25:08 crc kubenswrapper[4821]: > Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.749411 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b3f7d8798bcec openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:46.678818028 +0000 UTC m=+23.840193924,LastTimestamp:2026-03-09 18:24:46.678818028 +0000 UTC 
m=+23.840193924,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.753760 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 09 18:25:08 crc kubenswrapper[4821]: &Event{ObjectMeta:{kube-controller-manager-crc.189b3f7fc08c9fe3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": read tcp 192.168.126.11:33520->192.168.126.11:10357: read: connection reset by peer Mar 09 18:25:08 crc kubenswrapper[4821]: body: Mar 09 18:25:08 crc kubenswrapper[4821]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:56.224260067 +0000 UTC m=+33.385635963,LastTimestamp:2026-03-09 18:24:56.224260067 +0000 UTC m=+33.385635963,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 09 18:25:08 crc kubenswrapper[4821]: > Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.758762 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b3f7fc08def03 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:33520->192.168.126.11:10357: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:56.224345859 +0000 UTC m=+33.385721755,LastTimestamp:2026-03-09 18:24:56.224345859 +0000 UTC m=+33.385721755,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.763018 4821 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b3f7fc0bc0b32 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:56.22736773 +0000 UTC m=+33.388743626,LastTimestamp:2026-03-09 18:24:56.22736773 +0000 UTC m=+33.388743626,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.767018 4821 event.go:359] "Server rejected event (will not retry!)" 
err="events \"kube-controller-manager-crc.189b3f7869e152e8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b3f7869e152e8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:24.705422056 +0000 UTC m=+1.866797912,LastTimestamp:2026-03-09 18:24:56.747770275 +0000 UTC m=+33.909146131,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.772136 4821 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189b3f787f17a85a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b3f787f17a85a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 
18:24:25.06130441 +0000 UTC m=+2.222680276,LastTimestamp:2026-03-09 18:24:56.930275323 +0000 UTC m=+34.091651199,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.776954 4821 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189b3f78800c5a29\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b3f78800c5a29 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:25.077340713 +0000 UTC m=+2.238716579,LastTimestamp:2026-03-09 18:24:56.971387266 +0000 UTC m=+34.132763172,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:08 crc kubenswrapper[4821]: I0309 18:25:08.778488 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 18:25:08 crc kubenswrapper[4821]: I0309 18:25:08.779237 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:08 crc kubenswrapper[4821]: I0309 18:25:08.779260 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:08 crc kubenswrapper[4821]: I0309 18:25:08.779271 4821 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:08 crc kubenswrapper[4821]: I0309 18:25:08.779742 4821 scope.go:117] "RemoveContainer" containerID="275cc49368dee0e778a85ae0827282b922601defe95bfa266bd9fc3e611a881b" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.779936 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.783749 4821 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189b3f7d8797a0f8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 09 18:25:08 crc kubenswrapper[4821]: &Event{ObjectMeta:{kube-controller-manager-crc.189b3f7d8797a0f8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 09 18:25:08 crc kubenswrapper[4821]: body: Mar 09 18:25:08 crc kubenswrapper[4821]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:46.678745336 +0000 UTC m=+23.840121222,LastTimestamp:2026-03-09 18:25:06.678594983 +0000 UTC 
m=+43.839970839,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 09 18:25:08 crc kubenswrapper[4821]: > Mar 09 18:25:08 crc kubenswrapper[4821]: E0309 18:25:08.787795 4821 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189b3f7d8798bcec\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189b3f7d8798bcec openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:24:46.678818028 +0000 UTC m=+23.840193924,LastTimestamp:2026-03-09 18:25:06.678656895 +0000 UTC m=+43.840032751,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:25:09 crc kubenswrapper[4821]: I0309 18:25:09.486949 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:10 crc kubenswrapper[4821]: I0309 18:25:10.487184 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot 
get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:10 crc kubenswrapper[4821]: I0309 18:25:10.985904 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 09 18:25:10 crc kubenswrapper[4821]: I0309 18:25:10.986123 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 18:25:10 crc kubenswrapper[4821]: I0309 18:25:10.987666 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:10 crc kubenswrapper[4821]: I0309 18:25:10.987783 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:10 crc kubenswrapper[4821]: I0309 18:25:10.987809 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:11 crc kubenswrapper[4821]: I0309 18:25:11.486962 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:12 crc kubenswrapper[4821]: I0309 18:25:12.483059 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:12 crc kubenswrapper[4821]: E0309 18:25:12.564921 4821 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 09 18:25:12 crc kubenswrapper[4821]: I0309 18:25:12.565912 4821 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Mar 09 18:25:12 crc kubenswrapper[4821]: I0309 18:25:12.567161 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:12 crc kubenswrapper[4821]: I0309 18:25:12.567281 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:12 crc kubenswrapper[4821]: I0309 18:25:12.567373 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:12 crc kubenswrapper[4821]: I0309 18:25:12.567466 4821 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 09 18:25:12 crc kubenswrapper[4821]: E0309 18:25:12.572120 4821 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 09 18:25:13 crc kubenswrapper[4821]: I0309 18:25:13.484697 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:13 crc kubenswrapper[4821]: E0309 18:25:13.623430 4821 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 09 18:25:14 crc kubenswrapper[4821]: I0309 18:25:14.483822 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:15 crc kubenswrapper[4821]: I0309 18:25:15.484013 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User 
"system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:16 crc kubenswrapper[4821]: I0309 18:25:16.367832 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 09 18:25:16 crc kubenswrapper[4821]: I0309 18:25:16.368073 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 18:25:16 crc kubenswrapper[4821]: I0309 18:25:16.369763 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:16 crc kubenswrapper[4821]: I0309 18:25:16.369837 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:16 crc kubenswrapper[4821]: I0309 18:25:16.369862 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:16 crc kubenswrapper[4821]: I0309 18:25:16.374220 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 09 18:25:16 crc kubenswrapper[4821]: I0309 18:25:16.484794 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:16 crc kubenswrapper[4821]: I0309 18:25:16.796481 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 18:25:16 crc kubenswrapper[4821]: I0309 18:25:16.797796 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:16 crc kubenswrapper[4821]: I0309 18:25:16.797834 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 09 18:25:16 crc kubenswrapper[4821]: I0309 18:25:16.797848 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:17 crc kubenswrapper[4821]: I0309 18:25:17.487460 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:18 crc kubenswrapper[4821]: I0309 18:25:18.483613 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:19 crc kubenswrapper[4821]: I0309 18:25:19.485724 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:19 crc kubenswrapper[4821]: E0309 18:25:19.571043 4821 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 09 18:25:19 crc kubenswrapper[4821]: I0309 18:25:19.573130 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 18:25:19 crc kubenswrapper[4821]: I0309 18:25:19.574587 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:19 crc kubenswrapper[4821]: I0309 18:25:19.574622 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:19 crc kubenswrapper[4821]: I0309 
18:25:19.574633 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:19 crc kubenswrapper[4821]: I0309 18:25:19.574653 4821 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 09 18:25:19 crc kubenswrapper[4821]: E0309 18:25:19.579821 4821 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 09 18:25:20 crc kubenswrapper[4821]: I0309 18:25:20.483154 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:21 crc kubenswrapper[4821]: I0309 18:25:21.486148 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:22 crc kubenswrapper[4821]: I0309 18:25:22.486120 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:23 crc kubenswrapper[4821]: I0309 18:25:23.485110 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:23 crc kubenswrapper[4821]: I0309 18:25:23.551527 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 18:25:23 crc kubenswrapper[4821]: I0309 18:25:23.552842 4821 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:23 crc kubenswrapper[4821]: I0309 18:25:23.552986 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:23 crc kubenswrapper[4821]: I0309 18:25:23.553072 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:23 crc kubenswrapper[4821]: I0309 18:25:23.553785 4821 scope.go:117] "RemoveContainer" containerID="275cc49368dee0e778a85ae0827282b922601defe95bfa266bd9fc3e611a881b" Mar 09 18:25:23 crc kubenswrapper[4821]: E0309 18:25:23.623690 4821 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 09 18:25:23 crc kubenswrapper[4821]: I0309 18:25:23.820645 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 09 18:25:23 crc kubenswrapper[4821]: I0309 18:25:23.823545 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5"} Mar 09 18:25:23 crc kubenswrapper[4821]: I0309 18:25:23.823833 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 18:25:23 crc kubenswrapper[4821]: I0309 18:25:23.825416 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:23 crc kubenswrapper[4821]: I0309 18:25:23.825566 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:23 crc kubenswrapper[4821]: I0309 18:25:23.825598 4821 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:24 crc kubenswrapper[4821]: I0309 18:25:24.482858 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:24 crc kubenswrapper[4821]: I0309 18:25:24.826525 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 09 18:25:24 crc kubenswrapper[4821]: I0309 18:25:24.826875 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 09 18:25:24 crc kubenswrapper[4821]: I0309 18:25:24.828366 4821 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5" exitCode=255 Mar 09 18:25:24 crc kubenswrapper[4821]: I0309 18:25:24.828415 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5"} Mar 09 18:25:24 crc kubenswrapper[4821]: I0309 18:25:24.828444 4821 scope.go:117] "RemoveContainer" containerID="275cc49368dee0e778a85ae0827282b922601defe95bfa266bd9fc3e611a881b" Mar 09 18:25:24 crc kubenswrapper[4821]: I0309 18:25:24.828573 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 18:25:24 crc kubenswrapper[4821]: I0309 18:25:24.829449 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:24 crc kubenswrapper[4821]: I0309 18:25:24.829480 
4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:24 crc kubenswrapper[4821]: I0309 18:25:24.829491 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:24 crc kubenswrapper[4821]: I0309 18:25:24.829970 4821 scope.go:117] "RemoveContainer" containerID="119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5" Mar 09 18:25:24 crc kubenswrapper[4821]: E0309 18:25:24.830137 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 09 18:25:25 crc kubenswrapper[4821]: I0309 18:25:25.486956 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:25 crc kubenswrapper[4821]: I0309 18:25:25.831706 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 09 18:25:26 crc kubenswrapper[4821]: I0309 18:25:26.484371 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:26 crc kubenswrapper[4821]: E0309 18:25:26.578133 4821 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get 
resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 09 18:25:26 crc kubenswrapper[4821]: I0309 18:25:26.580494 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 18:25:26 crc kubenswrapper[4821]: I0309 18:25:26.581934 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:26 crc kubenswrapper[4821]: I0309 18:25:26.581970 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:26 crc kubenswrapper[4821]: I0309 18:25:26.581980 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:26 crc kubenswrapper[4821]: I0309 18:25:26.582008 4821 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 09 18:25:26 crc kubenswrapper[4821]: E0309 18:25:26.588592 4821 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 09 18:25:27 crc kubenswrapper[4821]: I0309 18:25:27.486119 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:27 crc kubenswrapper[4821]: I0309 18:25:27.837948 4821 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 09 18:25:27 crc kubenswrapper[4821]: I0309 18:25:27.838169 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 18:25:27 crc kubenswrapper[4821]: I0309 18:25:27.839571 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 09 18:25:27 crc kubenswrapper[4821]: I0309 18:25:27.839837 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:27 crc kubenswrapper[4821]: I0309 18:25:27.840062 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:27 crc kubenswrapper[4821]: I0309 18:25:27.841420 4821 scope.go:117] "RemoveContainer" containerID="119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5" Mar 09 18:25:27 crc kubenswrapper[4821]: E0309 18:25:27.842010 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 09 18:25:28 crc kubenswrapper[4821]: I0309 18:25:28.485855 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:28 crc kubenswrapper[4821]: I0309 18:25:28.598780 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 09 18:25:28 crc kubenswrapper[4821]: I0309 18:25:28.840079 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 18:25:28 crc kubenswrapper[4821]: I0309 18:25:28.841216 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:28 crc kubenswrapper[4821]: I0309 18:25:28.841339 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 09 18:25:28 crc kubenswrapper[4821]: I0309 18:25:28.841437 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:28 crc kubenswrapper[4821]: I0309 18:25:28.842024 4821 scope.go:117] "RemoveContainer" containerID="119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5" Mar 09 18:25:28 crc kubenswrapper[4821]: E0309 18:25:28.850273 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 09 18:25:29 crc kubenswrapper[4821]: I0309 18:25:29.485982 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:30 crc kubenswrapper[4821]: I0309 18:25:30.486712 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:31 crc kubenswrapper[4821]: I0309 18:25:31.483768 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:31 crc kubenswrapper[4821]: W0309 18:25:31.640563 4821 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource 
"csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:31 crc kubenswrapper[4821]: E0309 18:25:31.640621 4821 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 09 18:25:32 crc kubenswrapper[4821]: I0309 18:25:32.484401 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:32 crc kubenswrapper[4821]: I0309 18:25:32.551230 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 18:25:32 crc kubenswrapper[4821]: I0309 18:25:32.552674 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:32 crc kubenswrapper[4821]: I0309 18:25:32.552729 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:32 crc kubenswrapper[4821]: I0309 18:25:32.552757 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:33 crc kubenswrapper[4821]: I0309 18:25:33.484130 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:33 crc kubenswrapper[4821]: E0309 18:25:33.585970 4821 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group 
\"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 09 18:25:33 crc kubenswrapper[4821]: I0309 18:25:33.589125 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 18:25:33 crc kubenswrapper[4821]: I0309 18:25:33.590429 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:33 crc kubenswrapper[4821]: I0309 18:25:33.590501 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:33 crc kubenswrapper[4821]: I0309 18:25:33.590526 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:33 crc kubenswrapper[4821]: I0309 18:25:33.590590 4821 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 09 18:25:33 crc kubenswrapper[4821]: E0309 18:25:33.597072 4821 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 09 18:25:33 crc kubenswrapper[4821]: E0309 18:25:33.624365 4821 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 09 18:25:34 crc kubenswrapper[4821]: I0309 18:25:34.482561 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:35 crc kubenswrapper[4821]: I0309 18:25:35.430275 4821 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 09 18:25:35 crc kubenswrapper[4821]: I0309 18:25:35.446083 4821 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from 
k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 09 18:25:35 crc kubenswrapper[4821]: I0309 18:25:35.485504 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:36 crc kubenswrapper[4821]: I0309 18:25:36.484057 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:37 crc kubenswrapper[4821]: W0309 18:25:37.394845 4821 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 09 18:25:37 crc kubenswrapper[4821]: E0309 18:25:37.394928 4821 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 09 18:25:37 crc kubenswrapper[4821]: I0309 18:25:37.486227 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:38 crc kubenswrapper[4821]: I0309 18:25:38.482168 4821 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 18:25:39 crc kubenswrapper[4821]: I0309 18:25:39.142603 4821 csr.go:261] certificate signing 
request csr-pj58p is approved, waiting to be issued Mar 09 18:25:39 crc kubenswrapper[4821]: I0309 18:25:39.151510 4821 csr.go:257] certificate signing request csr-pj58p is issued Mar 09 18:25:39 crc kubenswrapper[4821]: I0309 18:25:39.229871 4821 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 09 18:25:39 crc kubenswrapper[4821]: I0309 18:25:39.322011 4821 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 09 18:25:40 crc kubenswrapper[4821]: I0309 18:25:40.153190 4821 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-26 09:19:59.065808626 +0000 UTC Mar 09 18:25:40 crc kubenswrapper[4821]: I0309 18:25:40.153315 4821 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6278h54m18.912501247s for next certificate rotation Mar 09 18:25:40 crc kubenswrapper[4821]: I0309 18:25:40.597863 4821 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 18:25:40 crc kubenswrapper[4821]: I0309 18:25:40.599208 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:40 crc kubenswrapper[4821]: I0309 18:25:40.599269 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:40 crc kubenswrapper[4821]: I0309 18:25:40.599290 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:40 crc kubenswrapper[4821]: I0309 18:25:40.599445 4821 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 09 18:25:40 crc kubenswrapper[4821]: I0309 18:25:40.608814 4821 kubelet_node_status.go:115] "Node was previously registered" node="crc" Mar 09 18:25:40 crc kubenswrapper[4821]: I0309 18:25:40.609113 4821 
kubelet_node_status.go:79] "Successfully registered node" node="crc" Mar 09 18:25:40 crc kubenswrapper[4821]: E0309 18:25:40.609147 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Mar 09 18:25:40 crc kubenswrapper[4821]: I0309 18:25:40.613005 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:40 crc kubenswrapper[4821]: I0309 18:25:40.613044 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:40 crc kubenswrapper[4821]: I0309 18:25:40.613056 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:40 crc kubenswrapper[4821]: I0309 18:25:40.613072 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:40 crc kubenswrapper[4821]: I0309 18:25:40.613085 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:40Z","lastTransitionTime":"2026-03-09T18:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:40 crc kubenswrapper[4821]: E0309 18:25:40.630356 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"acfa7603-a6f5-410a-9ddc-2890d44e3c69\\\",\\\"systemUUID\\\":\\\"ea3e2df3-251a-4cdb-9064-3c52ac509aba\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:40 crc kubenswrapper[4821]: I0309 18:25:40.638423 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:40 crc kubenswrapper[4821]: I0309 18:25:40.638469 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:40 crc kubenswrapper[4821]: I0309 18:25:40.638480 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:40 crc kubenswrapper[4821]: I0309 18:25:40.638497 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:40 crc kubenswrapper[4821]: I0309 18:25:40.638508 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:40Z","lastTransitionTime":"2026-03-09T18:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:40 crc kubenswrapper[4821]: E0309 18:25:40.654571 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"acfa7603-a6f5-410a-9ddc-2890d44e3c69\\\",\\\"systemUUID\\\":\\\"ea3e2df3-251a-4cdb-9064-3c52ac509aba\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:40 crc kubenswrapper[4821]: I0309 18:25:40.662833 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:40 crc kubenswrapper[4821]: I0309 18:25:40.662883 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:40 crc kubenswrapper[4821]: I0309 18:25:40.662895 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:40 crc kubenswrapper[4821]: I0309 18:25:40.662915 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:40 crc kubenswrapper[4821]: I0309 18:25:40.662927 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:40Z","lastTransitionTime":"2026-03-09T18:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:40 crc kubenswrapper[4821]: E0309 18:25:40.674765 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"acfa7603-a6f5-410a-9ddc-2890d44e3c69\\\",\\\"systemUUID\\\":\\\"ea3e2df3-251a-4cdb-9064-3c52ac509aba\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:40 crc kubenswrapper[4821]: I0309 18:25:40.684438 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:40 crc kubenswrapper[4821]: I0309 18:25:40.684504 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:40 crc kubenswrapper[4821]: I0309 18:25:40.684523 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:40 crc kubenswrapper[4821]: I0309 18:25:40.684547 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:40 crc kubenswrapper[4821]: I0309 18:25:40.684565 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:40Z","lastTransitionTime":"2026-03-09T18:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:40 crc kubenswrapper[4821]: E0309 18:25:40.696914 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"acfa7603-a6f5-410a-9ddc-2890d44e3c69\\\",\\\"systemUUID\\\":\\\"ea3e2df3-251a-4cdb-9064-3c52ac509aba\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 09 18:25:40 crc kubenswrapper[4821]: E0309 18:25:40.697143 4821 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 09 18:25:40 crc kubenswrapper[4821]: E0309 18:25:40.697184 4821 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 09 18:25:40 crc kubenswrapper[4821]: E0309 18:25:40.797474 4821 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 09 18:25:40 crc kubenswrapper[4821]: E0309 18:25:40.898336 4821 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 09 18:25:40 crc kubenswrapper[4821]: E0309 18:25:40.999388 4821 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 09 18:25:41 crc kubenswrapper[4821]: E0309 18:25:41.099688 4821 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 09 18:25:41 crc kubenswrapper[4821]: E0309 18:25:41.200596 4821 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 09 18:25:41 crc kubenswrapper[4821]: E0309 18:25:41.301568 4821 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 09 18:25:41 crc kubenswrapper[4821]: E0309 18:25:41.402659 4821 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 09 18:25:41 crc kubenswrapper[4821]: E0309 18:25:41.503215 4821 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 09 18:25:41 crc kubenswrapper[4821]: E0309 18:25:41.604301 4821 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 09 18:25:41 crc kubenswrapper[4821]: E0309 18:25:41.704870 4821 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 09 18:25:41 crc kubenswrapper[4821]: E0309 18:25:41.805820 4821 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 09 18:25:41 crc kubenswrapper[4821]: E0309 18:25:41.906751 4821 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 09 18:25:42 crc kubenswrapper[4821]: E0309 18:25:42.007499 4821 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 09 18:25:42 crc kubenswrapper[4821]: E0309 18:25:42.107913 4821 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 09 18:25:42 crc kubenswrapper[4821]: E0309 18:25:42.208086 4821 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 09 18:25:42 crc kubenswrapper[4821]: E0309 18:25:42.308201 4821 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 09 18:25:42 crc kubenswrapper[4821]: E0309 18:25:42.409133 4821 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 09 18:25:42 crc kubenswrapper[4821]: E0309 18:25:42.510161 4821 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 09 18:25:42 crc kubenswrapper[4821]: E0309 18:25:42.610788 4821 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 09 18:25:42 crc kubenswrapper[4821]: E0309 18:25:42.711461 4821 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 09 18:25:42 crc kubenswrapper[4821]: E0309 18:25:42.812497 4821 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 09 18:25:42 crc kubenswrapper[4821]: E0309 18:25:42.913250 4821 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.014246 4821 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.115299 4821 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.215856 4821 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.316359 4821 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.405239 4821 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.419252 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.419305 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.419348 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.419370 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.419384 4821 setters.go:603] "Node became not ready" node="crc"
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:43Z","lastTransitionTime":"2026-03-09T18:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.504927 4821 apiserver.go:52] "Watching apiserver"
Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.511180 4821 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.511733 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.512445 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.512564 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.512604 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.512868 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.513027 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.513130 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.513195 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.513284 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.514486 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.515746 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.517972 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.518144 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.518573 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.518577 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.518852 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.522225 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.522376 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.522912 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.525927 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.526003 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.526032 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.526068 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.526095 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:43Z","lastTransitionTime":"2026-03-09T18:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.568035 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.568425 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.568772 4821 scope.go:117] "RemoveContainer" containerID="119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5" Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.569114 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.587910 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.593293 4821 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.603165 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.619026 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.628672 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.628714 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.628730 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.628755 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.628772 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:43Z","lastTransitionTime":"2026-03-09T18:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.641175 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.653304 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.659281 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.659344 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.659372 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: 
\"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.659398 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.659427 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.659458 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.659487 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.659516 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 
18:25:43.659547 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.659577 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.659608 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.659637 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.659667 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.659700 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.659728 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.659755 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.659785 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.659816 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.659832 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.660292 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.659845 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.660792 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.660820 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.660842 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 
18:25:43.660862 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.660881 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.660899 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.660919 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.660937 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.660957 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.660975 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.660994 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661012 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661029 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661049 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661094 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661115 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661135 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661156 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661175 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661195 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661214 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661233 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661251 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661269 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661290 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661311 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661347 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661372 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661391 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661409 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661427 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661445 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661465 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661486 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661504 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661523 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 09 18:25:43 crc 
kubenswrapper[4821]: I0309 18:25:43.661542 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661580 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661598 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661617 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661639 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661656 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661673 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661691 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661729 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661747 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661765 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 
18:25:43.661787 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661806 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661826 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661864 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661887 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661929 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod 
\"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661948 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661965 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661985 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.662005 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.662027 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.662048 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.662066 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.662085 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.662105 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.662124 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.662222 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.662245 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.662266 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.662284 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.662303 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.662340 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.662359 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.662377 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.662396 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.662417 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.662437 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.662482 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 09 18:25:43 crc 
kubenswrapper[4821]: I0309 18:25:43.662504 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.662559 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.660278 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.660416 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.660490 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.662648 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.660564 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.660571 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661035 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661113 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661256 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661802 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.661999 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.662416 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.662556 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.662808 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.662927 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.663034 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.663975 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.664218 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.664233 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.664444 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.664498 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.664566 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.662585 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.664628 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.664655 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.664731 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.664755 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.664778 4821 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.664801 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.664822 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.664843 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.664864 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.664886 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" 
(UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.664907 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.664928 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.664949 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.664972 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.664993 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665015 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665035 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665059 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665079 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665098 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665119 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665139 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665160 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665184 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665205 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665227 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665248 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665274 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665295 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665316 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665354 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665376 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod 
\"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665395 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665416 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665436 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665457 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665484 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665508 4821 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665533 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665557 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665576 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665600 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665621 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665643 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665664 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665688 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665711 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665732 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665752 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665756 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665776 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665998 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666021 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666040 4821 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666060 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666081 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666098 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666117 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666136 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod 
\"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666152 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666168 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666183 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666200 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666217 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666238 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666255 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666271 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666289 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666306 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666359 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" 
(UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666375 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666390 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666409 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666425 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666442 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666459 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666475 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666495 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666513 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666532 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666549 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666566 4821 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666583 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666601 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666617 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666633 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666650 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: 
\"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666667 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666684 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666701 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666720 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666738 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666754 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666806 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666824 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666845 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666862 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666878 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 
09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666895 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666937 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666966 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666992 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667013 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: 
\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667035 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667058 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667080 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667101 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667117 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667143 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667170 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667191 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667218 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667236 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667309 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667337 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667348 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667358 4821 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667368 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667377 4821 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667387 4821 
reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667398 4821 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667409 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667430 4821 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667441 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667451 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667460 4821 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667470 4821 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667479 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667489 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667498 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667507 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667517 4821 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667526 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667536 4821 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667545 4821 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667555 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667564 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.678478 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.664885 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665298 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665423 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665420 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665487 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665524 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665704 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.665784 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666625 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666827 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666902 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.666901 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667064 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667237 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667618 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.667642 4821 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.681757 4821 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.681860 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.681888 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.668061 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.668068 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.668099 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.668619 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.668784 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.668807 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.668851 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.668860 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.668867 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.669030 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:25:44.16900631 +0000 UTC m=+81.330382176 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.669088 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.669194 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.682544 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.682644 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.682763 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.667958 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.669496 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.669514 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.669538 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.669572 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.669882 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.682889 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.669903 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.669985 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.669874 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.670047 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.670355 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.670399 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.670421 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.670697 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.670722 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.670758 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.670784 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.671268 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.671366 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.671348 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.682977 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-09 18:25:44.182955276 +0000 UTC m=+81.344331142 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.671383 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.671541 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.671632 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.671826 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.671845 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.672028 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.672175 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.672194 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.672192 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.672473 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.672576 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.672814 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.672925 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.672998 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.673043 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.673247 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.673288 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.673290 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.673308 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.673688 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.673879 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.673910 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.674079 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.674314 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.674437 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.674442 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.674627 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.674983 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.675007 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.675082 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.675159 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.675278 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.675283 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.675396 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.675455 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.675883 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.676478 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.676652 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.676679 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.676822 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.677264 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.677561 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.677574 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.677592 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.677630 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.677659 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.677982 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.678179 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.678251 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.678425 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.678749 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.678801 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.678848 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.679088 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.679388 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.679488 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.680439 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.680643 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.680527 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.680682 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.680702 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.680702 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.680973 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.681172 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.681186 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.681537 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.681570 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.682083 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.669283 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.683489 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.683714 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.685687 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.686045 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.686203 4821 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.687418 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-09 18:25:44.187400945 +0000 UTC m=+81.348776801 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.687674 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.695915 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.696007 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.696141 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.696430 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.696595 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.696902 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.696924 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.696988 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.697273 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.697693 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.697885 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.698022 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.698153 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.698405 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.698450 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.698475 4821 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.698562 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.698598 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-09 18:25:44.19856131 +0000 UTC m=+81.359937246 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.698711 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.699088 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). 
InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.699141 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.699851 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.699429 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.702536 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55fc2290-6300-4f7d-98d7-8abdde521a83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:25:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0309 18:25:24.128997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0309 18:25:24.129108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0309 18:25:24.129664 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2492006033/tls.crt::/tmp/serving-cert-2492006033/tls.key\\\\\\\"\\\\nI0309 18:25:24.405897 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0309 18:25:24.410101 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0309 18:25:24.410116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0309 18:25:24.410139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0309 18:25:24.410148 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0309 18:25:24.421807 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0309 18:25:24.421854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421867 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0309 18:25:24.421885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0309 18:25:24.421866 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0309 18:25:24.421891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0309 18:25:24.421927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0309 18:25:24.422998 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.707889 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.710377 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.710633 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.710818 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.711239 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.711262 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.711415 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.711697 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.711706 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.711752 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.711776 4821 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.711877 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-09 18:25:44.211845957 +0000 UTC m=+81.373221913 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.713494 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.716263 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.716683 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.717949 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.719240 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.720250 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.720582 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.720721 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.720985 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.721048 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.721397 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.721511 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.721638 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.721600 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.721800 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.721977 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.722050 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.722112 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.722146 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.722238 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.722611 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.722748 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.730907 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.735433 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.735481 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.735501 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.735526 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.735542 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:43Z","lastTransitionTime":"2026-03-09T18:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.746294 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.747106 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.749905 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.752767 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.754141 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.758290 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.766791 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.768364 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.768516 4821 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.768546 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.768717 4821 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.768771 4821 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.768798 4821 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.768826 4821 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.768727 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.768850 4821 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.768877 4821 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.768902 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.768923 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.768941 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.768960 4821 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.768978 4821 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769003 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769030 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769057 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769082 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769123 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769152 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769181 4821 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769208 4821 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769236 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769261 4821 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769286 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769312 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769376 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769403 4821 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769429 4821 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769453 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769479 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769504 4821 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769529 4821 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769555 4821 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769581 4821 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath 
\"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769616 4821 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769639 4821 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769656 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769674 4821 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769691 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769708 4821 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769726 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769742 4821 reconciler_common.go:293] "Volume detached 
for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769760 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769778 4821 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769795 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769813 4821 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769832 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769852 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769868 4821 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769886 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769904 4821 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769920 4821 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769937 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769954 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769970 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.769987 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc 
kubenswrapper[4821]: I0309 18:25:43.770006 4821 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770023 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770040 4821 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770058 4821 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770075 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770091 4821 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770108 4821 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770125 4821 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770144 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770163 4821 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770180 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770197 4821 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770214 4821 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770230 4821 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770247 4821 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770264 4821 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770281 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770298 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770369 4821 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770388 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770406 4821 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770423 4821 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" 
DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770440 4821 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770457 4821 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770474 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770491 4821 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770509 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770526 4821 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770543 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770560 4821 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770576 4821 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770594 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770612 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770629 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770646 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770665 4821 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770687 4821 
reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770711 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770736 4821 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770760 4821 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770784 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770809 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770832 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770855 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: 
\"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770873 4821 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770889 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770907 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770926 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770943 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770960 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770980 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.770998 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771016 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771034 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771052 4821 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771068 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771084 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771101 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: 
\"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771119 4821 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771135 4821 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771152 4821 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771169 4821 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771185 4821 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771203 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771219 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath 
\"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771236 4821 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771253 4821 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771270 4821 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771288 4821 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771306 4821 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771352 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771370 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771387 4821 reconciler_common.go:293] "Volume 
detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771405 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771421 4821 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771439 4821 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771456 4821 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771472 4821 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771489 4821 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771506 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771523 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771541 4821 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771558 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771576 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771594 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771611 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771629 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771645 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771662 4821 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771678 4821 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771695 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771712 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771730 4821 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771747 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: 
I0309 18:25:43.771764 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771782 4821 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771798 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771815 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771832 4821 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771847 4821 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771864 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771881 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771897 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771913 4821 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771930 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771946 4821 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771962 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771979 4821 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.771996 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Mar 09 
18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.772015 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.772031 4821 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.772048 4821 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.772064 4821 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.772081 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.772100 4821 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.772117 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 
18:25:43.778139 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55fc2290-6300-4f7d-98d7-8abdde521a83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:25:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0309 18:25:24.128997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0309 18:25:24.129108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0309 18:25:24.129664 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2492006033/tls.crt::/tmp/serving-cert-2492006033/tls.key\\\\\\\"\\\\nI0309 18:25:24.405897 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0309 18:25:24.410101 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0309 18:25:24.410116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0309 18:25:24.410139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0309 18:25:24.410148 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0309 18:25:24.421807 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0309 18:25:24.421854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421867 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0309 
18:25:24.421885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0309 18:25:24.421866 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0309 18:25:24.421891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0309 18:25:24.421927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0309 18:25:24.422998 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"
containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.789074 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.837716 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.837760 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.837769 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.837786 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.837795 4821 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:43Z","lastTransitionTime":"2026-03-09T18:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.843100 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.855517 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.856730 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:25:43 crc kubenswrapper[4821]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 09 18:25:43 crc kubenswrapper[4821]: set -o allexport Mar 09 18:25:43 crc kubenswrapper[4821]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 09 18:25:43 crc kubenswrapper[4821]: source /etc/kubernetes/apiserver-url.env Mar 09 18:25:43 crc kubenswrapper[4821]: else Mar 09 18:25:43 crc kubenswrapper[4821]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 09 18:25:43 crc kubenswrapper[4821]: exit 1 Mar 09 18:25:43 crc kubenswrapper[4821]: fi Mar 09 18:25:43 crc kubenswrapper[4821]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 09 18:25:43 crc kubenswrapper[4821]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:25:43 crc kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.858749 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 09 18:25:43 crc kubenswrapper[4821]: W0309 18:25:43.868863 4821 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-ae291965934f12127241c7358402cb627975c071cae9657a1d45049e97b3ca65 WatchSource:0}: Error finding container ae291965934f12127241c7358402cb627975c071cae9657a1d45049e97b3ca65: Status 404 returned error can't find the container with id ae291965934f12127241c7358402cb627975c071cae9657a1d45049e97b3ca65 Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.871490 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:25:43 crc kubenswrapper[4821]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 09 18:25:43 crc kubenswrapper[4821]: if [[ -f "/env/_master" ]]; then Mar 09 18:25:43 crc kubenswrapper[4821]: set -o allexport Mar 09 18:25:43 crc kubenswrapper[4821]: source "/env/_master" Mar 09 18:25:43 crc kubenswrapper[4821]: set +o allexport Mar 09 18:25:43 crc kubenswrapper[4821]: fi Mar 09 18:25:43 crc kubenswrapper[4821]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Mar 09 18:25:43 crc kubenswrapper[4821]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 09 18:25:43 crc kubenswrapper[4821]: ho_enable="--enable-hybrid-overlay" Mar 09 18:25:43 crc kubenswrapper[4821]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 09 18:25:43 crc kubenswrapper[4821]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 09 18:25:43 crc kubenswrapper[4821]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 09 18:25:43 crc kubenswrapper[4821]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 09 18:25:43 crc kubenswrapper[4821]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 09 18:25:43 crc kubenswrapper[4821]: --webhook-host=127.0.0.1 \ Mar 09 18:25:43 crc kubenswrapper[4821]: --webhook-port=9743 \ Mar 09 18:25:43 crc kubenswrapper[4821]: ${ho_enable} \ Mar 09 18:25:43 crc kubenswrapper[4821]: --enable-interconnect \ Mar 09 18:25:43 crc kubenswrapper[4821]: --disable-approver \ Mar 09 18:25:43 crc kubenswrapper[4821]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 09 18:25:43 crc kubenswrapper[4821]: --wait-for-kubernetes-api=200s \ Mar 09 18:25:43 crc kubenswrapper[4821]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 09 18:25:43 crc kubenswrapper[4821]: --loglevel="${LOGLEVEL}" Mar 09 18:25:43 crc kubenswrapper[4821]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:25:43 crc kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.872605 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.874303 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:25:43 crc kubenswrapper[4821]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 09 18:25:43 crc kubenswrapper[4821]: if [[ -f "/env/_master" ]]; then Mar 09 18:25:43 crc kubenswrapper[4821]: set -o allexport Mar 09 18:25:43 crc kubenswrapper[4821]: source "/env/_master" Mar 09 18:25:43 crc kubenswrapper[4821]: set +o allexport Mar 09 18:25:43 crc kubenswrapper[4821]: fi Mar 09 18:25:43 crc kubenswrapper[4821]: Mar 09 18:25:43 crc kubenswrapper[4821]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 09 18:25:43 crc kubenswrapper[4821]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 09 18:25:43 crc kubenswrapper[4821]: --disable-webhook \ Mar 09 18:25:43 crc kubenswrapper[4821]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 09 18:25:43 crc kubenswrapper[4821]: --loglevel="${LOGLEVEL}" Mar 09 18:25:43 crc kubenswrapper[4821]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:25:43 crc kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.875468 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.881103 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ae291965934f12127241c7358402cb627975c071cae9657a1d45049e97b3ca65"} Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.882578 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5a72f96e0614218518aa06ed2024a953640b6538d2ca9e54c70f31bc187570d1"} Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.883023 4821 scope.go:117] "RemoveContainer" containerID="119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5" Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.883155 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.883832 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:25:43 crc kubenswrapper[4821]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 09 18:25:43 crc kubenswrapper[4821]: if [[ -f "/env/_master" ]]; then Mar 09 18:25:43 crc kubenswrapper[4821]: set -o allexport Mar 09 18:25:43 crc kubenswrapper[4821]: source "/env/_master" Mar 09 18:25:43 crc kubenswrapper[4821]: set +o allexport Mar 09 18:25:43 crc 
kubenswrapper[4821]: fi Mar 09 18:25:43 crc kubenswrapper[4821]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 09 18:25:43 crc kubenswrapper[4821]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 09 18:25:43 crc kubenswrapper[4821]: ho_enable="--enable-hybrid-overlay" Mar 09 18:25:43 crc kubenswrapper[4821]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 09 18:25:43 crc kubenswrapper[4821]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 09 18:25:43 crc kubenswrapper[4821]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 09 18:25:43 crc kubenswrapper[4821]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 09 18:25:43 crc kubenswrapper[4821]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 09 18:25:43 crc kubenswrapper[4821]: --webhook-host=127.0.0.1 \ Mar 09 18:25:43 crc kubenswrapper[4821]: --webhook-port=9743 \ Mar 09 18:25:43 crc kubenswrapper[4821]: ${ho_enable} \ Mar 09 18:25:43 crc kubenswrapper[4821]: --enable-interconnect \ Mar 09 18:25:43 crc kubenswrapper[4821]: --disable-approver \ Mar 09 18:25:43 crc kubenswrapper[4821]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 09 18:25:43 crc kubenswrapper[4821]: --wait-for-kubernetes-api=200s \ Mar 09 18:25:43 crc kubenswrapper[4821]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 09 18:25:43 crc kubenswrapper[4821]: --loglevel="${LOGLEVEL}" Mar 09 18:25:43 crc kubenswrapper[4821]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:25:43 crc 
kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.884955 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizeP
olicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.885362 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:25:43 crc kubenswrapper[4821]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 09 18:25:43 crc kubenswrapper[4821]: set -o allexport Mar 09 18:25:43 crc kubenswrapper[4821]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 09 18:25:43 crc kubenswrapper[4821]: source /etc/kubernetes/apiserver-url.env Mar 09 18:25:43 crc kubenswrapper[4821]: else Mar 09 18:25:43 crc kubenswrapper[4821]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 09 18:25:43 crc kubenswrapper[4821]: exit 1 Mar 09 18:25:43 crc kubenswrapper[4821]: fi Mar 09 18:25:43 crc kubenswrapper[4821]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 09 18:25:43 crc kubenswrapper[4821]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:25:43 crc kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.885923 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:25:43 crc kubenswrapper[4821]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 09 18:25:43 crc kubenswrapper[4821]: if [[ -f "/env/_master" ]]; then Mar 09 18:25:43 crc kubenswrapper[4821]: set -o allexport Mar 09 18:25:43 crc kubenswrapper[4821]: source "/env/_master" Mar 09 18:25:43 crc kubenswrapper[4821]: set +o allexport Mar 09 18:25:43 crc 
kubenswrapper[4821]: fi Mar 09 18:25:43 crc kubenswrapper[4821]: Mar 09 18:25:43 crc kubenswrapper[4821]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 09 18:25:43 crc kubenswrapper[4821]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 09 18:25:43 crc kubenswrapper[4821]: --disable-webhook \ Mar 09 18:25:43 crc kubenswrapper[4821]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 09 18:25:43 crc kubenswrapper[4821]: --loglevel="${LOGLEVEL}" Mar 09 18:25:43 crc kubenswrapper[4821]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Conta
inerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:25:43 crc kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.886975 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.887001 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 09 18:25:43 crc kubenswrapper[4821]: E0309 18:25:43.887046 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.895735 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.903820 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.912386 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.925516 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.932774 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.940063 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.940121 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.940133 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.940155 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.940166 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:43Z","lastTransitionTime":"2026-03-09T18:25:43Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.942808 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55fc2290-6300-4f7d-98d7-8abdde521a83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:25:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0309 18:25:24.128997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0309 18:25:24.129108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0309 18:25:24.129664 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2492006033/tls.crt::/tmp/serving-cert-2492006033/tls.key\\\\\\\"\\\\nI0309 18:25:24.405897 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0309 18:25:24.410101 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0309 18:25:24.410116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0309 18:25:24.410139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0309 18:25:24.410148 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0309 18:25:24.421807 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0309 18:25:24.421854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421867 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0309 18:25:24.421885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0309 18:25:24.421866 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0309 18:25:24.421891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0309 18:25:24.421927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0309 18:25:24.422998 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.958239 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.967920 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.977864 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.987267 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:43 crc kubenswrapper[4821]: I0309 18:25:43.996882 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.008491 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55fc2290-6300-4f7d-98d7-8abdde521a83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:25:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0309 18:25:24.128997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0309 18:25:24.129108 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0309 18:25:24.129664 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2492006033/tls.crt::/tmp/serving-cert-2492006033/tls.key\\\\\\\"\\\\nI0309 18:25:24.405897 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0309 18:25:24.410101 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0309 18:25:24.410116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0309 18:25:24.410139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0309 18:25:24.410148 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0309 18:25:24.421807 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0309 18:25:24.421854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421867 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0309 18:25:24.421885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0309 18:25:24.421866 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0309 18:25:24.421891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0309 18:25:24.421927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0309 18:25:24.422998 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349
109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.017458 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.025975 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.042754 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.042835 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.042858 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.042896 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.042916 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:44Z","lastTransitionTime":"2026-03-09T18:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.146371 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.146421 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.146433 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.146453 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.146466 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:44Z","lastTransitionTime":"2026-03-09T18:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.175954 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:25:44 crc kubenswrapper[4821]: E0309 18:25:44.176110 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:25:45.17609111 +0000 UTC m=+82.337466966 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.248816 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.248885 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.248904 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.248928 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:44 crc kubenswrapper[4821]: 
I0309 18:25:44.248945 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:44Z","lastTransitionTime":"2026-03-09T18:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.277450 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.277512 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.277554 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.277594 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:25:44 crc kubenswrapper[4821]: E0309 18:25:44.277714 4821 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 09 18:25:44 crc kubenswrapper[4821]: E0309 18:25:44.277755 4821 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 09 18:25:44 crc kubenswrapper[4821]: E0309 18:25:44.277795 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-09 18:25:45.277772889 +0000 UTC m=+82.439148775 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 09 18:25:44 crc kubenswrapper[4821]: E0309 18:25:44.277833 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-09 18:25:45.277811 +0000 UTC m=+82.439186886 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 09 18:25:44 crc kubenswrapper[4821]: E0309 18:25:44.277848 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 09 18:25:44 crc kubenswrapper[4821]: E0309 18:25:44.277921 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 09 18:25:44 crc kubenswrapper[4821]: E0309 18:25:44.277938 4821 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 09 18:25:44 crc kubenswrapper[4821]: E0309 18:25:44.277851 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 09 18:25:44 crc kubenswrapper[4821]: E0309 18:25:44.278003 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-09 18:25:45.277983355 +0000 UTC m=+82.439359221 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 09 18:25:44 crc kubenswrapper[4821]: E0309 18:25:44.278013 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 09 18:25:44 crc kubenswrapper[4821]: E0309 18:25:44.278029 4821 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 09 18:25:44 crc kubenswrapper[4821]: E0309 18:25:44.278089 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-09 18:25:45.278071908 +0000 UTC m=+82.439447974 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.351511 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.351589 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.351612 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.351641 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.351662 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:44Z","lastTransitionTime":"2026-03-09T18:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.454295 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.454360 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.454373 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.454389 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.454400 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:44Z","lastTransitionTime":"2026-03-09T18:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.557624 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.557679 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.557699 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.557724 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.557741 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:44Z","lastTransitionTime":"2026-03-09T18:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.659687 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.659793 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.659818 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.659842 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.659861 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:44Z","lastTransitionTime":"2026-03-09T18:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.762770 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.762836 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.762845 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.762863 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.762897 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:44Z","lastTransitionTime":"2026-03-09T18:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.865675 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.865746 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.865770 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.865798 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.865820 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:44Z","lastTransitionTime":"2026-03-09T18:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.886184 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"68110cff080cfd56371f5e250237e436b39cc40032bc5980319115e88af70a72"} Mar 09 18:25:44 crc kubenswrapper[4821]: E0309 18:25:44.887890 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 09 18:25:44 crc kubenswrapper[4821]: E0309 18:25:44.888978 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.902969 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.916812 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.931920 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.946458 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.962763 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.968236 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.968295 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.968313 4821 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.968367 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.968392 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:44Z","lastTransitionTime":"2026-03-09T18:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.976682 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:44 crc kubenswrapper[4821]: I0309 18:25:44.998751 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55fc2290-6300-4f7d-98d7-8abdde521a83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:25:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0309 18:25:24.128997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0309 18:25:24.129108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0309 18:25:24.129664 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2492006033/tls.crt::/tmp/serving-cert-2492006033/tls.key\\\\\\\"\\\\nI0309 18:25:24.405897 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0309 18:25:24.410101 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0309 18:25:24.410116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0309 18:25:24.410139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0309 18:25:24.410148 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0309 18:25:24.421807 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0309 18:25:24.421854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421867 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0309 18:25:24.421885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0309 18:25:24.421866 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0309 18:25:24.421891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0309 18:25:24.421927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0309 18:25:24.422998 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.072415 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.072480 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.072496 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.072516 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.072529 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:45Z","lastTransitionTime":"2026-03-09T18:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.175416 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.175564 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.175586 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.175619 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.175642 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:45Z","lastTransitionTime":"2026-03-09T18:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.188085 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 09 18:25:45 crc kubenswrapper[4821]: E0309 18:25:45.188386 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:25:47.188306602 +0000 UTC m=+84.349682498 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.278398 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.278467 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.278489 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.278515 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.278534 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:45Z","lastTransitionTime":"2026-03-09T18:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.288941 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.289009 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.289051 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.289090 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 09 18:25:45 crc kubenswrapper[4821]: E0309 18:25:45.289129 4821 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 09 18:25:45 crc kubenswrapper[4821]: E0309 18:25:45.289192 4821 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Mar 09 18:25:45 crc kubenswrapper[4821]: E0309 18:25:45.289207 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 09 18:25:45 crc kubenswrapper[4821]: E0309 18:25:45.289230 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 09 18:25:45 crc kubenswrapper[4821]: E0309 18:25:45.289234 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 09 18:25:45 crc kubenswrapper[4821]: E0309 18:25:45.289201 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-09 18:25:47.289181498 +0000 UTC m=+84.450557364 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 09 18:25:45 crc kubenswrapper[4821]: E0309 18:25:45.289265 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 09 18:25:45 crc kubenswrapper[4821]: E0309 18:25:45.289286 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-09 18:25:47.289264831 +0000 UTC m=+84.450640727 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Mar 09 18:25:45 crc kubenswrapper[4821]: E0309 18:25:45.289244 4821 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 09 18:25:45 crc kubenswrapper[4821]: E0309 18:25:45.289378 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-09 18:25:47.289365704 +0000 UTC m=+84.450741600 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 09 18:25:45 crc kubenswrapper[4821]: E0309 18:25:45.289286 4821 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 09 18:25:45 crc kubenswrapper[4821]: E0309 18:25:45.289450 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-09 18:25:47.289436616 +0000 UTC m=+84.450812512 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.381769 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.381853 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.381888 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.381923 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.381946 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:45Z","lastTransitionTime":"2026-03-09T18:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.485427 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.485515 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.485541 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.485572 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.485595 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:45Z","lastTransitionTime":"2026-03-09T18:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.551669 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.551755 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.551773 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 09 18:25:45 crc kubenswrapper[4821]: E0309 18:25:45.551869 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 09 18:25:45 crc kubenswrapper[4821]: E0309 18:25:45.552021 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 09 18:25:45 crc kubenswrapper[4821]: E0309 18:25:45.552140 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.558484 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.559738 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.561763 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.563473 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.564950 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.566054 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.567430 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.568658 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.569903 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.570967 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.571978 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.574653 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.575912 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.578301 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.579408 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.581206 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.582386 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.583141 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.585381 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.586884 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.588282 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.588781 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.588827 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.588843 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.588869 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.588885 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:45Z","lastTransitionTime":"2026-03-09T18:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.590760 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.591682 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.593828 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.594783 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.596139 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.597678 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.598766 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.600060 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.601132 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.602178 4821 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.603549 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.607577 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.608767 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.610592 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.613923 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.615516 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.617554 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.618995 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.621197 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.622154 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.624190 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.625615 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.626894 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.627891 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.629050 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.630135 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.631726 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.632754 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.633806 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.635668 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.636915 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.638127 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.639098 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.692243 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.692374 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.692406 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.692483 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.692509 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:45Z","lastTransitionTime":"2026-03-09T18:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.795173 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.795243 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.795261 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.795287 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.795306 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:45Z","lastTransitionTime":"2026-03-09T18:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.897350 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.897424 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.897446 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.897474 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.897496 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:45Z","lastTransitionTime":"2026-03-09T18:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.999431 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.999500 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.999519 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:45 crc kubenswrapper[4821]: I0309 18:25:45.999552 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:45.999574 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:45Z","lastTransitionTime":"2026-03-09T18:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.102649 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.102702 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.102713 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.102730 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.102742 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:46Z","lastTransitionTime":"2026-03-09T18:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.205433 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.205531 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.205549 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.205576 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.205596 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:46Z","lastTransitionTime":"2026-03-09T18:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.308931 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.309590 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.309630 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.309657 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.309674 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:46Z","lastTransitionTime":"2026-03-09T18:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.412255 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.412361 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.412386 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.412415 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.412435 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:46Z","lastTransitionTime":"2026-03-09T18:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.514393 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.514447 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.514466 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.514488 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.514504 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:46Z","lastTransitionTime":"2026-03-09T18:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.617415 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.617477 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.617497 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.617521 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.617538 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:46Z","lastTransitionTime":"2026-03-09T18:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.719776 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.719827 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.719841 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.719860 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.719871 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:46Z","lastTransitionTime":"2026-03-09T18:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.823816 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.824182 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.824362 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.824565 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.824712 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:46Z","lastTransitionTime":"2026-03-09T18:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.927555 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.927667 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.927696 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.927738 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:46 crc kubenswrapper[4821]: I0309 18:25:46.927769 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:46Z","lastTransitionTime":"2026-03-09T18:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.031167 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.031558 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.031773 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.031983 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.032190 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:47Z","lastTransitionTime":"2026-03-09T18:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.135025 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.135076 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.135095 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.135120 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.135141 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:47Z","lastTransitionTime":"2026-03-09T18:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.203919 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:25:47 crc kubenswrapper[4821]: E0309 18:25:47.204237 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-09 18:25:51.204209189 +0000 UTC m=+88.365585085 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.238097 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.238131 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.238149 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.238185 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.238201 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:47Z","lastTransitionTime":"2026-03-09T18:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.304977 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.305074 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.305132 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.305178 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:25:47 crc kubenswrapper[4821]: E0309 18:25:47.305361 4821 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Mar 09 18:25:47 crc kubenswrapper[4821]: E0309 18:25:47.305394 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 09 18:25:47 crc kubenswrapper[4821]: E0309 18:25:47.305422 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 09 18:25:47 crc kubenswrapper[4821]: E0309 18:25:47.305442 4821 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 09 18:25:47 crc kubenswrapper[4821]: E0309 18:25:47.305467 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-09 18:25:51.305440905 +0000 UTC m=+88.466816801 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 09 18:25:47 crc kubenswrapper[4821]: E0309 18:25:47.305500 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-09 18:25:51.305482466 +0000 UTC m=+88.466858352 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 09 18:25:47 crc kubenswrapper[4821]: E0309 18:25:47.305535 4821 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 09 18:25:47 crc kubenswrapper[4821]: E0309 18:25:47.305554 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 09 18:25:47 crc kubenswrapper[4821]: E0309 18:25:47.305632 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 09 18:25:47 crc kubenswrapper[4821]: E0309 18:25:47.305655 4821 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 09 18:25:47 crc kubenswrapper[4821]: E0309 18:25:47.305698 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-09 18:25:51.305658331 +0000 UTC m=+88.467034247 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 09 18:25:47 crc kubenswrapper[4821]: E0309 18:25:47.305764 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-09 18:25:51.305729593 +0000 UTC m=+88.467105489 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.341823 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.341889 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.341907 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.341936 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.341956 4821 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:47Z","lastTransitionTime":"2026-03-09T18:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.445122    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.445220    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.445246    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.445275    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.445299    4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:47Z","lastTransitionTime":"2026-03-09T18:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.548304    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.548404    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.548433    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.548462    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.548483    4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:47Z","lastTransitionTime":"2026-03-09T18:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.550880    4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.550897    4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.551028    4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 09 18:25:47 crc kubenswrapper[4821]: E0309 18:25:47.551052    4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 09 18:25:47 crc kubenswrapper[4821]: E0309 18:25:47.551164    4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 09 18:25:47 crc kubenswrapper[4821]: E0309 18:25:47.551295    4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.651481    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.651540    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.651557    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.651582    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.651599    4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:47Z","lastTransitionTime":"2026-03-09T18:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.753995    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.754057    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.754081    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.754109    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.754146    4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:47Z","lastTransitionTime":"2026-03-09T18:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.857121    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.857171    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.857187    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.857207    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.857223    4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:47Z","lastTransitionTime":"2026-03-09T18:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.960630    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.960985    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.961166    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.961381    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:47 crc kubenswrapper[4821]: I0309 18:25:47.961582    4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:47Z","lastTransitionTime":"2026-03-09T18:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.064814    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.064873    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.064890    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.064918    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.064939    4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:48Z","lastTransitionTime":"2026-03-09T18:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.167205    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.167265    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.167293    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.167363    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.167388    4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:48Z","lastTransitionTime":"2026-03-09T18:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.270453    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.270516    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.270544    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.270572    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.270593    4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:48Z","lastTransitionTime":"2026-03-09T18:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.373944    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.374081    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.374099    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.374124    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.374142    4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:48Z","lastTransitionTime":"2026-03-09T18:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.476869    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.477014    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.477037    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.477066    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.477087    4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:48Z","lastTransitionTime":"2026-03-09T18:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.580488    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.580622    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.580655    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.580683    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.580705    4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:48Z","lastTransitionTime":"2026-03-09T18:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.695575    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.695654    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.695671    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.695695    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.695713    4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:48Z","lastTransitionTime":"2026-03-09T18:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.799002    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.799051    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.799071    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.799094    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.799112    4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:48Z","lastTransitionTime":"2026-03-09T18:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.901866    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.901929    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.901947    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.901972    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:48 crc kubenswrapper[4821]: I0309 18:25:48.901990    4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:48Z","lastTransitionTime":"2026-03-09T18:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.004723    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.004779    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.004797    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.004819    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.004837    4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:49Z","lastTransitionTime":"2026-03-09T18:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.107394    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.107474    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.107487    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.107803    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.107820    4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:49Z","lastTransitionTime":"2026-03-09T18:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.210849    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.210916    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.210934    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.210959    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.210975    4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:49Z","lastTransitionTime":"2026-03-09T18:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.314109    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.314174    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.314196    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.314228    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.314250    4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:49Z","lastTransitionTime":"2026-03-09T18:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.417690    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.418078    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.418101    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.418132    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.418154    4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:49Z","lastTransitionTime":"2026-03-09T18:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.520790    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.520872    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.520896    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.520928    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.520951    4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:49Z","lastTransitionTime":"2026-03-09T18:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.551155    4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 09 18:25:49 crc kubenswrapper[4821]: E0309 18:25:49.551313    4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.551166    4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.551404    4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 09 18:25:49 crc kubenswrapper[4821]: E0309 18:25:49.551518    4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 09 18:25:49 crc kubenswrapper[4821]: E0309 18:25:49.551628    4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.623535    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.623600    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.623628    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.623650    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.623671    4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:49Z","lastTransitionTime":"2026-03-09T18:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.725850    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.725888    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.725896    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.725910    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.725919    4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:49Z","lastTransitionTime":"2026-03-09T18:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.828363    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.828411    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.828427    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.828450    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.828469    4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:49Z","lastTransitionTime":"2026-03-09T18:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.930917    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.930962    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.930974    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.930990    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:49 crc kubenswrapper[4821]: I0309 18:25:49.931002    4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:49Z","lastTransitionTime":"2026-03-09T18:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.033477    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.033552    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.033569    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.033590    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.033605    4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:50Z","lastTransitionTime":"2026-03-09T18:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.135631    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.135667    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.135678    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.135695    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.135707    4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:50Z","lastTransitionTime":"2026-03-09T18:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.238961    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.239059    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.239083    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.239115    4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.239133    4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:50Z","lastTransitionTime":"2026-03-09T18:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.341542 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.341590 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.341602 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.341624 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.341636 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:50Z","lastTransitionTime":"2026-03-09T18:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.443820 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.443857 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.443868 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.443883 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.443893 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:50Z","lastTransitionTime":"2026-03-09T18:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.546749 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.546796 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.546813 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.546838 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.546854 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:50Z","lastTransitionTime":"2026-03-09T18:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.649050 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.649111 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.649128 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.649150 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.649169 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:50Z","lastTransitionTime":"2026-03-09T18:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.752133 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.752188 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.752206 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.752229 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.752246 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:50Z","lastTransitionTime":"2026-03-09T18:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.855125 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.855188 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.855207 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.855232 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.855248 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:50Z","lastTransitionTime":"2026-03-09T18:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.958514 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.958575 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.958591 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.958616 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.958633 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:50Z","lastTransitionTime":"2026-03-09T18:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.996644 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.996699 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.996713 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.996745 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:50 crc kubenswrapper[4821]: I0309 18:25:50.996758 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:50Z","lastTransitionTime":"2026-03-09T18:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:51 crc kubenswrapper[4821]: E0309 18:25:51.011973 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"acfa7603-a6f5-410a-9ddc-2890d44e3c69\\\",\\\"systemUUID\\\":\\\"ea3e2df3-251a-4cdb-9064-3c52ac509aba\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.017872 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.017922 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.017935 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.017954 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.017966 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:51Z","lastTransitionTime":"2026-03-09T18:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:51 crc kubenswrapper[4821]: E0309 18:25:51.032787 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"acfa7603-a6f5-410a-9ddc-2890d44e3c69\\\",\\\"systemUUID\\\":\\\"ea3e2df3-251a-4cdb-9064-3c52ac509aba\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.044289 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.044422 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.044451 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.044483 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.044516 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:51Z","lastTransitionTime":"2026-03-09T18:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:51 crc kubenswrapper[4821]: E0309 18:25:51.062066 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"acfa7603-a6f5-410a-9ddc-2890d44e3c69\\\",\\\"systemUUID\\\":\\\"ea3e2df3-251a-4cdb-9064-3c52ac509aba\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.067027 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.067068 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.067084 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.067109 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.067126 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:51Z","lastTransitionTime":"2026-03-09T18:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:51 crc kubenswrapper[4821]: E0309 18:25:51.085736 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"acfa7603-a6f5-410a-9ddc-2890d44e3c69\\\",\\\"systemUUID\\\":\\\"ea3e2df3-251a-4cdb-9064-3c52ac509aba\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.090436 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.090495 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.090511 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.090533 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.090550 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:51Z","lastTransitionTime":"2026-03-09T18:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:51 crc kubenswrapper[4821]: E0309 18:25:51.109616 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"acfa7603-a6f5-410a-9ddc-2890d44e3c69\\\",\\\"systemUUID\\\":\\\"ea3e2df3-251a-4cdb-9064-3c52ac509aba\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:51 crc kubenswrapper[4821]: E0309 18:25:51.109941 4821 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.112598 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.112673 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.112699 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.112731 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.112754 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:51Z","lastTransitionTime":"2026-03-09T18:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.215989 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.216051 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.216071 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.216095 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.216113 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:51Z","lastTransitionTime":"2026-03-09T18:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.244148 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:25:51 crc kubenswrapper[4821]: E0309 18:25:51.244390 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-09 18:25:59.244355914 +0000 UTC m=+96.405731810 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.318921 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.318984 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.319002 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.319025 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.319042 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:51Z","lastTransitionTime":"2026-03-09T18:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.345042 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.345137 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.345206 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:25:51 crc kubenswrapper[4821]: E0309 18:25:51.345244 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 09 18:25:51 crc kubenswrapper[4821]: E0309 18:25:51.345282 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 09 18:25:51 crc kubenswrapper[4821]: E0309 18:25:51.345244 4821 configmap.go:193] Couldn't get configMap 
openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.345276 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:25:51 crc kubenswrapper[4821]: E0309 18:25:51.345486 4821 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 09 18:25:51 crc kubenswrapper[4821]: E0309 18:25:51.345501 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 09 18:25:51 crc kubenswrapper[4821]: E0309 18:25:51.345305 4821 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 09 18:25:51 crc kubenswrapper[4821]: E0309 18:25:51.345540 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-09 18:25:59.345493897 +0000 UTC m=+96.506869793 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 09 18:25:51 crc kubenswrapper[4821]: E0309 18:25:51.345548 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 09 18:25:51 crc kubenswrapper[4821]: E0309 18:25:51.345597 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-09 18:25:59.345569729 +0000 UTC m=+96.506945625 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 09 18:25:51 crc kubenswrapper[4821]: E0309 18:25:51.345602 4821 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 09 18:25:51 crc kubenswrapper[4821]: E0309 18:25:51.345638 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-03-09 18:25:59.345616761 +0000 UTC m=+96.506992807 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 09 18:25:51 crc kubenswrapper[4821]: E0309 18:25:51.345681 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-09 18:25:59.345661572 +0000 UTC m=+96.507037578 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.421983 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.422050 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.422071 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.422097 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 
18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.422117 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:51Z","lastTransitionTime":"2026-03-09T18:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.525382 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.525447 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.525475 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.525499 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.525517 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:51Z","lastTransitionTime":"2026-03-09T18:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.551561 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.551642 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.551676 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:25:51 crc kubenswrapper[4821]: E0309 18:25:51.552098 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 09 18:25:51 crc kubenswrapper[4821]: E0309 18:25:51.551911 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 09 18:25:51 crc kubenswrapper[4821]: E0309 18:25:51.552212 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.628714 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.628807 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.628833 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.628864 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.628888 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:51Z","lastTransitionTime":"2026-03-09T18:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.731870 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.731942 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.731959 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.731983 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.732000 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:51Z","lastTransitionTime":"2026-03-09T18:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.834373 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.834413 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.834425 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.834442 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.834453 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:51Z","lastTransitionTime":"2026-03-09T18:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.936425 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.936464 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.936475 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.936492 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:51 crc kubenswrapper[4821]: I0309 18:25:51.936504 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:51Z","lastTransitionTime":"2026-03-09T18:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.038069 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.038098 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.038109 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.038122 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.038132 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:52Z","lastTransitionTime":"2026-03-09T18:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.140142 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.140178 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.140192 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.140207 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.140218 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:52Z","lastTransitionTime":"2026-03-09T18:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.243512 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.243556 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.243572 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.243594 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.243610 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:52Z","lastTransitionTime":"2026-03-09T18:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.346597 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.346654 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.346671 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.346695 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.346711 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:52Z","lastTransitionTime":"2026-03-09T18:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.449513 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.449616 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.449633 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.449655 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.449672 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:52Z","lastTransitionTime":"2026-03-09T18:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.552003 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.552095 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.552122 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.552155 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.552179 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:52Z","lastTransitionTime":"2026-03-09T18:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.655597 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.655741 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.655772 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.655820 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.655850 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:52Z","lastTransitionTime":"2026-03-09T18:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.757897 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.757941 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.757953 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.757969 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.757981 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:52Z","lastTransitionTime":"2026-03-09T18:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.860481 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.860533 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.860545 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.860563 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.860579 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:52Z","lastTransitionTime":"2026-03-09T18:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.963576 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.963629 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.963642 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.963663 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:52 crc kubenswrapper[4821]: I0309 18:25:52.963676 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:52Z","lastTransitionTime":"2026-03-09T18:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.066403 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.066447 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.066456 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.066470 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.066479 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:53Z","lastTransitionTime":"2026-03-09T18:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.168742 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.168802 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.168819 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.168843 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.168860 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:53Z","lastTransitionTime":"2026-03-09T18:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.271844 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.271893 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.271903 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.271915 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.271926 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:53Z","lastTransitionTime":"2026-03-09T18:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.374911 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.374979 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.375003 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.375032 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.375054 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:53Z","lastTransitionTime":"2026-03-09T18:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.477276 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.477309 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.477364 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.477379 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.477390 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:53Z","lastTransitionTime":"2026-03-09T18:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.550563 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:25:53 crc kubenswrapper[4821]: E0309 18:25:53.551364 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.551439 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.551480 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:25:53 crc kubenswrapper[4821]: E0309 18:25:53.551629 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 09 18:25:53 crc kubenswrapper[4821]: E0309 18:25:53.551800 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.567639 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.578898 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.580097 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.580155 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.580172 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 
18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.580195 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.580213 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:53Z","lastTransitionTime":"2026-03-09T18:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.590910 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.614582 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55fc2290-6300-4f7d-98d7-8abdde521a83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:25:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0309 18:25:24.128997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0309 18:25:24.129108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0309 18:25:24.129664 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2492006033/tls.crt::/tmp/serving-cert-2492006033/tls.key\\\\\\\"\\\\nI0309 18:25:24.405897 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0309 18:25:24.410101 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0309 18:25:24.410116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0309 18:25:24.410139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0309 18:25:24.410148 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0309 18:25:24.421807 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0309 18:25:24.421854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421867 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0309 18:25:24.421885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0309 18:25:24.421866 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0309 18:25:24.421891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0309 18:25:24.421927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0309 18:25:24.422998 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.634469 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.650362 4821 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.663864 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.683390 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.683443 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.683461 4821 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.683489 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.683509 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:53Z","lastTransitionTime":"2026-03-09T18:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.785793 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.785847 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.785863 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.785884 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.785902 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:53Z","lastTransitionTime":"2026-03-09T18:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.869260 4821 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.887954 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.887994 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.888005 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.888022 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.888034 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:53Z","lastTransitionTime":"2026-03-09T18:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.990573 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.990622 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.990637 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.990660 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:53 crc kubenswrapper[4821]: I0309 18:25:53.990677 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:53Z","lastTransitionTime":"2026-03-09T18:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.093213 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.093288 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.093307 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.093363 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.093384 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:54Z","lastTransitionTime":"2026-03-09T18:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.196700 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.196750 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.196762 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.196780 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.196790 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:54Z","lastTransitionTime":"2026-03-09T18:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.299844 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.299900 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.299912 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.299929 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.300349 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:54Z","lastTransitionTime":"2026-03-09T18:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.402758 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.402808 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.402824 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.402844 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.402859 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:54Z","lastTransitionTime":"2026-03-09T18:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.506116 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.506176 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.506194 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.506221 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.506240 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:54Z","lastTransitionTime":"2026-03-09T18:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.609602 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.609683 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.609708 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.609739 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.609762 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:54Z","lastTransitionTime":"2026-03-09T18:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.712890 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.712947 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.712963 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.712986 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.713004 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:54Z","lastTransitionTime":"2026-03-09T18:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.815731 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.815765 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.815775 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.815788 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.815798 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:54Z","lastTransitionTime":"2026-03-09T18:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.918248 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.918315 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.918373 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.918401 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:54 crc kubenswrapper[4821]: I0309 18:25:54.918423 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:54Z","lastTransitionTime":"2026-03-09T18:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.020949 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.021016 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.021038 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.021060 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.021077 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:55Z","lastTransitionTime":"2026-03-09T18:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.124147 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.124203 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.124221 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.124244 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.124267 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:55Z","lastTransitionTime":"2026-03-09T18:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.226175 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.226219 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.226230 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.226245 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.226255 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:55Z","lastTransitionTime":"2026-03-09T18:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.329472 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.329551 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.329576 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.329601 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.329619 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:55Z","lastTransitionTime":"2026-03-09T18:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.431849 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.431900 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.431923 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.431950 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.432036 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:55Z","lastTransitionTime":"2026-03-09T18:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.534849 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.534920 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.534939 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.534965 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.534986 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:55Z","lastTransitionTime":"2026-03-09T18:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.551492 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:25:55 crc kubenswrapper[4821]: E0309 18:25:55.551628 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.551492 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:25:55 crc kubenswrapper[4821]: E0309 18:25:55.551797 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.551946 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:25:55 crc kubenswrapper[4821]: E0309 18:25:55.552031 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.638052 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.638090 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.638104 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.638119 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.638130 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:55Z","lastTransitionTime":"2026-03-09T18:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.752578 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.752607 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.752622 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.752635 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.752646 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:55Z","lastTransitionTime":"2026-03-09T18:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.854960 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.855011 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.855034 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.855059 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.855080 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:55Z","lastTransitionTime":"2026-03-09T18:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.957699 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.957742 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.957754 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.957769 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:55 crc kubenswrapper[4821]: I0309 18:25:55.957782 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:55Z","lastTransitionTime":"2026-03-09T18:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.060562 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.060609 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.060625 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.060646 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.060659 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:56Z","lastTransitionTime":"2026-03-09T18:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.162951 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.163588 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.163632 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.163654 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.163669 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:56Z","lastTransitionTime":"2026-03-09T18:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.266037 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.266114 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.266147 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.266177 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.266197 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:56Z","lastTransitionTime":"2026-03-09T18:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.368811 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.368860 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.368875 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.368897 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.368914 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:56Z","lastTransitionTime":"2026-03-09T18:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.471121 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.471190 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.471269 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.471301 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.471391 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:56Z","lastTransitionTime":"2026-03-09T18:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:56 crc kubenswrapper[4821]: E0309 18:25:56.552604 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:25:56 crc kubenswrapper[4821]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 09 18:25:56 crc kubenswrapper[4821]: set -o allexport Mar 09 18:25:56 crc kubenswrapper[4821]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 09 18:25:56 crc kubenswrapper[4821]: source /etc/kubernetes/apiserver-url.env Mar 09 18:25:56 crc kubenswrapper[4821]: else Mar 09 18:25:56 crc kubenswrapper[4821]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 09 18:25:56 crc kubenswrapper[4821]: exit 1 Mar 09 18:25:56 crc kubenswrapper[4821]: fi Mar 09 18:25:56 crc kubenswrapper[4821]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 09 18:25:56 crc kubenswrapper[4821]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:25:56 crc kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:25:56 crc kubenswrapper[4821]: E0309 18:25:56.553788 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.573669 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 
18:25:56.573706 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.573717 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.573732 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.573745 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:56Z","lastTransitionTime":"2026-03-09T18:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.680299 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.681275 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.681348 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.681375 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.681394 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:56Z","lastTransitionTime":"2026-03-09T18:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.784785 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.784844 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.784861 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.784884 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.784901 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:56Z","lastTransitionTime":"2026-03-09T18:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.887438 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.887511 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.887525 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.887543 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.887579 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:56Z","lastTransitionTime":"2026-03-09T18:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.990268 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.990378 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.990402 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.990429 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:56 crc kubenswrapper[4821]: I0309 18:25:56.990450 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:56Z","lastTransitionTime":"2026-03-09T18:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.093032 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.093088 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.093106 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.093129 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.093148 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:57Z","lastTransitionTime":"2026-03-09T18:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.195575 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.195614 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.195625 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.195641 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.195653 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:57Z","lastTransitionTime":"2026-03-09T18:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.297875 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.297931 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.297947 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.297970 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.297988 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:57Z","lastTransitionTime":"2026-03-09T18:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.400986 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.401020 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.401054 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.401071 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.401081 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:57Z","lastTransitionTime":"2026-03-09T18:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.503236 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.503286 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.503298 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.503312 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.503360 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:57Z","lastTransitionTime":"2026-03-09T18:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.550890 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.550916 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.551232 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:25:57 crc kubenswrapper[4821]: E0309 18:25:57.551392 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 09 18:25:57 crc kubenswrapper[4821]: E0309 18:25:57.551615 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 09 18:25:57 crc kubenswrapper[4821]: E0309 18:25:57.551740 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.551863 4821 scope.go:117] "RemoveContainer" containerID="119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5" Mar 09 18:25:57 crc kubenswrapper[4821]: E0309 18:25:57.552120 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 09 18:25:57 crc kubenswrapper[4821]: E0309 18:25:57.554405 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:25:57 crc kubenswrapper[4821]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 09 18:25:57 crc kubenswrapper[4821]: if [[ -f "/env/_master" ]]; then Mar 09 18:25:57 crc kubenswrapper[4821]: set -o allexport Mar 09 18:25:57 crc kubenswrapper[4821]: source "/env/_master" Mar 09 18:25:57 crc kubenswrapper[4821]: set +o allexport Mar 09 18:25:57 crc kubenswrapper[4821]: fi Mar 09 18:25:57 crc kubenswrapper[4821]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Mar 09 18:25:57 crc kubenswrapper[4821]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 09 18:25:57 crc kubenswrapper[4821]: ho_enable="--enable-hybrid-overlay" Mar 09 18:25:57 crc kubenswrapper[4821]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 09 18:25:57 crc kubenswrapper[4821]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 09 18:25:57 crc kubenswrapper[4821]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 09 18:25:57 crc kubenswrapper[4821]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 09 18:25:57 crc kubenswrapper[4821]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 09 18:25:57 crc kubenswrapper[4821]: --webhook-host=127.0.0.1 \ Mar 09 18:25:57 crc kubenswrapper[4821]: --webhook-port=9743 \ Mar 09 18:25:57 crc kubenswrapper[4821]: ${ho_enable} \ Mar 09 18:25:57 crc kubenswrapper[4821]: --enable-interconnect \ Mar 09 18:25:57 crc kubenswrapper[4821]: --disable-approver \ Mar 09 18:25:57 crc kubenswrapper[4821]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 09 18:25:57 crc kubenswrapper[4821]: --wait-for-kubernetes-api=200s \ Mar 09 18:25:57 crc kubenswrapper[4821]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 09 18:25:57 crc kubenswrapper[4821]: --loglevel="${LOGLEVEL}" Mar 09 18:25:57 crc kubenswrapper[4821]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:25:57 crc kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:25:57 crc kubenswrapper[4821]: E0309 18:25:57.556572 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:25:57 crc kubenswrapper[4821]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 09 18:25:57 crc 
kubenswrapper[4821]: if [[ -f "/env/_master" ]]; then Mar 09 18:25:57 crc kubenswrapper[4821]: set -o allexport Mar 09 18:25:57 crc kubenswrapper[4821]: source "/env/_master" Mar 09 18:25:57 crc kubenswrapper[4821]: set +o allexport Mar 09 18:25:57 crc kubenswrapper[4821]: fi Mar 09 18:25:57 crc kubenswrapper[4821]: Mar 09 18:25:57 crc kubenswrapper[4821]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 09 18:25:57 crc kubenswrapper[4821]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 09 18:25:57 crc kubenswrapper[4821]: --disable-webhook \ Mar 09 18:25:57 crc kubenswrapper[4821]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 09 18:25:57 crc kubenswrapper[4821]: --loglevel="${LOGLEVEL}" Mar 09 18:25:57 crc kubenswrapper[4821]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:25:57 crc kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:25:57 crc kubenswrapper[4821]: E0309 18:25:57.557683 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.606026 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.606090 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.606109 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.606134 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.606151 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:57Z","lastTransitionTime":"2026-03-09T18:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.709977 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.710086 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.710105 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.710169 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.710187 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:57Z","lastTransitionTime":"2026-03-09T18:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.813418 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.813462 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.813478 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.813501 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.813517 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:57Z","lastTransitionTime":"2026-03-09T18:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.916203 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.916245 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.916260 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.916281 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:57 crc kubenswrapper[4821]: I0309 18:25:57.916299 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:57Z","lastTransitionTime":"2026-03-09T18:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.019143 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.019196 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.019212 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.019232 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.019248 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:58Z","lastTransitionTime":"2026-03-09T18:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.122423 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.122487 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.122503 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.122535 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.122552 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:58Z","lastTransitionTime":"2026-03-09T18:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.225115 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.225174 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.225191 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.225214 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.225232 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:58Z","lastTransitionTime":"2026-03-09T18:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.328548 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.328595 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.328612 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.328634 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.328651 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:58Z","lastTransitionTime":"2026-03-09T18:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.431556 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.431597 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.431609 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.431623 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.431636 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:58Z","lastTransitionTime":"2026-03-09T18:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.534627 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.534701 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.534724 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.534755 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.534776 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:58Z","lastTransitionTime":"2026-03-09T18:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:58 crc kubenswrapper[4821]: E0309 18:25:58.552655 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},Re
startPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 09 18:25:58 crc kubenswrapper[4821]: E0309 18:25:58.553899 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.637449 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.637508 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.637526 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.637551 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.637574 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:58Z","lastTransitionTime":"2026-03-09T18:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.740312 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.740396 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.740413 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.740442 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.740459 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:58Z","lastTransitionTime":"2026-03-09T18:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.844046 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.844466 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.844672 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.844823 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.844982 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:58Z","lastTransitionTime":"2026-03-09T18:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.947773 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.947851 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.947876 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.947905 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:58 crc kubenswrapper[4821]: I0309 18:25:58.947929 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:58Z","lastTransitionTime":"2026-03-09T18:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.050000 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.050042 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.050062 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.050093 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.050115 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:59Z","lastTransitionTime":"2026-03-09T18:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.153260 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.153301 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.153312 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.153344 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.153355 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:59Z","lastTransitionTime":"2026-03-09T18:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.222477 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-n9tvt"] Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.222889 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-n9tvt" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.226892 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.227071 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.227194 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.245684 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.261805 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.261858 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.261870 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.261887 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.261898 4821 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:59Z","lastTransitionTime":"2026-03-09T18:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.264479 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.275929 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.288255 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55fc2290-6300-4f7d-98d7-8abdde521a83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:25:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0309 18:25:24.128997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0309 18:25:24.129108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0309 18:25:24.129664 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2492006033/tls.crt::/tmp/serving-cert-2492006033/tls.key\\\\\\\"\\\\nI0309 18:25:24.405897 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0309 18:25:24.410101 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0309 18:25:24.410116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0309 18:25:24.410139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0309 18:25:24.410148 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0309 18:25:24.421807 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0309 18:25:24.421854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421867 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0309 18:25:24.421885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0309 18:25:24.421866 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0309 18:25:24.421891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0309 18:25:24.421927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0309 18:25:24.422998 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.299600 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.311262 4821 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.311791 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:25:59 crc kubenswrapper[4821]: E0309 18:25:59.311961 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:26:15.311928354 +0000 UTC m=+112.473304250 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.312101 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b53a5b8b-3dab-4300-8b7b-c3df20eab3b7-hosts-file\") pod \"node-resolver-n9tvt\" (UID: \"b53a5b8b-3dab-4300-8b7b-c3df20eab3b7\") " pod="openshift-dns/node-resolver-n9tvt" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.312150 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99m5n\" (UniqueName: \"kubernetes.io/projected/b53a5b8b-3dab-4300-8b7b-c3df20eab3b7-kube-api-access-99m5n\") pod \"node-resolver-n9tvt\" (UID: \"b53a5b8b-3dab-4300-8b7b-c3df20eab3b7\") " pod="openshift-dns/node-resolver-n9tvt" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.324058 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.332399 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9tvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b53a5b8b-3dab-4300-8b7b-c3df20eab3b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99m5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9tvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.364895 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.364969 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.364980 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.364995 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 
18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.365007 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:59Z","lastTransitionTime":"2026-03-09T18:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.413087 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.413137 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.413169 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.413197 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.413223 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b53a5b8b-3dab-4300-8b7b-c3df20eab3b7-hosts-file\") pod \"node-resolver-n9tvt\" (UID: \"b53a5b8b-3dab-4300-8b7b-c3df20eab3b7\") " pod="openshift-dns/node-resolver-n9tvt" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.413244 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99m5n\" (UniqueName: \"kubernetes.io/projected/b53a5b8b-3dab-4300-8b7b-c3df20eab3b7-kube-api-access-99m5n\") pod \"node-resolver-n9tvt\" (UID: \"b53a5b8b-3dab-4300-8b7b-c3df20eab3b7\") " pod="openshift-dns/node-resolver-n9tvt" Mar 09 18:25:59 crc kubenswrapper[4821]: E0309 18:25:59.413304 4821 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 09 18:25:59 crc kubenswrapper[4821]: E0309 18:25:59.413346 4821 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 09 18:25:59 crc kubenswrapper[4821]: E0309 18:25:59.413376 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 09 18:25:59 crc kubenswrapper[4821]: E0309 18:25:59.413397 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 09 18:25:59 crc 
kubenswrapper[4821]: E0309 18:25:59.413451 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 09 18:25:59 crc kubenswrapper[4821]: E0309 18:25:59.413408 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 09 18:25:59 crc kubenswrapper[4821]: E0309 18:25:59.413470 4821 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 09 18:25:59 crc kubenswrapper[4821]: E0309 18:25:59.413488 4821 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 09 18:25:59 crc kubenswrapper[4821]: E0309 18:25:59.413383 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-09 18:26:15.413365906 +0000 UTC m=+112.574741762 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.413462 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b53a5b8b-3dab-4300-8b7b-c3df20eab3b7-hosts-file\") pod \"node-resolver-n9tvt\" (UID: \"b53a5b8b-3dab-4300-8b7b-c3df20eab3b7\") " pod="openshift-dns/node-resolver-n9tvt" Mar 09 18:25:59 crc kubenswrapper[4821]: E0309 18:25:59.413533 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-09 18:26:15.413522581 +0000 UTC m=+112.574898437 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 09 18:25:59 crc kubenswrapper[4821]: E0309 18:25:59.413551 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-09 18:26:15.413543092 +0000 UTC m=+112.574918948 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 09 18:25:59 crc kubenswrapper[4821]: E0309 18:25:59.413592 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-09 18:26:15.413560822 +0000 UTC m=+112.574936678 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.435248 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99m5n\" (UniqueName: \"kubernetes.io/projected/b53a5b8b-3dab-4300-8b7b-c3df20eab3b7-kube-api-access-99m5n\") pod \"node-resolver-n9tvt\" (UID: \"b53a5b8b-3dab-4300-8b7b-c3df20eab3b7\") " pod="openshift-dns/node-resolver-n9tvt" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.467540 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.467602 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.467620 4821 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.467645 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.467663 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:59Z","lastTransitionTime":"2026-03-09T18:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.542768 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-n9tvt" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.550863 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.550943 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:25:59 crc kubenswrapper[4821]: E0309 18:25:59.550994 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 09 18:25:59 crc kubenswrapper[4821]: E0309 18:25:59.551100 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.551196 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:25:59 crc kubenswrapper[4821]: E0309 18:25:59.551295 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 09 18:25:59 crc kubenswrapper[4821]: W0309 18:25:59.562337 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb53a5b8b_3dab_4300_8b7b_c3df20eab3b7.slice/crio-6c15988f81eb79d59216820b891a55395e86f44db90e817937b0196cadc83fa0 WatchSource:0}: Error finding container 6c15988f81eb79d59216820b891a55395e86f44db90e817937b0196cadc83fa0: Status 404 returned error can't find the container with id 6c15988f81eb79d59216820b891a55395e86f44db90e817937b0196cadc83fa0 Mar 09 18:25:59 crc kubenswrapper[4821]: E0309 18:25:59.564853 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:25:59 crc kubenswrapper[4821]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 09 18:25:59 crc kubenswrapper[4821]: set -uo pipefail Mar 09 18:25:59 crc kubenswrapper[4821]: Mar 09 18:25:59 crc kubenswrapper[4821]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 09 18:25:59 crc kubenswrapper[4821]: Mar 09 18:25:59 crc kubenswrapper[4821]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 09 18:25:59 crc kubenswrapper[4821]: HOSTS_FILE="/etc/hosts" Mar 09 18:25:59 crc kubenswrapper[4821]: TEMP_FILE="/etc/hosts.tmp" Mar 09 18:25:59 crc kubenswrapper[4821]: Mar 09 18:25:59 crc kubenswrapper[4821]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 09 18:25:59 crc kubenswrapper[4821]: Mar 09 18:25:59 crc kubenswrapper[4821]: # Make a temporary file with the old hosts file's attributes. Mar 09 18:25:59 crc kubenswrapper[4821]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 09 18:25:59 crc kubenswrapper[4821]: echo "Failed to preserve hosts file. Exiting." 
Mar 09 18:25:59 crc kubenswrapper[4821]: exit 1 Mar 09 18:25:59 crc kubenswrapper[4821]: fi Mar 09 18:25:59 crc kubenswrapper[4821]: Mar 09 18:25:59 crc kubenswrapper[4821]: while true; do Mar 09 18:25:59 crc kubenswrapper[4821]: declare -A svc_ips Mar 09 18:25:59 crc kubenswrapper[4821]: for svc in "${services[@]}"; do Mar 09 18:25:59 crc kubenswrapper[4821]: # Fetch service IP from cluster dns if present. We make several tries Mar 09 18:25:59 crc kubenswrapper[4821]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Mar 09 18:25:59 crc kubenswrapper[4821]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 09 18:25:59 crc kubenswrapper[4821]: # support UDP loadbalancers and require reaching DNS through TCP. Mar 09 18:25:59 crc kubenswrapper[4821]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 09 18:25:59 crc kubenswrapper[4821]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 09 18:25:59 crc kubenswrapper[4821]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 09 18:25:59 crc kubenswrapper[4821]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 09 18:25:59 crc kubenswrapper[4821]: for i in ${!cmds[*]} Mar 09 18:25:59 crc kubenswrapper[4821]: do Mar 09 18:25:59 crc kubenswrapper[4821]: ips=($(eval "${cmds[i]}")) Mar 09 18:25:59 crc kubenswrapper[4821]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 09 18:25:59 crc kubenswrapper[4821]: svc_ips["${svc}"]="${ips[@]}" Mar 09 18:25:59 crc kubenswrapper[4821]: break Mar 09 18:25:59 crc kubenswrapper[4821]: fi Mar 09 18:25:59 crc kubenswrapper[4821]: done Mar 09 18:25:59 crc kubenswrapper[4821]: done Mar 09 18:25:59 crc kubenswrapper[4821]: Mar 09 18:25:59 crc kubenswrapper[4821]: # Update /etc/hosts only if we get valid service IPs Mar 09 18:25:59 crc kubenswrapper[4821]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 09 18:25:59 crc kubenswrapper[4821]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 09 18:25:59 crc kubenswrapper[4821]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 09 18:25:59 crc kubenswrapper[4821]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 09 18:25:59 crc kubenswrapper[4821]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 09 18:25:59 crc kubenswrapper[4821]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 09 18:25:59 crc kubenswrapper[4821]: sleep 60 & wait Mar 09 18:25:59 crc kubenswrapper[4821]: continue Mar 09 18:25:59 crc kubenswrapper[4821]: fi Mar 09 18:25:59 crc kubenswrapper[4821]: Mar 09 18:25:59 crc kubenswrapper[4821]: # Append resolver entries for services Mar 09 18:25:59 crc kubenswrapper[4821]: rc=0 Mar 09 18:25:59 crc kubenswrapper[4821]: for svc in "${!svc_ips[@]}"; do Mar 09 18:25:59 crc kubenswrapper[4821]: for ip in ${svc_ips[${svc}]}; do Mar 09 18:25:59 crc kubenswrapper[4821]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Mar 09 18:25:59 crc kubenswrapper[4821]: done Mar 09 18:25:59 crc kubenswrapper[4821]: done Mar 09 18:25:59 crc kubenswrapper[4821]: if [[ $rc -ne 0 ]]; then Mar 09 18:25:59 crc kubenswrapper[4821]: sleep 60 & wait Mar 09 18:25:59 crc kubenswrapper[4821]: continue Mar 09 18:25:59 crc kubenswrapper[4821]: fi Mar 09 18:25:59 crc kubenswrapper[4821]: Mar 09 18:25:59 crc kubenswrapper[4821]: Mar 09 18:25:59 crc kubenswrapper[4821]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 09 18:25:59 crc kubenswrapper[4821]: # Replace /etc/hosts with our modified version if needed Mar 09 18:25:59 crc kubenswrapper[4821]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 09 18:25:59 crc kubenswrapper[4821]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 09 18:25:59 crc kubenswrapper[4821]: fi Mar 09 18:25:59 crc kubenswrapper[4821]: sleep 60 & wait Mar 09 18:25:59 crc kubenswrapper[4821]: unset svc_ips Mar 09 18:25:59 crc kubenswrapper[4821]: done Mar 09 18:25:59 crc kubenswrapper[4821]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-99m5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-n9tvt_openshift-dns(b53a5b8b-3dab-4300-8b7b-c3df20eab3b7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:25:59 crc kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:25:59 crc kubenswrapper[4821]: E0309 18:25:59.566008 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-n9tvt" podUID="b53a5b8b-3dab-4300-8b7b-c3df20eab3b7" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.569914 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.569939 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:59 crc 
kubenswrapper[4821]: I0309 18:25:59.569947 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.569960 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.569972 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:59Z","lastTransitionTime":"2026-03-09T18:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.583770 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-lw2hk"] Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.584022 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.586424 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.586512 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.586717 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.587549 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.588401 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.592113 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-kk7gs"] Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.592389 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-b9gd4"] Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.592531 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.593397 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.596348 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.596409 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.596361 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.596706 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.596748 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.597634 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.598010 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.604125 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.613898 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.615020 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-multus-socket-dir-parent\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.615046 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-system-cni-dir\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.615063 4821 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jqk4\" (UniqueName: \"kubernetes.io/projected/3270571a-a484-4e66-8035-f43509b58add-kube-api-access-6jqk4\") pod \"machine-config-daemon-kk7gs\" (UID: \"3270571a-a484-4e66-8035-f43509b58add\") " pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.615080 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/84199f52-999d-4a44-91c7-a343ba59b10d-cnibin\") pod \"multus-additional-cni-plugins-b9gd4\" (UID: \"84199f52-999d-4a44-91c7-a343ba59b10d\") " pod="openshift-multus/multus-additional-cni-plugins-b9gd4" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.615097 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/84199f52-999d-4a44-91c7-a343ba59b10d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-b9gd4\" (UID: \"84199f52-999d-4a44-91c7-a343ba59b10d\") " pod="openshift-multus/multus-additional-cni-plugins-b9gd4" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.615115 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-multus-cni-dir\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.615131 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/84199f52-999d-4a44-91c7-a343ba59b10d-cni-binary-copy\") pod \"multus-additional-cni-plugins-b9gd4\" (UID: \"84199f52-999d-4a44-91c7-a343ba59b10d\") " 
pod="openshift-multus/multus-additional-cni-plugins-b9gd4" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.615146 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xxjq\" (UniqueName: \"kubernetes.io/projected/84199f52-999d-4a44-91c7-a343ba59b10d-kube-api-access-4xxjq\") pod \"multus-additional-cni-plugins-b9gd4\" (UID: \"84199f52-999d-4a44-91c7-a343ba59b10d\") " pod="openshift-multus/multus-additional-cni-plugins-b9gd4" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.615163 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-etc-kubernetes\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.615178 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3270571a-a484-4e66-8035-f43509b58add-proxy-tls\") pod \"machine-config-daemon-kk7gs\" (UID: \"3270571a-a484-4e66-8035-f43509b58add\") " pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.615194 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/84199f52-999d-4a44-91c7-a343ba59b10d-system-cni-dir\") pod \"multus-additional-cni-plugins-b9gd4\" (UID: \"84199f52-999d-4a44-91c7-a343ba59b10d\") " pod="openshift-multus/multus-additional-cni-plugins-b9gd4" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.615244 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-host-var-lib-cni-multus\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.615270 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-cnibin\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.615292 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-host-var-lib-cni-bin\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.615308 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/84199f52-999d-4a44-91c7-a343ba59b10d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-b9gd4\" (UID: \"84199f52-999d-4a44-91c7-a343ba59b10d\") " pod="openshift-multus/multus-additional-cni-plugins-b9gd4" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.615354 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1a255bc9-2034-4a34-8240-f1fd42e808bd-multus-daemon-config\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.615380 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-host-var-lib-kubelet\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.615394 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3270571a-a484-4e66-8035-f43509b58add-rootfs\") pod \"machine-config-daemon-kk7gs\" (UID: \"3270571a-a484-4e66-8035-f43509b58add\") " pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.615410 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-os-release\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.615469 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-hostroot\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.615486 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-multus-conf-dir\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.615500 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9r74\" (UniqueName: 
\"kubernetes.io/projected/1a255bc9-2034-4a34-8240-f1fd42e808bd-kube-api-access-z9r74\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.615541 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-host-run-multus-certs\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.615556 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-host-run-k8s-cni-cncf-io\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.615570 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-host-run-netns\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.615585 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/84199f52-999d-4a44-91c7-a343ba59b10d-os-release\") pod \"multus-additional-cni-plugins-b9gd4\" (UID: \"84199f52-999d-4a44-91c7-a343ba59b10d\") " pod="openshift-multus/multus-additional-cni-plugins-b9gd4" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.615605 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/1a255bc9-2034-4a34-8240-f1fd42e808bd-cni-binary-copy\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.615620 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3270571a-a484-4e66-8035-f43509b58add-mcd-auth-proxy-config\") pod \"machine-config-daemon-kk7gs\" (UID: \"3270571a-a484-4e66-8035-f43509b58add\") " pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.622458 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.629950 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.641369 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55fc2290-6300-4f7d-98d7-8abdde521a83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:25:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0309 18:25:24.128997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0309 18:25:24.129108 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0309 18:25:24.129664 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2492006033/tls.crt::/tmp/serving-cert-2492006033/tls.key\\\\\\\"\\\\nI0309 18:25:24.405897 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0309 18:25:24.410101 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0309 18:25:24.410116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0309 18:25:24.410139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0309 18:25:24.410148 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0309 18:25:24.421807 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0309 18:25:24.421854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421867 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0309 18:25:24.421885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0309 18:25:24.421866 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0309 18:25:24.421891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0309 18:25:24.421927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0309 18:25:24.422998 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349
109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.649960 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9tvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b53a5b8b-3dab-4300-8b7b-c3df20eab3b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with 
unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99m5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9tvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.657379 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lw2hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a255bc9-2034-4a34-8240-f1fd42e808bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9r74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lw2hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.668501 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.672339 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.672368 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.672383 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.672399 4821 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.672408 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:59Z","lastTransitionTime":"2026-03-09T18:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.680299 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.688872 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.697176 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.703605 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9tvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b53a5b8b-3dab-4300-8b7b-c3df20eab3b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99m5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9tvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.711911 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55fc2290-6300-4f7d-98d7-8abdde521a83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:25:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0309 18:25:24.128997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0309 18:25:24.129108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0309 18:25:24.129664 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2492006033/tls.crt::/tmp/serving-cert-2492006033/tls.key\\\\\\\"\\\\nI0309 18:25:24.405897 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0309 18:25:24.410101 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0309 18:25:24.410116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0309 18:25:24.410139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0309 18:25:24.410148 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0309 18:25:24.421807 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0309 18:25:24.421854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421867 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0309 18:25:24.421885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0309 18:25:24.421866 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0309 18:25:24.421891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0309 18:25:24.421927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0309 18:25:24.422998 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.716843 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/84199f52-999d-4a44-91c7-a343ba59b10d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-b9gd4\" (UID: \"84199f52-999d-4a44-91c7-a343ba59b10d\") " pod="openshift-multus/multus-additional-cni-plugins-b9gd4" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.716888 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-system-cni-dir\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 
18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.716916 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jqk4\" (UniqueName: \"kubernetes.io/projected/3270571a-a484-4e66-8035-f43509b58add-kube-api-access-6jqk4\") pod \"machine-config-daemon-kk7gs\" (UID: \"3270571a-a484-4e66-8035-f43509b58add\") " pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.716937 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/84199f52-999d-4a44-91c7-a343ba59b10d-cnibin\") pod \"multus-additional-cni-plugins-b9gd4\" (UID: \"84199f52-999d-4a44-91c7-a343ba59b10d\") " pod="openshift-multus/multus-additional-cni-plugins-b9gd4" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.716957 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-multus-cni-dir\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.716978 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/84199f52-999d-4a44-91c7-a343ba59b10d-cni-binary-copy\") pod \"multus-additional-cni-plugins-b9gd4\" (UID: \"84199f52-999d-4a44-91c7-a343ba59b10d\") " pod="openshift-multus/multus-additional-cni-plugins-b9gd4" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.717012 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xxjq\" (UniqueName: \"kubernetes.io/projected/84199f52-999d-4a44-91c7-a343ba59b10d-kube-api-access-4xxjq\") pod \"multus-additional-cni-plugins-b9gd4\" (UID: \"84199f52-999d-4a44-91c7-a343ba59b10d\") " 
pod="openshift-multus/multus-additional-cni-plugins-b9gd4" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.717032 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-etc-kubernetes\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.717053 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3270571a-a484-4e66-8035-f43509b58add-proxy-tls\") pod \"machine-config-daemon-kk7gs\" (UID: \"3270571a-a484-4e66-8035-f43509b58add\") " pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.717085 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/84199f52-999d-4a44-91c7-a343ba59b10d-system-cni-dir\") pod \"multus-additional-cni-plugins-b9gd4\" (UID: \"84199f52-999d-4a44-91c7-a343ba59b10d\") " pod="openshift-multus/multus-additional-cni-plugins-b9gd4" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.717110 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-host-var-lib-cni-multus\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.717130 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/84199f52-999d-4a44-91c7-a343ba59b10d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-b9gd4\" (UID: \"84199f52-999d-4a44-91c7-a343ba59b10d\") " 
pod="openshift-multus/multus-additional-cni-plugins-b9gd4" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.717149 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-cnibin\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.717169 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-host-var-lib-cni-bin\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.717204 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1a255bc9-2034-4a34-8240-f1fd42e808bd-multus-daemon-config\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.717225 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3270571a-a484-4e66-8035-f43509b58add-rootfs\") pod \"machine-config-daemon-kk7gs\" (UID: \"3270571a-a484-4e66-8035-f43509b58add\") " pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.717245 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-host-var-lib-kubelet\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.717264 4821 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-multus-conf-dir\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.717308 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9r74\" (UniqueName: \"kubernetes.io/projected/1a255bc9-2034-4a34-8240-f1fd42e808bd-kube-api-access-z9r74\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.717359 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-os-release\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.717379 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-hostroot\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.717408 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-host-run-multus-certs\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.717431 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-host-run-k8s-cni-cncf-io\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.717454 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-host-run-netns\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.717476 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/84199f52-999d-4a44-91c7-a343ba59b10d-os-release\") pod \"multus-additional-cni-plugins-b9gd4\" (UID: \"84199f52-999d-4a44-91c7-a343ba59b10d\") " pod="openshift-multus/multus-additional-cni-plugins-b9gd4" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.717495 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1a255bc9-2034-4a34-8240-f1fd42e808bd-cni-binary-copy\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.717517 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3270571a-a484-4e66-8035-f43509b58add-mcd-auth-proxy-config\") pod \"machine-config-daemon-kk7gs\" (UID: \"3270571a-a484-4e66-8035-f43509b58add\") " pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.717542 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-multus-socket-dir-parent\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.717550 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/84199f52-999d-4a44-91c7-a343ba59b10d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-b9gd4\" (UID: \"84199f52-999d-4a44-91c7-a343ba59b10d\") " pod="openshift-multus/multus-additional-cni-plugins-b9gd4" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.717616 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-host-var-lib-cni-bin\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.717648 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-multus-socket-dir-parent\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.717664 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-system-cni-dir\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.717905 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-host-run-netns\") pod \"multus-lw2hk\" (UID: 
\"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.717958 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-etc-kubernetes\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.718078 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/84199f52-999d-4a44-91c7-a343ba59b10d-cnibin\") pod \"multus-additional-cni-plugins-b9gd4\" (UID: \"84199f52-999d-4a44-91c7-a343ba59b10d\") " pod="openshift-multus/multus-additional-cni-plugins-b9gd4" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.718090 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-host-var-lib-cni-multus\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.718195 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-host-run-multus-certs\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.718161 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3270571a-a484-4e66-8035-f43509b58add-rootfs\") pod \"machine-config-daemon-kk7gs\" (UID: \"3270571a-a484-4e66-8035-f43509b58add\") " pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 
18:25:59.718128 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-multus-conf-dir\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.718212 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-host-var-lib-kubelet\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.718348 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-host-run-k8s-cni-cncf-io\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.718483 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1a255bc9-2034-4a34-8240-f1fd42e808bd-multus-daemon-config\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.718490 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-multus-cni-dir\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.718544 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/1a255bc9-2034-4a34-8240-f1fd42e808bd-cni-binary-copy\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.718692 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-os-release\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.718723 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/84199f52-999d-4a44-91c7-a343ba59b10d-system-cni-dir\") pod \"multus-additional-cni-plugins-b9gd4\" (UID: \"84199f52-999d-4a44-91c7-a343ba59b10d\") " pod="openshift-multus/multus-additional-cni-plugins-b9gd4" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.717856 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-hostroot\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.718901 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/84199f52-999d-4a44-91c7-a343ba59b10d-os-release\") pod \"multus-additional-cni-plugins-b9gd4\" (UID: \"84199f52-999d-4a44-91c7-a343ba59b10d\") " pod="openshift-multus/multus-additional-cni-plugins-b9gd4" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.718908 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3270571a-a484-4e66-8035-f43509b58add-mcd-auth-proxy-config\") pod \"machine-config-daemon-kk7gs\" (UID: 
\"3270571a-a484-4e66-8035-f43509b58add\") " pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.718947 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1a255bc9-2034-4a34-8240-f1fd42e808bd-cnibin\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.719344 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/84199f52-999d-4a44-91c7-a343ba59b10d-cni-binary-copy\") pod \"multus-additional-cni-plugins-b9gd4\" (UID: \"84199f52-999d-4a44-91c7-a343ba59b10d\") " pod="openshift-multus/multus-additional-cni-plugins-b9gd4" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.720413 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/84199f52-999d-4a44-91c7-a343ba59b10d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-b9gd4\" (UID: \"84199f52-999d-4a44-91c7-a343ba59b10d\") " pod="openshift-multus/multus-additional-cni-plugins-b9gd4" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.722362 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3270571a-a484-4e66-8035-f43509b58add-proxy-tls\") pod \"machine-config-daemon-kk7gs\" (UID: \"3270571a-a484-4e66-8035-f43509b58add\") " pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.722894 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3270571a-a484-4e66-8035-f43509b58add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kk7gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.733814 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jqk4\" (UniqueName: \"kubernetes.io/projected/3270571a-a484-4e66-8035-f43509b58add-kube-api-access-6jqk4\") pod \"machine-config-daemon-kk7gs\" (UID: \"3270571a-a484-4e66-8035-f43509b58add\") " pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.735620 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9r74\" (UniqueName: \"kubernetes.io/projected/1a255bc9-2034-4a34-8240-f1fd42e808bd-kube-api-access-z9r74\") pod \"multus-lw2hk\" (UID: \"1a255bc9-2034-4a34-8240-f1fd42e808bd\") " pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.736065 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84199f52-999d-4a44-91c7-a343ba59b10d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b9gd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.741476 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xxjq\" (UniqueName: \"kubernetes.io/projected/84199f52-999d-4a44-91c7-a343ba59b10d-kube-api-access-4xxjq\") pod \"multus-additional-cni-plugins-b9gd4\" (UID: \"84199f52-999d-4a44-91c7-a343ba59b10d\") " pod="openshift-multus/multus-additional-cni-plugins-b9gd4" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.745740 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lw2hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a255bc9-2034-4a34-8240-f1fd42e808bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9r74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lw2hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.759932 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.770192 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.774700 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.774736 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.774746 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.774760 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.774770 4821 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:59Z","lastTransitionTime":"2026-03-09T18:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.782498 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.792539 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.877026 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.877180 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 
18:25:59.877244 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.877304 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.877379 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:59Z","lastTransitionTime":"2026-03-09T18:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.896491 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lw2hk" Mar 09 18:25:59 crc kubenswrapper[4821]: W0309 18:25:59.908234 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a255bc9_2034_4a34_8240_f1fd42e808bd.slice/crio-ae36e37d4290a45efd8bcee193900dd0273ef79be3236703c79fc709db199856 WatchSource:0}: Error finding container ae36e37d4290a45efd8bcee193900dd0273ef79be3236703c79fc709db199856: Status 404 returned error can't find the container with id ae36e37d4290a45efd8bcee193900dd0273ef79be3236703c79fc709db199856 Mar 09 18:25:59 crc kubenswrapper[4821]: E0309 18:25:59.910515 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:25:59 crc kubenswrapper[4821]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 09 18:25:59 crc kubenswrapper[4821]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon 
$MULTUS_DAEMON_OPT Mar 09 18:25:59 crc kubenswrapper[4821]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z9r74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-lw2hk_openshift-multus(1a255bc9-2034-4a34-8240-f1fd42e808bd): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:25:59 crc kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:25:59 crc kubenswrapper[4821]: E0309 18:25:59.911638 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-lw2hk" podUID="1a255bc9-2034-4a34-8240-f1fd42e808bd" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.912795 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.919236 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.924724 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-n9tvt" event={"ID":"b53a5b8b-3dab-4300-8b7b-c3df20eab3b7","Type":"ContainerStarted","Data":"6c15988f81eb79d59216820b891a55395e86f44db90e817937b0196cadc83fa0"} Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.926591 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lw2hk" event={"ID":"1a255bc9-2034-4a34-8240-f1fd42e808bd","Type":"ContainerStarted","Data":"ae36e37d4290a45efd8bcee193900dd0273ef79be3236703c79fc709db199856"} Mar 09 18:25:59 crc kubenswrapper[4821]: E0309 18:25:59.931971 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:25:59 crc kubenswrapper[4821]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 09 18:25:59 crc kubenswrapper[4821]: set -uo pipefail Mar 09 18:25:59 crc kubenswrapper[4821]: Mar 09 18:25:59 crc kubenswrapper[4821]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 09 18:25:59 crc kubenswrapper[4821]: Mar 09 18:25:59 crc kubenswrapper[4821]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 09 18:25:59 crc kubenswrapper[4821]: HOSTS_FILE="/etc/hosts" Mar 09 18:25:59 crc kubenswrapper[4821]: TEMP_FILE="/etc/hosts.tmp" Mar 09 18:25:59 crc kubenswrapper[4821]: Mar 09 18:25:59 crc kubenswrapper[4821]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 09 18:25:59 crc kubenswrapper[4821]: Mar 09 18:25:59 crc kubenswrapper[4821]: # Make a temporary file with the old hosts file's 
attributes. Mar 09 18:25:59 crc kubenswrapper[4821]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 09 18:25:59 crc kubenswrapper[4821]: echo "Failed to preserve hosts file. Exiting." Mar 09 18:25:59 crc kubenswrapper[4821]: exit 1 Mar 09 18:25:59 crc kubenswrapper[4821]: fi Mar 09 18:25:59 crc kubenswrapper[4821]: Mar 09 18:25:59 crc kubenswrapper[4821]: while true; do Mar 09 18:25:59 crc kubenswrapper[4821]: declare -A svc_ips Mar 09 18:25:59 crc kubenswrapper[4821]: for svc in "${services[@]}"; do Mar 09 18:25:59 crc kubenswrapper[4821]: # Fetch service IP from cluster dns if present. We make several tries Mar 09 18:25:59 crc kubenswrapper[4821]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Mar 09 18:25:59 crc kubenswrapper[4821]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 09 18:25:59 crc kubenswrapper[4821]: # support UDP loadbalancers and require reaching DNS through TCP. Mar 09 18:25:59 crc kubenswrapper[4821]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 09 18:25:59 crc kubenswrapper[4821]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 09 18:25:59 crc kubenswrapper[4821]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 09 18:25:59 crc kubenswrapper[4821]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 09 18:25:59 crc kubenswrapper[4821]: for i in ${!cmds[*]} Mar 09 18:25:59 crc kubenswrapper[4821]: do Mar 09 18:25:59 crc kubenswrapper[4821]: ips=($(eval "${cmds[i]}")) Mar 09 18:25:59 crc kubenswrapper[4821]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 09 18:25:59 crc kubenswrapper[4821]: svc_ips["${svc}"]="${ips[@]}" Mar 09 18:25:59 crc kubenswrapper[4821]: break Mar 09 18:25:59 crc kubenswrapper[4821]: fi Mar 09 18:25:59 crc kubenswrapper[4821]: done Mar 09 18:25:59 crc kubenswrapper[4821]: done Mar 09 18:25:59 crc kubenswrapper[4821]: Mar 09 18:25:59 crc kubenswrapper[4821]: # Update /etc/hosts only if we get valid service IPs Mar 09 18:25:59 crc kubenswrapper[4821]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 09 18:25:59 crc kubenswrapper[4821]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 09 18:25:59 crc kubenswrapper[4821]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 09 18:25:59 crc kubenswrapper[4821]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 09 18:25:59 crc kubenswrapper[4821]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 09 18:25:59 crc kubenswrapper[4821]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 09 18:25:59 crc kubenswrapper[4821]: sleep 60 & wait Mar 09 18:25:59 crc kubenswrapper[4821]: continue Mar 09 18:25:59 crc kubenswrapper[4821]: fi Mar 09 18:25:59 crc kubenswrapper[4821]: Mar 09 18:25:59 crc kubenswrapper[4821]: # Append resolver entries for services Mar 09 18:25:59 crc kubenswrapper[4821]: rc=0 Mar 09 18:25:59 crc kubenswrapper[4821]: for svc in "${!svc_ips[@]}"; do Mar 09 18:25:59 crc kubenswrapper[4821]: for ip in ${svc_ips[${svc}]}; do Mar 09 18:25:59 crc kubenswrapper[4821]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Mar 09 18:25:59 crc kubenswrapper[4821]: done Mar 09 18:25:59 crc kubenswrapper[4821]: done Mar 09 18:25:59 crc kubenswrapper[4821]: if [[ $rc -ne 0 ]]; then Mar 09 18:25:59 crc kubenswrapper[4821]: sleep 60 & wait Mar 09 18:25:59 crc kubenswrapper[4821]: continue Mar 09 18:25:59 crc kubenswrapper[4821]: fi Mar 09 18:25:59 crc kubenswrapper[4821]: Mar 09 18:25:59 crc kubenswrapper[4821]: Mar 09 18:25:59 crc kubenswrapper[4821]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 09 18:25:59 crc kubenswrapper[4821]: # Replace /etc/hosts with our modified version if needed Mar 09 18:25:59 crc kubenswrapper[4821]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 09 18:25:59 crc kubenswrapper[4821]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 09 18:25:59 crc kubenswrapper[4821]: fi Mar 09 18:25:59 crc kubenswrapper[4821]: sleep 60 & wait Mar 09 18:25:59 crc kubenswrapper[4821]: unset svc_ips Mar 09 18:25:59 crc kubenswrapper[4821]: done Mar 09 18:25:59 crc kubenswrapper[4821]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-99m5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-n9tvt_openshift-dns(b53a5b8b-3dab-4300-8b7b-c3df20eab3b7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:25:59 crc kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:25:59 crc kubenswrapper[4821]: E0309 18:25:59.933608 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:25:59 crc kubenswrapper[4821]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 09 18:25:59 crc kubenswrapper[4821]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 09 18:25:59 crc kubenswrapper[4821]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z9r74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-lw2hk_openshift-multus(1a255bc9-2034-4a34-8240-f1fd42e808bd): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:25:59 crc kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:25:59 crc kubenswrapper[4821]: E0309 18:25:59.934501 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-n9tvt" podUID="b53a5b8b-3dab-4300-8b7b-c3df20eab3b7" Mar 09 18:25:59 crc kubenswrapper[4821]: E0309 18:25:59.934742 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-lw2hk" podUID="1a255bc9-2034-4a34-8240-f1fd42e808bd" Mar 09 18:25:59 crc kubenswrapper[4821]: E0309 18:25:59.936597 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} 
{} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jqk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.939142 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:59 crc kubenswrapper[4821]: E0309 18:25:59.941763 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jqk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 09 18:25:59 crc kubenswrapper[4821]: E0309 18:25:59.942952 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 18:25:59 crc kubenswrapper[4821]: E0309 18:25:59.950514 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4xxjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-b9gd4_openshift-multus(84199f52-999d-4a44-91c7-a343ba59b10d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 09 18:25:59 crc kubenswrapper[4821]: E0309 18:25:59.953533 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" podUID="84199f52-999d-4a44-91c7-a343ba59b10d" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.953721 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.963495 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9tvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b53a5b8b-3dab-4300-8b7b-c3df20eab3b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99m5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9tvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.963972 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bfdsp"] Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.965381 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.968603 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.969175 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.969218 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.969390 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.969495 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.969563 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.969610 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.978617 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3270571a-a484-4e66-8035-f43509b58add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kk7gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.979235 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.979264 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.979272 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.979285 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.979295 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:25:59Z","lastTransitionTime":"2026-03-09T18:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:25:59 crc kubenswrapper[4821]: I0309 18:25:59.992504 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84199f52-999d-4a44-91c7-a343ba59b10d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b9gd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.001579 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55fc2290-6300-4f7d-98d7-8abdde521a83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:25:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0309 18:25:24.128997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0309 18:25:24.129108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0309 18:25:24.129664 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2492006033/tls.crt::/tmp/serving-cert-2492006033/tls.key\\\\\\\"\\\\nI0309 18:25:24.405897 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0309 18:25:24.410101 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0309 18:25:24.410116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0309 18:25:24.410139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0309 18:25:24.410148 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0309 18:25:24.421807 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0309 18:25:24.421854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421867 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0309 18:25:24.421885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0309 18:25:24.421866 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0309 18:25:24.421891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0309 18:25:24.421927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0309 18:25:24.422998 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.011825 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lw2hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a255bc9-2034-4a34-8240-f1fd42e808bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9r74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lw2hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.019764 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.019879 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-systemd-units\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.019902 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-var-lib-openvswitch\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.019919 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-run-openvswitch\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.019935 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/40e368ce-5f0d-4208-a1de-67d4ab591f82-ovn-node-metrics-cert\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.019960 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-log-socket\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.019975 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-etc-openvswitch\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.019987 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-node-log\") pod \"ovnkube-node-bfdsp\" (UID: 
\"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.020000 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/40e368ce-5f0d-4208-a1de-67d4ab591f82-ovnkube-config\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.020014 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9kmf\" (UniqueName: \"kubernetes.io/projected/40e368ce-5f0d-4208-a1de-67d4ab591f82-kube-api-access-c9kmf\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.020029 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-slash\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.020044 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-run-ovn-kubernetes\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.020070 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-kubelet\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.020084 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/40e368ce-5f0d-4208-a1de-67d4ab591f82-ovnkube-script-lib\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.020107 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-cni-bin\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.020120 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-cni-netd\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.020135 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/40e368ce-5f0d-4208-a1de-67d4ab591f82-env-overrides\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.020149 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.020171 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-run-ovn\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.020204 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-run-systemd\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.020225 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-run-netns\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.029005 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.036483 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.044092 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.054461 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.065992 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.074964 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9tvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b53a5b8b-3dab-4300-8b7b-c3df20eab3b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99m5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9tvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.082067 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.082121 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.082140 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.082163 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.082182 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:00Z","lastTransitionTime":"2026-03-09T18:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.086994 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55fc2290-6300-4f7d-98d7-8abdde521a83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:25:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0309 18:25:24.128997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0309 18:25:24.129108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0309 18:25:24.129664 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2492006033/tls.crt::/tmp/serving-cert-2492006033/tls.key\\\\\\\"\\\\nI0309 18:25:24.405897 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0309 18:25:24.410101 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0309 18:25:24.410116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0309 18:25:24.410139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0309 18:25:24.410148 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0309 18:25:24.421807 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0309 18:25:24.421854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421867 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0309 18:25:24.421885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0309 18:25:24.421866 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0309 18:25:24.421891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0309 18:25:24.421927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0309 18:25:24.422998 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.095965 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3270571a-a484-4e66-8035-f43509b58add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kk7gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.112827 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84199f52-999d-4a44-91c7-a343ba59b10d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b9gd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.121569 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-etc-openvswitch\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.121616 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/40e368ce-5f0d-4208-a1de-67d4ab591f82-ovnkube-config\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.121646 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9kmf\" (UniqueName: \"kubernetes.io/projected/40e368ce-5f0d-4208-a1de-67d4ab591f82-kube-api-access-c9kmf\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.121680 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-node-log\") pod \"ovnkube-node-bfdsp\" 
(UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.121725 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-slash\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.121731 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-etc-openvswitch\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.121756 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-run-ovn-kubernetes\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.121803 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/40e368ce-5f0d-4208-a1de-67d4ab591f82-ovnkube-script-lib\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.121837 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-slash\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.121803 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-run-ovn-kubernetes\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.121848 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-node-log\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.121877 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-kubelet\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.121839 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-kubelet\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.121917 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-cni-bin\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 
18:26:00.121948 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-cni-netd\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.121982 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/40e368ce-5f0d-4208-a1de-67d4ab591f82-env-overrides\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.121994 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-cni-bin\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.122027 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-run-ovn\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.121996 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-cni-netd\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.122060 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.122076 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-run-ovn\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.122110 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-run-systemd\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.122141 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-run-netns\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.122170 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.122171 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"systemd-units\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-systemd-units\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.122207 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-systemd-units\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.122203 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-run-systemd\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.122230 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-var-lib-openvswitch\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.122258 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-run-netns\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.122293 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-run-openvswitch\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.122263 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-run-openvswitch\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.122338 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-var-lib-openvswitch\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.122371 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/40e368ce-5f0d-4208-a1de-67d4ab591f82-ovn-node-metrics-cert\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.122401 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-log-socket\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.122483 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-log-socket\") pod 
\"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.122745 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/40e368ce-5f0d-4208-a1de-67d4ab591f82-env-overrides\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.123491 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/40e368ce-5f0d-4208-a1de-67d4ab591f82-ovnkube-config\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.123670 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/40e368ce-5f0d-4208-a1de-67d4ab591f82-ovnkube-script-lib\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.126258 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/40e368ce-5f0d-4208-a1de-67d4ab591f82-ovn-node-metrics-cert\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.133822 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e368ce-5f0d-4208-a1de-67d4ab591f82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bfdsp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.149927 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lw2hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a255bc9-2034-4a34-8240-f1fd42e808bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9r74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lw2hk\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.150236 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9kmf\" (UniqueName: \"kubernetes.io/projected/40e368ce-5f0d-4208-a1de-67d4ab591f82-kube-api-access-c9kmf\") pod \"ovnkube-node-bfdsp\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.165535 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.180739 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.185187 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.185232 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.185248 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.185270 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.185287 4821 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:00Z","lastTransitionTime":"2026-03-09T18:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.194609 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.205347 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.281954 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.287629 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.287661 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.287670 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.287684 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.287694 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:00Z","lastTransitionTime":"2026-03-09T18:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:00 crc kubenswrapper[4821]: W0309 18:26:00.293191 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40e368ce_5f0d_4208_a1de_67d4ab591f82.slice/crio-35ad5bac11a67a673410b088c66b16d64e5b64b55a43387b1c7814843428250f WatchSource:0}: Error finding container 35ad5bac11a67a673410b088c66b16d64e5b64b55a43387b1c7814843428250f: Status 404 returned error can't find the container with id 35ad5bac11a67a673410b088c66b16d64e5b64b55a43387b1c7814843428250f Mar 09 18:26:00 crc kubenswrapper[4821]: E0309 18:26:00.295157 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:26:00 crc kubenswrapper[4821]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 09 18:26:00 crc kubenswrapper[4821]: apiVersion: v1 Mar 09 18:26:00 crc kubenswrapper[4821]: clusters: Mar 09 18:26:00 crc kubenswrapper[4821]: - cluster: Mar 09 18:26:00 crc kubenswrapper[4821]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 09 18:26:00 crc kubenswrapper[4821]: server: https://api-int.crc.testing:6443 Mar 09 18:26:00 crc kubenswrapper[4821]: name: default-cluster Mar 09 18:26:00 crc kubenswrapper[4821]: contexts: Mar 09 18:26:00 crc kubenswrapper[4821]: - context: Mar 09 18:26:00 crc kubenswrapper[4821]: cluster: default-cluster Mar 09 18:26:00 crc kubenswrapper[4821]: namespace: default Mar 09 18:26:00 crc kubenswrapper[4821]: user: default-auth Mar 09 18:26:00 crc kubenswrapper[4821]: name: default-context Mar 09 18:26:00 crc kubenswrapper[4821]: current-context: default-context Mar 09 18:26:00 crc kubenswrapper[4821]: kind: Config Mar 09 18:26:00 crc kubenswrapper[4821]: preferences: {} Mar 09 18:26:00 crc kubenswrapper[4821]: users: Mar 09 18:26:00 crc 
kubenswrapper[4821]: - name: default-auth Mar 09 18:26:00 crc kubenswrapper[4821]: user: Mar 09 18:26:00 crc kubenswrapper[4821]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 09 18:26:00 crc kubenswrapper[4821]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 09 18:26:00 crc kubenswrapper[4821]: EOF Mar 09 18:26:00 crc kubenswrapper[4821]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c9kmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-bfdsp_openshift-ovn-kubernetes(40e368ce-5f0d-4208-a1de-67d4ab591f82): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:26:00 crc kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:26:00 crc kubenswrapper[4821]: E0309 18:26:00.296473 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.389729 4821 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.389769 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.389801 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.389820 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.389831 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:00Z","lastTransitionTime":"2026-03-09T18:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.491954 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.492009 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.492025 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.492047 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.492063 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:00Z","lastTransitionTime":"2026-03-09T18:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.595019 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.595055 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.595065 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.595079 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.595088 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:00Z","lastTransitionTime":"2026-03-09T18:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.696988 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.697024 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.697033 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.697046 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.697054 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:00Z","lastTransitionTime":"2026-03-09T18:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.799173 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.799211 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.799224 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.799239 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.799249 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:00Z","lastTransitionTime":"2026-03-09T18:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.902033 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.902066 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.902194 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.902214 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.902225 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:00Z","lastTransitionTime":"2026-03-09T18:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.929909 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" event={"ID":"3270571a-a484-4e66-8035-f43509b58add","Type":"ContainerStarted","Data":"403295dbf002dd232a05be291469451bc1d9415da1c61b55a63e6f1942c5512f"} Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.931095 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" event={"ID":"40e368ce-5f0d-4208-a1de-67d4ab591f82","Type":"ContainerStarted","Data":"35ad5bac11a67a673410b088c66b16d64e5b64b55a43387b1c7814843428250f"} Mar 09 18:26:00 crc kubenswrapper[4821]: E0309 18:26:00.931799 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jqk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 09 18:26:00 crc kubenswrapper[4821]: E0309 18:26:00.932074 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:26:00 crc kubenswrapper[4821]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 09 18:26:00 crc kubenswrapper[4821]: apiVersion: v1 Mar 09 18:26:00 crc 
kubenswrapper[4821]: clusters: Mar 09 18:26:00 crc kubenswrapper[4821]: - cluster: Mar 09 18:26:00 crc kubenswrapper[4821]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 09 18:26:00 crc kubenswrapper[4821]: server: https://api-int.crc.testing:6443 Mar 09 18:26:00 crc kubenswrapper[4821]: name: default-cluster Mar 09 18:26:00 crc kubenswrapper[4821]: contexts: Mar 09 18:26:00 crc kubenswrapper[4821]: - context: Mar 09 18:26:00 crc kubenswrapper[4821]: cluster: default-cluster Mar 09 18:26:00 crc kubenswrapper[4821]: namespace: default Mar 09 18:26:00 crc kubenswrapper[4821]: user: default-auth Mar 09 18:26:00 crc kubenswrapper[4821]: name: default-context Mar 09 18:26:00 crc kubenswrapper[4821]: current-context: default-context Mar 09 18:26:00 crc kubenswrapper[4821]: kind: Config Mar 09 18:26:00 crc kubenswrapper[4821]: preferences: {} Mar 09 18:26:00 crc kubenswrapper[4821]: users: Mar 09 18:26:00 crc kubenswrapper[4821]: - name: default-auth Mar 09 18:26:00 crc kubenswrapper[4821]: user: Mar 09 18:26:00 crc kubenswrapper[4821]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 09 18:26:00 crc kubenswrapper[4821]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 09 18:26:00 crc kubenswrapper[4821]: EOF Mar 09 18:26:00 crc kubenswrapper[4821]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c9kmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-bfdsp_openshift-ovn-kubernetes(40e368ce-5f0d-4208-a1de-67d4ab591f82): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:26:00 crc kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.932176 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" event={"ID":"84199f52-999d-4a44-91c7-a343ba59b10d","Type":"ContainerStarted","Data":"349df4420c07aa4d7c4b947d36b53ddd124eae9032a5445932127e8f7a124b81"} Mar 09 18:26:00 crc kubenswrapper[4821]: E0309 18:26:00.933729 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" Mar 09 18:26:00 crc kubenswrapper[4821]: E0309 18:26:00.934046 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4xxjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-b9gd4_openshift-multus(84199f52-999d-4a44-91c7-a343ba59b10d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 09 18:26:00 crc kubenswrapper[4821]: E0309 18:26:00.935227 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" podUID="84199f52-999d-4a44-91c7-a343ba59b10d" Mar 09 18:26:00 crc kubenswrapper[4821]: E0309 18:26:00.935412 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jqk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 09 18:26:00 crc kubenswrapper[4821]: E0309 18:26:00.938475 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.944844 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.959809 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.969132 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.978701 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.986640 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:00 crc kubenswrapper[4821]: I0309 18:26:00.993674 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9tvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b53a5b8b-3dab-4300-8b7b-c3df20eab3b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99m5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9tvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.001972 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.004875 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.004905 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.004914 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.004928 4821 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.004937 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:01Z","lastTransitionTime":"2026-03-09T18:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.009139 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3270571a-a484-4e66-8035-f43509b58add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-kk7gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.024430 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84199f52-999d-4a44-91c7-a343ba59b10d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b9gd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.038980 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e368ce-5f0d-4208-a1de-67d4ab591f82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bfdsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.049174 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55fc2290-6300-4f7d-98d7-8abdde521a83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:25:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0309 18:25:24.128997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0309 18:25:24.129108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0309 18:25:24.129664 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2492006033/tls.crt::/tmp/serving-cert-2492006033/tls.key\\\\\\\"\\\\nI0309 18:25:24.405897 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0309 18:25:24.410101 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0309 18:25:24.410116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0309 18:25:24.410139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0309 18:25:24.410148 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0309 18:25:24.421807 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0309 18:25:24.421854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421867 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0309 18:25:24.421885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0309 18:25:24.421866 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0309 18:25:24.421891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0309 18:25:24.421927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0309 18:25:24.422998 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.057678 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lw2hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a255bc9-2034-4a34-8240-f1fd42e808bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9r74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lw2hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.066815 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.074820 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.082142 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.089743 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.097662 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.104972 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.106719 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.106788 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.106804 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.106850 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.106864 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:01Z","lastTransitionTime":"2026-03-09T18:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.113154 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9tvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b53a5b8b-3dab-4300-8b7b-c3df20eab3b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99m5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9tvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.121241 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3270571a-a484-4e66-8035-f43509b58add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kk7gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.133014 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84199f52-999d-4a44-91c7-a343ba59b10d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b9gd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.146717 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e368ce-5f0d-4208-a1de-67d4ab591f82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bfdsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.158299 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55fc2290-6300-4f7d-98d7-8abdde521a83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:25:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0309 18:25:24.128997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0309 18:25:24.129108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0309 18:25:24.129664 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2492006033/tls.crt::/tmp/serving-cert-2492006033/tls.key\\\\\\\"\\\\nI0309 18:25:24.405897 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0309 18:25:24.410101 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0309 18:25:24.410116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0309 18:25:24.410139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0309 18:25:24.410148 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0309 18:25:24.421807 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0309 18:25:24.421854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421867 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0309 18:25:24.421885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0309 18:25:24.421866 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0309 18:25:24.421891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0309 18:25:24.421927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0309 18:25:24.422998 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.169439 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lw2hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a255bc9-2034-4a34-8240-f1fd42e808bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9r74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lw2hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.209904 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.209938 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.209949 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.209966 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.209979 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:01Z","lastTransitionTime":"2026-03-09T18:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.313384 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.313445 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.313463 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.313519 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.313537 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:01Z","lastTransitionTime":"2026-03-09T18:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.416101 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.416189 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.416200 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.416215 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.416226 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:01Z","lastTransitionTime":"2026-03-09T18:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.507492 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.507544 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.507573 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.507588 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.507597 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:01Z","lastTransitionTime":"2026-03-09T18:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:01 crc kubenswrapper[4821]: E0309 18:26:01.522465 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"acfa7603-a6f5-410a-9ddc-2890d44e3c69\\\",\\\"systemUUID\\\":\\\"ea3e2df3-251a-4cdb-9064-3c52ac509aba\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.526570 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.526610 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.526622 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.526638 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.526649 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:01Z","lastTransitionTime":"2026-03-09T18:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:01 crc kubenswrapper[4821]: E0309 18:26:01.537043 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"acfa7603-a6f5-410a-9ddc-2890d44e3c69\\\",\\\"systemUUID\\\":\\\"ea3e2df3-251a-4cdb-9064-3c52ac509aba\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.540679 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.540727 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.540744 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.540763 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.540777 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:01Z","lastTransitionTime":"2026-03-09T18:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.550724 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.550819 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:26:01 crc kubenswrapper[4821]: E0309 18:26:01.550938 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.551008 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:26:01 crc kubenswrapper[4821]: E0309 18:26:01.551121 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 09 18:26:01 crc kubenswrapper[4821]: E0309 18:26:01.551232 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 09 18:26:01 crc kubenswrapper[4821]: E0309 18:26:01.554107 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"acfa7603-a6f5-410a-9ddc-2890d44e3c69\\\",\\\"systemUUID\\\":\\\"ea3e2df3-251a-4cdb-9064-3c52ac509aba\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.558046 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.558107 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.558123 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.558143 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.558212 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:01Z","lastTransitionTime":"2026-03-09T18:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:01 crc kubenswrapper[4821]: E0309 18:26:01.571705 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"acfa7603-a6f5-410a-9ddc-2890d44e3c69\\\",\\\"systemUUID\\\":\\\"ea3e2df3-251a-4cdb-9064-3c52ac509aba\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.576003 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.576037 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.576050 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.576088 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.576103 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:01Z","lastTransitionTime":"2026-03-09T18:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:01 crc kubenswrapper[4821]: E0309 18:26:01.585685 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"acfa7603-a6f5-410a-9ddc-2890d44e3c69\\\",\\\"systemUUID\\\":\\\"ea3e2df3-251a-4cdb-9064-3c52ac509aba\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:01 crc kubenswrapper[4821]: E0309 18:26:01.585789 4821 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.590846 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.590911 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.590930 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.590952 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.590973 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:01Z","lastTransitionTime":"2026-03-09T18:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.694928 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.694990 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.695007 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.695147 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.695168 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:01Z","lastTransitionTime":"2026-03-09T18:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.797580 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.797627 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.797642 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.797661 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.797673 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:01Z","lastTransitionTime":"2026-03-09T18:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.900119 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.900179 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.900196 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.900222 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:01 crc kubenswrapper[4821]: I0309 18:26:01.900238 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:01Z","lastTransitionTime":"2026-03-09T18:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.003492 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.003620 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.003653 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.003687 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.003711 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:02Z","lastTransitionTime":"2026-03-09T18:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.112838 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.112898 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.112916 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.112943 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.112964 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:02Z","lastTransitionTime":"2026-03-09T18:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.215900 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.215980 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.216000 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.216029 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.216047 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:02Z","lastTransitionTime":"2026-03-09T18:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.319679 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.319816 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.319838 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.319860 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.319881 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:02Z","lastTransitionTime":"2026-03-09T18:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.422927 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.423055 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.423088 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.423119 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.423144 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:02Z","lastTransitionTime":"2026-03-09T18:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.526936 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.527014 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.527038 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.527068 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.527094 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:02Z","lastTransitionTime":"2026-03-09T18:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.630488 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.630590 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.630611 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.630633 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.630651 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:02Z","lastTransitionTime":"2026-03-09T18:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.733064 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.733187 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.733210 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.733236 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.733257 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:02Z","lastTransitionTime":"2026-03-09T18:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.835783 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.835892 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.835912 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.835978 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.835996 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:02Z","lastTransitionTime":"2026-03-09T18:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.937805 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.937845 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.937853 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.937868 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:02 crc kubenswrapper[4821]: I0309 18:26:02.937879 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:02Z","lastTransitionTime":"2026-03-09T18:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.040633 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.040662 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.040672 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.040685 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.040694 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:03Z","lastTransitionTime":"2026-03-09T18:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.143834 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.143900 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.143918 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.143942 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.143959 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:03Z","lastTransitionTime":"2026-03-09T18:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.246643 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.246705 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.246722 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.246745 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.246763 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:03Z","lastTransitionTime":"2026-03-09T18:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.350266 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.350696 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.350827 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.350971 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.351104 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:03Z","lastTransitionTime":"2026-03-09T18:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.454802 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.455081 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.455264 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.455521 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.455711 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:03Z","lastTransitionTime":"2026-03-09T18:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.550907 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.551001 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 09 18:26:03 crc kubenswrapper[4821]: E0309 18:26:03.551111 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.551165 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 09 18:26:03 crc kubenswrapper[4821]: E0309 18:26:03.551500 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 09 18:26:03 crc kubenswrapper[4821]: E0309 18:26:03.551631 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.558032 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.558084 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.558102 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.558127 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.558145 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:03Z","lastTransitionTime":"2026-03-09T18:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.565501 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.579710 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.591749 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9tvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b53a5b8b-3dab-4300-8b7b-c3df20eab3b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99m5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9tvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.608183 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55fc2290-6300-4f7d-98d7-8abdde521a83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:25:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0309 18:25:24.128997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0309 18:25:24.129108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0309 18:25:24.129664 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2492006033/tls.crt::/tmp/serving-cert-2492006033/tls.key\\\\\\\"\\\\nI0309 18:25:24.405897 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0309 18:25:24.410101 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0309 18:25:24.410116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0309 18:25:24.410139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0309 18:25:24.410148 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0309 18:25:24.421807 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0309 18:25:24.421854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421867 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0309 18:25:24.421885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0309 18:25:24.421866 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0309 18:25:24.421891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0309 18:25:24.421927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0309 18:25:24.422998 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.617277 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3270571a-a484-4e66-8035-f43509b58add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-kk7gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.637275 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84199f52-999d-4a44-91c7-a343ba59b10d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b9gd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.660604 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.660706 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.660732 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.660765 4821 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.660792 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:03Z","lastTransitionTime":"2026-03-09T18:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.661120 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e368ce-5f0d-4208-a1de-67d4ab591f82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":fals
e,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bfdsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.676201 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lw2hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a255bc9-2034-4a34-8240-f1fd42e808bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9r74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lw2hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.690223 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.708785 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.720038 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.729917 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.763711 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.763762 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.763779 4821 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.763803 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.763819 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:03Z","lastTransitionTime":"2026-03-09T18:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.866454 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.866515 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.866534 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.866559 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.866576 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:03Z","lastTransitionTime":"2026-03-09T18:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.969285 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.969444 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.969464 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.969489 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:03 crc kubenswrapper[4821]: I0309 18:26:03.969519 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:03Z","lastTransitionTime":"2026-03-09T18:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.071889 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.071965 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.071984 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.072007 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.072028 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:04Z","lastTransitionTime":"2026-03-09T18:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.174835 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.174877 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.174887 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.174902 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.174913 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:04Z","lastTransitionTime":"2026-03-09T18:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.277921 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.277981 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.278007 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.278037 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.278058 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:04Z","lastTransitionTime":"2026-03-09T18:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.380525 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.380598 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.380622 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.380651 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.380673 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:04Z","lastTransitionTime":"2026-03-09T18:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.483668 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.483704 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.483714 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.483729 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.483741 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:04Z","lastTransitionTime":"2026-03-09T18:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.586760 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.586816 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.586833 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.586858 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.586877 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:04Z","lastTransitionTime":"2026-03-09T18:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.689256 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.689304 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.689333 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.689351 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.689365 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:04Z","lastTransitionTime":"2026-03-09T18:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.796214 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.796376 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.796397 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.796765 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.796782 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:04Z","lastTransitionTime":"2026-03-09T18:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.901163 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.901218 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.901236 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.901260 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:04 crc kubenswrapper[4821]: I0309 18:26:04.901278 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:04Z","lastTransitionTime":"2026-03-09T18:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.003547 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.003597 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.003608 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.003629 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.003641 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:05Z","lastTransitionTime":"2026-03-09T18:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.106948 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.107015 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.107036 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.107061 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.107079 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:05Z","lastTransitionTime":"2026-03-09T18:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.210486 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.210562 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.210585 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.210609 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.210627 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:05Z","lastTransitionTime":"2026-03-09T18:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.312947 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.312997 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.313010 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.313031 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.313047 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:05Z","lastTransitionTime":"2026-03-09T18:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.415262 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.415291 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.415299 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.415311 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.415339 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:05Z","lastTransitionTime":"2026-03-09T18:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.518222 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.518277 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.518294 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.518316 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.518356 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:05Z","lastTransitionTime":"2026-03-09T18:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.550809 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.550906 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.550934 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:26:05 crc kubenswrapper[4821]: E0309 18:26:05.551077 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 09 18:26:05 crc kubenswrapper[4821]: E0309 18:26:05.551266 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 09 18:26:05 crc kubenswrapper[4821]: E0309 18:26:05.551522 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.620874 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.620938 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.620955 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.620979 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.620998 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:05Z","lastTransitionTime":"2026-03-09T18:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.724751 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.724833 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.724858 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.724892 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.724916 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:05Z","lastTransitionTime":"2026-03-09T18:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.742311 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-mfdmq"] Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.742991 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-mfdmq" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.747072 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.747600 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.747917 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.749169 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.757640 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9tvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b53a5b8b-3dab-4300-8b7b-c3df20eab3b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99m5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9tvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.768607 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mfdmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a085b570-506c-4b51-b0d1-4b9832e71c0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mfdmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.779166 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a085b570-506c-4b51-b0d1-4b9832e71c0f-serviceca\") pod \"node-ca-mfdmq\" (UID: \"a085b570-506c-4b51-b0d1-4b9832e71c0f\") " pod="openshift-image-registry/node-ca-mfdmq" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.779265 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mzff\" (UniqueName: 
\"kubernetes.io/projected/a085b570-506c-4b51-b0d1-4b9832e71c0f-kube-api-access-5mzff\") pod \"node-ca-mfdmq\" (UID: \"a085b570-506c-4b51-b0d1-4b9832e71c0f\") " pod="openshift-image-registry/node-ca-mfdmq" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.779305 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a085b570-506c-4b51-b0d1-4b9832e71c0f-host\") pod \"node-ca-mfdmq\" (UID: \"a085b570-506c-4b51-b0d1-4b9832e71c0f\") " pod="openshift-image-registry/node-ca-mfdmq" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.783766 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.796953 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.821222 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84199f52-999d-4a44-91c7-a343ba59b10d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b9gd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.833932 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.833998 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.834015 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.834055 4821 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.834073 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:05Z","lastTransitionTime":"2026-03-09T18:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.855304 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e368ce-5f0d-4208-a1de-67d4ab591f82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":fals
e,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bfdsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.867170 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55fc2290-6300-4f7d-98d7-8abdde521a83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:25:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0309 18:25:24.128997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0309 18:25:24.129108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0309 18:25:24.129664 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2492006033/tls.crt::/tmp/serving-cert-2492006033/tls.key\\\\\\\"\\\\nI0309 18:25:24.405897 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0309 18:25:24.410101 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0309 18:25:24.410116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0309 18:25:24.410139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0309 18:25:24.410148 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0309 18:25:24.421807 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0309 18:25:24.421854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421867 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0309 18:25:24.421885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0309 18:25:24.421866 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0309 18:25:24.421891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0309 18:25:24.421927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0309 18:25:24.422998 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.877194 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3270571a-a484-4e66-8035-f43509b58add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-kk7gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.880204 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mzff\" (UniqueName: \"kubernetes.io/projected/a085b570-506c-4b51-b0d1-4b9832e71c0f-kube-api-access-5mzff\") pod \"node-ca-mfdmq\" (UID: \"a085b570-506c-4b51-b0d1-4b9832e71c0f\") " pod="openshift-image-registry/node-ca-mfdmq" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.880274 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a085b570-506c-4b51-b0d1-4b9832e71c0f-host\") pod \"node-ca-mfdmq\" (UID: \"a085b570-506c-4b51-b0d1-4b9832e71c0f\") " pod="openshift-image-registry/node-ca-mfdmq" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.880385 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a085b570-506c-4b51-b0d1-4b9832e71c0f-serviceca\") pod \"node-ca-mfdmq\" (UID: \"a085b570-506c-4b51-b0d1-4b9832e71c0f\") " pod="openshift-image-registry/node-ca-mfdmq" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.880543 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a085b570-506c-4b51-b0d1-4b9832e71c0f-host\") pod \"node-ca-mfdmq\" (UID: \"a085b570-506c-4b51-b0d1-4b9832e71c0f\") " pod="openshift-image-registry/node-ca-mfdmq" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.881978 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a085b570-506c-4b51-b0d1-4b9832e71c0f-serviceca\") pod 
\"node-ca-mfdmq\" (UID: \"a085b570-506c-4b51-b0d1-4b9832e71c0f\") " pod="openshift-image-registry/node-ca-mfdmq" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.889062 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lw2hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a255bc9-2034-4a34-8240-f1fd42e808bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9r74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lw2hk\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.900435 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.910614 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mzff\" (UniqueName: \"kubernetes.io/projected/a085b570-506c-4b51-b0d1-4b9832e71c0f-kube-api-access-5mzff\") pod \"node-ca-mfdmq\" (UID: \"a085b570-506c-4b51-b0d1-4b9832e71c0f\") " pod="openshift-image-registry/node-ca-mfdmq" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.911221 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.921370 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.936379 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.937503 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.937558 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.937581 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.937610 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:05 crc kubenswrapper[4821]: I0309 18:26:05.937631 4821 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:05Z","lastTransitionTime":"2026-03-09T18:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.039904 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.039956 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.039975 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.040033 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.040052 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:06Z","lastTransitionTime":"2026-03-09T18:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.064882 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-mfdmq" Mar 09 18:26:06 crc kubenswrapper[4821]: W0309 18:26:06.081456 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda085b570_506c_4b51_b0d1_4b9832e71c0f.slice/crio-96ae6c06153f30ace262ae08204d93b7ff5c25801c18908aecaf4820ef399a61 WatchSource:0}: Error finding container 96ae6c06153f30ace262ae08204d93b7ff5c25801c18908aecaf4820ef399a61: Status 404 returned error can't find the container with id 96ae6c06153f30ace262ae08204d93b7ff5c25801c18908aecaf4820ef399a61 Mar 09 18:26:06 crc kubenswrapper[4821]: E0309 18:26:06.084502 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:26:06 crc kubenswrapper[4821]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Mar 09 18:26:06 crc kubenswrapper[4821]: while [ true ]; Mar 09 18:26:06 crc kubenswrapper[4821]: do Mar 09 18:26:06 crc kubenswrapper[4821]: for f in $(ls /tmp/serviceca); do Mar 09 18:26:06 crc kubenswrapper[4821]: echo $f Mar 09 18:26:06 crc kubenswrapper[4821]: ca_file_path="/tmp/serviceca/${f}" Mar 09 18:26:06 crc kubenswrapper[4821]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Mar 09 18:26:06 crc kubenswrapper[4821]: reg_dir_path="/etc/docker/certs.d/${f}" Mar 09 18:26:06 crc kubenswrapper[4821]: if [ -e "${reg_dir_path}" ]; then Mar 09 18:26:06 crc kubenswrapper[4821]: cp -u $ca_file_path $reg_dir_path/ca.crt Mar 09 18:26:06 crc kubenswrapper[4821]: else Mar 09 18:26:06 crc kubenswrapper[4821]: mkdir $reg_dir_path Mar 09 18:26:06 crc kubenswrapper[4821]: cp $ca_file_path $reg_dir_path/ca.crt Mar 09 18:26:06 crc kubenswrapper[4821]: fi Mar 09 18:26:06 crc kubenswrapper[4821]: done Mar 09 18:26:06 crc kubenswrapper[4821]: for d in $(ls /etc/docker/certs.d); do 
Mar 09 18:26:06 crc kubenswrapper[4821]: echo $d Mar 09 18:26:06 crc kubenswrapper[4821]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Mar 09 18:26:06 crc kubenswrapper[4821]: reg_conf_path="/tmp/serviceca/${dp}" Mar 09 18:26:06 crc kubenswrapper[4821]: if [ ! -e "${reg_conf_path}" ]; then Mar 09 18:26:06 crc kubenswrapper[4821]: rm -rf /etc/docker/certs.d/$d Mar 09 18:26:06 crc kubenswrapper[4821]: fi Mar 09 18:26:06 crc kubenswrapper[4821]: done Mar 09 18:26:06 crc kubenswrapper[4821]: sleep 60 & wait ${!} Mar 09 18:26:06 crc kubenswrapper[4821]: done Mar 09 18:26:06 crc kubenswrapper[4821]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mzff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
node-ca-mfdmq_openshift-image-registry(a085b570-506c-4b51-b0d1-4b9832e71c0f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:26:06 crc kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:26:06 crc kubenswrapper[4821]: E0309 18:26:06.085774 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-mfdmq" podUID="a085b570-506c-4b51-b0d1-4b9832e71c0f" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.143276 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.143362 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.143380 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.143404 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.143422 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:06Z","lastTransitionTime":"2026-03-09T18:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.246818 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.246886 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.246904 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.246929 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.246947 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:06Z","lastTransitionTime":"2026-03-09T18:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.349638 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.349690 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.349706 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.349730 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.349747 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:06Z","lastTransitionTime":"2026-03-09T18:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.452790 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.452851 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.452871 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.452897 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.452915 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:06Z","lastTransitionTime":"2026-03-09T18:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.555667 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.555734 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.555756 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.555786 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.555808 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:06Z","lastTransitionTime":"2026-03-09T18:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.658302 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.658411 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.658428 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.658452 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.658469 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:06Z","lastTransitionTime":"2026-03-09T18:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.761646 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.761705 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.761715 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.761733 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.761745 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:06Z","lastTransitionTime":"2026-03-09T18:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.864345 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.864394 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.864403 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.864432 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.864444 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:06Z","lastTransitionTime":"2026-03-09T18:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.952217 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-mfdmq" event={"ID":"a085b570-506c-4b51-b0d1-4b9832e71c0f","Type":"ContainerStarted","Data":"96ae6c06153f30ace262ae08204d93b7ff5c25801c18908aecaf4820ef399a61"} Mar 09 18:26:06 crc kubenswrapper[4821]: E0309 18:26:06.958538 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:26:06 crc kubenswrapper[4821]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Mar 09 18:26:06 crc kubenswrapper[4821]: while [ true ]; Mar 09 18:26:06 crc kubenswrapper[4821]: do Mar 09 18:26:06 crc kubenswrapper[4821]: for f in $(ls /tmp/serviceca); do Mar 09 18:26:06 crc kubenswrapper[4821]: echo $f Mar 09 18:26:06 crc kubenswrapper[4821]: ca_file_path="/tmp/serviceca/${f}" Mar 09 18:26:06 crc kubenswrapper[4821]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Mar 09 18:26:06 crc kubenswrapper[4821]: reg_dir_path="/etc/docker/certs.d/${f}" Mar 09 18:26:06 crc kubenswrapper[4821]: if [ -e "${reg_dir_path}" ]; then Mar 09 18:26:06 crc kubenswrapper[4821]: cp -u $ca_file_path $reg_dir_path/ca.crt Mar 09 18:26:06 crc kubenswrapper[4821]: else Mar 09 18:26:06 crc kubenswrapper[4821]: mkdir $reg_dir_path Mar 09 18:26:06 crc kubenswrapper[4821]: cp $ca_file_path $reg_dir_path/ca.crt Mar 09 18:26:06 crc kubenswrapper[4821]: fi Mar 09 18:26:06 crc kubenswrapper[4821]: done Mar 09 18:26:06 crc kubenswrapper[4821]: for d in $(ls /etc/docker/certs.d); do Mar 09 18:26:06 crc kubenswrapper[4821]: echo $d Mar 09 18:26:06 crc kubenswrapper[4821]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Mar 09 18:26:06 crc kubenswrapper[4821]: reg_conf_path="/tmp/serviceca/${dp}" Mar 09 18:26:06 crc kubenswrapper[4821]: if [ ! 
-e "${reg_conf_path}" ]; then Mar 09 18:26:06 crc kubenswrapper[4821]: rm -rf /etc/docker/certs.d/$d Mar 09 18:26:06 crc kubenswrapper[4821]: fi Mar 09 18:26:06 crc kubenswrapper[4821]: done Mar 09 18:26:06 crc kubenswrapper[4821]: sleep 60 & wait ${!} Mar 09 18:26:06 crc kubenswrapper[4821]: done Mar 09 18:26:06 crc kubenswrapper[4821]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mzff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-mfdmq_openshift-image-registry(a085b570-506c-4b51-b0d1-4b9832e71c0f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:26:06 crc kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:26:06 crc kubenswrapper[4821]: E0309 18:26:06.959979 4821 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-mfdmq" podUID="a085b570-506c-4b51-b0d1-4b9832e71c0f" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.967273 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.967334 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.967344 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.967359 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.967371 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:06Z","lastTransitionTime":"2026-03-09T18:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.972875 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:06 crc kubenswrapper[4821]: I0309 18:26:06.987432 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.003013 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.016457 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.031669 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.046367 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.057244 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9tvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b53a5b8b-3dab-4300-8b7b-c3df20eab3b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99m5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9tvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.067783 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mfdmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a085b570-506c-4b51-b0d1-4b9832e71c0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mfdmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.069938 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.070003 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.070026 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.070062 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.070087 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:07Z","lastTransitionTime":"2026-03-09T18:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.102315 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55fc2290-6300-4f7d-98d7-8abdde521a83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:25:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0309 18:25:24.128997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0309 18:25:24.129108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0309 18:25:24.129664 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2492006033/tls.crt::/tmp/serving-cert-2492006033/tls.key\\\\\\\"\\\\nI0309 18:25:24.405897 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0309 18:25:24.410101 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0309 18:25:24.410116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0309 18:25:24.410139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0309 18:25:24.410148 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0309 18:25:24.421807 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0309 18:25:24.421854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421867 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0309 18:25:24.421885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0309 18:25:24.421866 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0309 18:25:24.421891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0309 18:25:24.421927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0309 18:25:24.422998 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.123525 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3270571a-a484-4e66-8035-f43509b58add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-kk7gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.144725 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84199f52-999d-4a44-91c7-a343ba59b10d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b9gd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.166870 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e368ce-5f0d-4208-a1de-67d4ab591f82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bfdsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.172281 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.172371 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.172390 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.172415 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.172433 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:07Z","lastTransitionTime":"2026-03-09T18:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.178247 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lw2hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a255bc9-2034-4a34-8240-f1fd42e808bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9r74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lw2hk\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.276368 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.276455 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.276475 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.276505 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.276524 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:07Z","lastTransitionTime":"2026-03-09T18:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.380376 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.380444 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.380462 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.380487 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.380505 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:07Z","lastTransitionTime":"2026-03-09T18:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.483271 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.483369 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.483392 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.483421 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.483440 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:07Z","lastTransitionTime":"2026-03-09T18:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.551249 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.551496 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.551727 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:26:07 crc kubenswrapper[4821]: E0309 18:26:07.551711 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 09 18:26:07 crc kubenswrapper[4821]: E0309 18:26:07.551882 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 09 18:26:07 crc kubenswrapper[4821]: E0309 18:26:07.552100 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.586491 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.586554 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.586573 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.586600 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.586617 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:07Z","lastTransitionTime":"2026-03-09T18:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.689873 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.689967 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.689981 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.689998 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.690010 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:07Z","lastTransitionTime":"2026-03-09T18:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.792178 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.792230 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.792248 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.792270 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.792288 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:07Z","lastTransitionTime":"2026-03-09T18:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.894656 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.894722 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.894736 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.894756 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.894771 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:07Z","lastTransitionTime":"2026-03-09T18:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.998039 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.998101 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.998120 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.998143 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:07 crc kubenswrapper[4821]: I0309 18:26:07.998161 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:07Z","lastTransitionTime":"2026-03-09T18:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.100244 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.100282 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.100293 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.100312 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.100351 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:08Z","lastTransitionTime":"2026-03-09T18:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.202746 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.202789 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.202801 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.202820 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.202832 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:08Z","lastTransitionTime":"2026-03-09T18:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.306015 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.306066 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.306082 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.306106 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.306124 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:08Z","lastTransitionTime":"2026-03-09T18:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.409507 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.409551 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.409567 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.409590 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.409608 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:08Z","lastTransitionTime":"2026-03-09T18:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.511933 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.511979 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.511989 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.512007 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.512020 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:08Z","lastTransitionTime":"2026-03-09T18:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.552402 4821 scope.go:117] "RemoveContainer" containerID="119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5" Mar 09 18:26:08 crc kubenswrapper[4821]: E0309 18:26:08.553415 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:26:08 crc kubenswrapper[4821]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 09 18:26:08 crc kubenswrapper[4821]: set -o allexport Mar 09 18:26:08 crc kubenswrapper[4821]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 09 18:26:08 crc kubenswrapper[4821]: source /etc/kubernetes/apiserver-url.env Mar 09 18:26:08 crc kubenswrapper[4821]: else Mar 09 18:26:08 crc kubenswrapper[4821]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 09 18:26:08 crc kubenswrapper[4821]: exit 1 Mar 09 18:26:08 crc kubenswrapper[4821]: fi Mar 09 18:26:08 crc kubenswrapper[4821]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 09 18:26:08 crc kubenswrapper[4821]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:26:08 crc kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:26:08 crc kubenswrapper[4821]: E0309 18:26:08.555442 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.615031 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 
18:26:08.615093 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.615110 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.615133 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.615152 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:08Z","lastTransitionTime":"2026-03-09T18:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.717350 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.717432 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.717449 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.717467 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.717480 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:08Z","lastTransitionTime":"2026-03-09T18:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.820685 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.820747 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.820765 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.820790 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.820811 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:08Z","lastTransitionTime":"2026-03-09T18:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.924835 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.924895 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.924912 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.924938 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.924955 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:08Z","lastTransitionTime":"2026-03-09T18:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.960740 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.963016 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"230962e0998443268c5544ff6cd84de9737ca5052b545fefaf936a65035eac76"} Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.963301 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.973955 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lw2hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a255bc9-2034-4a34-8240-f1fd42e808bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9r74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lw2hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.986096 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:08 crc kubenswrapper[4821]: I0309 18:26:08.996210 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.012968 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.027069 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.027096 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.027104 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.027117 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.027127 4821 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:09Z","lastTransitionTime":"2026-03-09T18:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.027103 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.039520 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.053207 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.063204 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9tvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b53a5b8b-3dab-4300-8b7b-c3df20eab3b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99m5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9tvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.072509 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mfdmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a085b570-506c-4b51-b0d1-4b9832e71c0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mfdmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.088515 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55fc2290-6300-4f7d-98d7-8abdde521a83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://230962e0998443268c5544ff6cd84de9737ca5052b545fefaf936a65035eac76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:25:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0309 18:25:24.128997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0309 18:25:24.129108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0309 18:25:24.129664 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2492006033/tls.crt::/tmp/serving-cert-2492006033/tls.key\\\\\\\"\\\\nI0309 18:25:24.405897 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0309 18:25:24.410101 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0309 18:25:24.410116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0309 18:25:24.410139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0309 18:25:24.410148 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0309 18:25:24.421807 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0309 18:25:24.421854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421867 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0309 18:25:24.421885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0309 18:25:24.421866 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0309 18:25:24.421891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0309 18:25:24.421927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0309 18:25:24.422998 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:26:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.107420 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3270571a-a484-4e66-8035-f43509b58add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-kk7gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.125725 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84199f52-999d-4a44-91c7-a343ba59b10d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b9gd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.132013 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.132432 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.132544 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.132635 4821 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.132898 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:09Z","lastTransitionTime":"2026-03-09T18:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.153689 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e368ce-5f0d-4208-a1de-67d4ab591f82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":fals
e,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bfdsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.236672 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.236732 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.236750 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.236775 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.236792 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:09Z","lastTransitionTime":"2026-03-09T18:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.340348 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.340398 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.340408 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.340428 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.340439 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:09Z","lastTransitionTime":"2026-03-09T18:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.443691 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.443751 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.443768 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.443793 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.443810 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:09Z","lastTransitionTime":"2026-03-09T18:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.545666 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.545708 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.545718 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.545733 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.545745 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:09Z","lastTransitionTime":"2026-03-09T18:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.550981 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.550997 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.551068 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:26:09 crc kubenswrapper[4821]: E0309 18:26:09.551181 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 09 18:26:09 crc kubenswrapper[4821]: E0309 18:26:09.551368 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 09 18:26:09 crc kubenswrapper[4821]: E0309 18:26:09.551660 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 09 18:26:09 crc kubenswrapper[4821]: E0309 18:26:09.552684 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]V
olumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 09 18:26:09 crc kubenswrapper[4821]: E0309 18:26:09.553900 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.648480 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.648529 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.648540 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.648556 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.648566 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:09Z","lastTransitionTime":"2026-03-09T18:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.750648 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.750709 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.750719 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.750733 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.750744 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:09Z","lastTransitionTime":"2026-03-09T18:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.853632 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.853677 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.853687 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.853703 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.853717 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:09Z","lastTransitionTime":"2026-03-09T18:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.956788 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.956848 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.956857 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.956888 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:09 crc kubenswrapper[4821]: I0309 18:26:09.956898 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:09Z","lastTransitionTime":"2026-03-09T18:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.059469 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.059537 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.059559 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.059590 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.059610 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:10Z","lastTransitionTime":"2026-03-09T18:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.162199 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.162236 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.162245 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.162259 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.162270 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:10Z","lastTransitionTime":"2026-03-09T18:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.265304 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.265397 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.265419 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.265446 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.265465 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:10Z","lastTransitionTime":"2026-03-09T18:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.369002 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.369053 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.369063 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.369079 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.369089 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:10Z","lastTransitionTime":"2026-03-09T18:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.471251 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.471351 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.471375 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.471403 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.471424 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:10Z","lastTransitionTime":"2026-03-09T18:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:10 crc kubenswrapper[4821]: E0309 18:26:10.552678 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:26:10 crc kubenswrapper[4821]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 09 18:26:10 crc kubenswrapper[4821]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 09 18:26:10 crc kubenswrapper[4821]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z9r74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-lw2hk_openshift-multus(1a255bc9-2034-4a34-8240-f1fd42e808bd): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:26:10 crc kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:26:10 crc kubenswrapper[4821]: E0309 18:26:10.554516 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-lw2hk" podUID="1a255bc9-2034-4a34-8240-f1fd42e808bd" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.574081 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.574162 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.574181 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.574209 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.574231 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:10Z","lastTransitionTime":"2026-03-09T18:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.676315 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.676380 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.676389 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.676404 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.676414 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:10Z","lastTransitionTime":"2026-03-09T18:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.779223 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.779272 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.779283 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.779303 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.779345 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:10Z","lastTransitionTime":"2026-03-09T18:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.881863 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.881926 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.881945 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.881969 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.881991 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:10Z","lastTransitionTime":"2026-03-09T18:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.983984 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.984024 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.984034 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.984047 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:10 crc kubenswrapper[4821]: I0309 18:26:10.984056 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:10Z","lastTransitionTime":"2026-03-09T18:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.087015 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.087074 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.087098 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.087128 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.087147 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:11Z","lastTransitionTime":"2026-03-09T18:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.189439 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.189479 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.189488 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.189501 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.189510 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:11Z","lastTransitionTime":"2026-03-09T18:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.292080 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.292114 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.292124 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.292182 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.292195 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:11Z","lastTransitionTime":"2026-03-09T18:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.394416 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.394489 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.394507 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.394532 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.394552 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:11Z","lastTransitionTime":"2026-03-09T18:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.497802 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.497842 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.497853 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.497865 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.497875 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:11Z","lastTransitionTime":"2026-03-09T18:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.545583 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54"] Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.551944 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54"
Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.555426 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.555926 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.558873 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.559290 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.559371 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 09 18:26:11 crc kubenswrapper[4821]: E0309 18:26:11.559452 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 09 18:26:11 crc kubenswrapper[4821]: E0309 18:26:11.559543 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 09 18:26:11 crc kubenswrapper[4821]: E0309 18:26:11.560140 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 09 18:26:11 crc kubenswrapper[4821]: E0309 18:26:11.561761 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jqk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
Mar 09 18:26:11 crc kubenswrapper[4821]: E0309 18:26:11.561947 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Mar 09 18:26:11 crc kubenswrapper[4821]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe
Mar 09 18:26:11 crc kubenswrapper[4821]: if [[ -f "/env/_master" ]]; then
Mar 09 18:26:11 crc kubenswrapper[4821]: set -o allexport
Mar 09 18:26:11 crc kubenswrapper[4821]: source "/env/_master"
Mar 09 18:26:11 crc kubenswrapper[4821]: set +o allexport
Mar 09 18:26:11 crc kubenswrapper[4821]: fi
Mar 09 18:26:11 crc kubenswrapper[4821]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled.
Mar 09 18:26:11 crc kubenswrapper[4821]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791
Mar 09 18:26:11 crc kubenswrapper[4821]: ho_enable="--enable-hybrid-overlay"
Mar 09 18:26:11 crc kubenswrapper[4821]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook"
Mar 09 18:26:11 crc kubenswrapper[4821]: # extra-allowed-user: service account `ovn-kubernetes-control-plane`
Mar 09 18:26:11 crc kubenswrapper[4821]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager)
Mar 09 18:26:11 crc kubenswrapper[4821]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \
Mar 09 18:26:11 crc kubenswrapper[4821]: --webhook-cert-dir="/etc/webhook-cert" \
Mar 09 18:26:11 crc kubenswrapper[4821]: --webhook-host=127.0.0.1 \
Mar 09 18:26:11 crc kubenswrapper[4821]: --webhook-port=9743 \
Mar 09 18:26:11 crc kubenswrapper[4821]: ${ho_enable} \
Mar 09 18:26:11 crc kubenswrapper[4821]: --enable-interconnect \
Mar 09 18:26:11 crc kubenswrapper[4821]: --disable-approver \
Mar 09 18:26:11 crc kubenswrapper[4821]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \
Mar 09 18:26:11 crc kubenswrapper[4821]: --wait-for-kubernetes-api=200s \
Mar 09 18:26:11 crc kubenswrapper[4821]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \
Mar 09 18:26:11 crc kubenswrapper[4821]: --loglevel="${LOGLEVEL}"
Mar 09 18:26:11 crc kubenswrapper[4821]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Mar 09 18:26:11 crc kubenswrapper[4821]: > logger="UnhandledError"
Mar 09 18:26:11 crc kubenswrapper[4821]: E0309 18:26:11.567173 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Mar 09 18:26:11 crc kubenswrapper[4821]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe
Mar 09 18:26:11 crc kubenswrapper[4821]: if [[ -f "/env/_master" ]]; then
Mar 09 18:26:11 crc kubenswrapper[4821]: set -o allexport
Mar 09 18:26:11 crc kubenswrapper[4821]: source "/env/_master"
Mar 09 18:26:11 crc kubenswrapper[4821]: set +o allexport
Mar 09 18:26:11 crc kubenswrapper[4821]: fi
Mar 09 18:26:11 crc kubenswrapper[4821]: 
Mar 09 18:26:11 crc kubenswrapper[4821]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver"
Mar 09 18:26:11 crc kubenswrapper[4821]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \
Mar 09 18:26:11 crc kubenswrapper[4821]: --disable-webhook \
Mar 09 18:26:11 crc kubenswrapper[4821]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \
Mar 09 18:26:11 crc kubenswrapper[4821]: --loglevel="${LOGLEVEL}"
Mar 09 18:26:11 crc kubenswrapper[4821]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:26:11 crc kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:26:11 crc kubenswrapper[4821]: E0309 18:26:11.567239 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml 
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jqk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 09 18:26:11 crc kubenswrapper[4821]: E0309 18:26:11.569475 4821 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add"
Mar 09 18:26:11 crc kubenswrapper[4821]: E0309 18:26:11.569490 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d"
Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.573306 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55fc2290-6300-4f7d-98d7-8abdde521a83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://230962e0998443268c5544ff6cd84de9737ca5052b545fefaf936a65035eac76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:25:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0309 18:25:24.128997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0309 18:25:24.129108 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0309 18:25:24.129664 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2492006033/tls.crt::/tmp/serving-cert-2492006033/tls.key\\\\\\\"\\\\nI0309 18:25:24.405897 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0309 18:25:24.410101 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0309 18:25:24.410116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0309 18:25:24.410139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0309 18:25:24.410148 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0309 18:25:24.421807 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0309 18:25:24.421854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421867 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0309 18:25:24.421885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0309 18:25:24.421866 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0309 18:25:24.421891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0309 18:25:24.421927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0309 18:25:24.422998 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:26:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.585814 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3270571a-a484-4e66-8035-f43509b58add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-kk7gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.602188 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.602246 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.602267 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.602291 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.602310 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:11Z","lastTransitionTime":"2026-03-09T18:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.603765 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84199f52-999d-4a44-91c7-a343ba59b10d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b9gd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.631640 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e368ce-5f0d-4208-a1de-67d4ab591f82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bfdsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.641008 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e896f92d-7d30-4f36-b892-5c8c9c792530-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-d6g54\" (UID: \"e896f92d-7d30-4f36-b892-5c8c9c792530\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.641154 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9tc7\" (UniqueName: \"kubernetes.io/projected/e896f92d-7d30-4f36-b892-5c8c9c792530-kube-api-access-n9tc7\") pod \"ovnkube-control-plane-749d76644c-d6g54\" (UID: \"e896f92d-7d30-4f36-b892-5c8c9c792530\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.641364 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e896f92d-7d30-4f36-b892-5c8c9c792530-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-d6g54\" (UID: \"e896f92d-7d30-4f36-b892-5c8c9c792530\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.641452 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e896f92d-7d30-4f36-b892-5c8c9c792530-env-overrides\") pod \"ovnkube-control-plane-749d76644c-d6g54\" (UID: 
\"e896f92d-7d30-4f36-b892-5c8c9c792530\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.646003 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lw2hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a255bc9-2034-4a34-8240-f1fd42e808bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9r74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lw2hk\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.658451 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e896f92d-7d30-4f36-b892-5c8c9c792530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9tc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9tc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d6g54\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.673230 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.684046 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.695078 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.705264 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.705383 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.705403 4821 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.705460 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.705478 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:11Z","lastTransitionTime":"2026-03-09T18:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.707734 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.724528 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.736455 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.742071 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e896f92d-7d30-4f36-b892-5c8c9c792530-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-d6g54\" (UID: \"e896f92d-7d30-4f36-b892-5c8c9c792530\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.742155 4821 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e896f92d-7d30-4f36-b892-5c8c9c792530-env-overrides\") pod \"ovnkube-control-plane-749d76644c-d6g54\" (UID: \"e896f92d-7d30-4f36-b892-5c8c9c792530\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.742213 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e896f92d-7d30-4f36-b892-5c8c9c792530-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-d6g54\" (UID: \"e896f92d-7d30-4f36-b892-5c8c9c792530\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.742271 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9tc7\" (UniqueName: \"kubernetes.io/projected/e896f92d-7d30-4f36-b892-5c8c9c792530-kube-api-access-n9tc7\") pod \"ovnkube-control-plane-749d76644c-d6g54\" (UID: \"e896f92d-7d30-4f36-b892-5c8c9c792530\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.743653 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e896f92d-7d30-4f36-b892-5c8c9c792530-env-overrides\") pod \"ovnkube-control-plane-749d76644c-d6g54\" (UID: \"e896f92d-7d30-4f36-b892-5c8c9c792530\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.744019 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e896f92d-7d30-4f36-b892-5c8c9c792530-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-d6g54\" (UID: \"e896f92d-7d30-4f36-b892-5c8c9c792530\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.745035 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9tvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b53a5b8b-3dab-4300-8b7b-c3df20eab3b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99m5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9tvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.751915 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e896f92d-7d30-4f36-b892-5c8c9c792530-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-d6g54\" (UID: \"e896f92d-7d30-4f36-b892-5c8c9c792530\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.755537 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mfdmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a085b570-506c-4b51-b0d1-4b9832e71c0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mfdmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.761212 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9tc7\" (UniqueName: \"kubernetes.io/projected/e896f92d-7d30-4f36-b892-5c8c9c792530-kube-api-access-n9tc7\") pod \"ovnkube-control-plane-749d76644c-d6g54\" (UID: \"e896f92d-7d30-4f36-b892-5c8c9c792530\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.808961 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.809025 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.809043 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.809064 4821 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.809081 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:11Z","lastTransitionTime":"2026-03-09T18:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.880042 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54" Mar 09 18:26:11 crc kubenswrapper[4821]: E0309 18:26:11.902762 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:26:11 crc kubenswrapper[4821]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Mar 09 18:26:11 crc kubenswrapper[4821]: set -euo pipefail Mar 09 18:26:11 crc kubenswrapper[4821]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Mar 09 18:26:11 crc kubenswrapper[4821]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Mar 09 18:26:11 crc kubenswrapper[4821]: # As the secret mount is optional we must wait for the files to be present. Mar 09 18:26:11 crc kubenswrapper[4821]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
Mar 09 18:26:11 crc kubenswrapper[4821]: TS=$(date +%s) Mar 09 18:26:11 crc kubenswrapper[4821]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Mar 09 18:26:11 crc kubenswrapper[4821]: HAS_LOGGED_INFO=0 Mar 09 18:26:11 crc kubenswrapper[4821]: Mar 09 18:26:11 crc kubenswrapper[4821]: log_missing_certs(){ Mar 09 18:26:11 crc kubenswrapper[4821]: CUR_TS=$(date +%s) Mar 09 18:26:11 crc kubenswrapper[4821]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Mar 09 18:26:11 crc kubenswrapper[4821]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Mar 09 18:26:11 crc kubenswrapper[4821]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Mar 09 18:26:11 crc kubenswrapper[4821]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Mar 09 18:26:11 crc kubenswrapper[4821]: HAS_LOGGED_INFO=1 Mar 09 18:26:11 crc kubenswrapper[4821]: fi Mar 09 18:26:11 crc kubenswrapper[4821]: } Mar 09 18:26:11 crc kubenswrapper[4821]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Mar 09 18:26:11 crc kubenswrapper[4821]: log_missing_certs Mar 09 18:26:11 crc kubenswrapper[4821]: sleep 5 Mar 09 18:26:11 crc kubenswrapper[4821]: done Mar 09 18:26:11 crc kubenswrapper[4821]: Mar 09 18:26:11 crc kubenswrapper[4821]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Mar 09 18:26:11 crc kubenswrapper[4821]: exec /usr/bin/kube-rbac-proxy \ Mar 09 18:26:11 crc kubenswrapper[4821]: --logtostderr \ Mar 09 18:26:11 crc kubenswrapper[4821]: --secure-listen-address=:9108 \ Mar 09 18:26:11 crc kubenswrapper[4821]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Mar 09 18:26:11 crc kubenswrapper[4821]: --upstream=http://127.0.0.1:29108/ \ Mar 09 18:26:11 crc kubenswrapper[4821]: --tls-private-key-file=${TLS_PK} \ Mar 09 18:26:11 crc kubenswrapper[4821]: --tls-cert-file=${TLS_CERT} Mar 09 18:26:11 crc kubenswrapper[4821]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n9tc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-d6g54_openshift-ovn-kubernetes(e896f92d-7d30-4f36-b892-5c8c9c792530): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:26:11 crc kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:26:11 crc kubenswrapper[4821]: E0309 18:26:11.906298 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:26:11 crc kubenswrapper[4821]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 09 18:26:11 crc kubenswrapper[4821]: if [[ -f "/env/_master" ]]; then Mar 09 18:26:11 crc kubenswrapper[4821]: set -o allexport Mar 09 18:26:11 crc kubenswrapper[4821]: source "/env/_master" Mar 09 18:26:11 crc kubenswrapper[4821]: set +o allexport Mar 09 18:26:11 crc kubenswrapper[4821]: fi Mar 09 18:26:11 crc kubenswrapper[4821]: Mar 09 18:26:11 crc kubenswrapper[4821]: ovn_v4_join_subnet_opt= Mar 09 18:26:11 crc kubenswrapper[4821]: if [[ "" != "" ]]; then Mar 09 18:26:11 crc kubenswrapper[4821]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Mar 09 
18:26:11 crc kubenswrapper[4821]: fi Mar 09 18:26:11 crc kubenswrapper[4821]: ovn_v6_join_subnet_opt= Mar 09 18:26:11 crc kubenswrapper[4821]: if [[ "" != "" ]]; then Mar 09 18:26:11 crc kubenswrapper[4821]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Mar 09 18:26:11 crc kubenswrapper[4821]: fi Mar 09 18:26:11 crc kubenswrapper[4821]: Mar 09 18:26:11 crc kubenswrapper[4821]: ovn_v4_transit_switch_subnet_opt= Mar 09 18:26:11 crc kubenswrapper[4821]: if [[ "" != "" ]]; then Mar 09 18:26:11 crc kubenswrapper[4821]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Mar 09 18:26:11 crc kubenswrapper[4821]: fi Mar 09 18:26:11 crc kubenswrapper[4821]: ovn_v6_transit_switch_subnet_opt= Mar 09 18:26:11 crc kubenswrapper[4821]: if [[ "" != "" ]]; then Mar 09 18:26:11 crc kubenswrapper[4821]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Mar 09 18:26:11 crc kubenswrapper[4821]: fi Mar 09 18:26:11 crc kubenswrapper[4821]: Mar 09 18:26:11 crc kubenswrapper[4821]: dns_name_resolver_enabled_flag= Mar 09 18:26:11 crc kubenswrapper[4821]: if [[ "false" == "true" ]]; then Mar 09 18:26:11 crc kubenswrapper[4821]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Mar 09 18:26:11 crc kubenswrapper[4821]: fi Mar 09 18:26:11 crc kubenswrapper[4821]: Mar 09 18:26:11 crc kubenswrapper[4821]: persistent_ips_enabled_flag= Mar 09 18:26:11 crc kubenswrapper[4821]: if [[ "true" == "true" ]]; then Mar 09 18:26:11 crc kubenswrapper[4821]: persistent_ips_enabled_flag="--enable-persistent-ips" Mar 09 18:26:11 crc kubenswrapper[4821]: fi Mar 09 18:26:11 crc kubenswrapper[4821]: Mar 09 18:26:11 crc kubenswrapper[4821]: # This is needed so that converting clusters from GA to TP Mar 09 18:26:11 crc kubenswrapper[4821]: # will rollout control plane pods as well Mar 09 18:26:11 crc kubenswrapper[4821]: network_segmentation_enabled_flag= Mar 09 18:26:11 crc kubenswrapper[4821]: multi_network_enabled_flag= Mar 09 18:26:11 crc 
kubenswrapper[4821]: if [[ "true" == "true" ]]; then Mar 09 18:26:11 crc kubenswrapper[4821]: multi_network_enabled_flag="--enable-multi-network" Mar 09 18:26:11 crc kubenswrapper[4821]: network_segmentation_enabled_flag="--enable-network-segmentation" Mar 09 18:26:11 crc kubenswrapper[4821]: fi Mar 09 18:26:11 crc kubenswrapper[4821]: Mar 09 18:26:11 crc kubenswrapper[4821]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Mar 09 18:26:11 crc kubenswrapper[4821]: exec /usr/bin/ovnkube \ Mar 09 18:26:11 crc kubenswrapper[4821]: --enable-interconnect \ Mar 09 18:26:11 crc kubenswrapper[4821]: --init-cluster-manager "${K8S_NODE}" \ Mar 09 18:26:11 crc kubenswrapper[4821]: --config-file=/run/ovnkube-config/ovnkube.conf \ Mar 09 18:26:11 crc kubenswrapper[4821]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Mar 09 18:26:11 crc kubenswrapper[4821]: --metrics-bind-address "127.0.0.1:29108" \ Mar 09 18:26:11 crc kubenswrapper[4821]: --metrics-enable-pprof \ Mar 09 18:26:11 crc kubenswrapper[4821]: --metrics-enable-config-duration \ Mar 09 18:26:11 crc kubenswrapper[4821]: ${ovn_v4_join_subnet_opt} \ Mar 09 18:26:11 crc kubenswrapper[4821]: ${ovn_v6_join_subnet_opt} \ Mar 09 18:26:11 crc kubenswrapper[4821]: ${ovn_v4_transit_switch_subnet_opt} \ Mar 09 18:26:11 crc kubenswrapper[4821]: ${ovn_v6_transit_switch_subnet_opt} \ Mar 09 18:26:11 crc kubenswrapper[4821]: ${dns_name_resolver_enabled_flag} \ Mar 09 18:26:11 crc kubenswrapper[4821]: ${persistent_ips_enabled_flag} \ Mar 09 18:26:11 crc kubenswrapper[4821]: ${multi_network_enabled_flag} \ Mar 09 18:26:11 crc kubenswrapper[4821]: ${network_segmentation_enabled_flag} Mar 09 18:26:11 crc kubenswrapper[4821]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n9tc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-d6g54_openshift-ovn-kubernetes(e896f92d-7d30-4f36-b892-5c8c9c792530): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:26:11 crc kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:26:11 crc kubenswrapper[4821]: E0309 18:26:11.907480 4821 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54" podUID="e896f92d-7d30-4f36-b892-5c8c9c792530" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.911533 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.911572 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.911584 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.911600 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.911613 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:11Z","lastTransitionTime":"2026-03-09T18:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.971458 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54" event={"ID":"e896f92d-7d30-4f36-b892-5c8c9c792530","Type":"ContainerStarted","Data":"e0b1e3837b28c6932cd197a3317fa950b41652e48b2f76b7a197b8030a147ba3"} Mar 09 18:26:11 crc kubenswrapper[4821]: E0309 18:26:11.973405 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:26:11 crc kubenswrapper[4821]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Mar 09 18:26:11 crc kubenswrapper[4821]: set -euo pipefail Mar 09 18:26:11 crc kubenswrapper[4821]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Mar 09 18:26:11 crc kubenswrapper[4821]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Mar 09 18:26:11 crc kubenswrapper[4821]: # As the secret mount is optional we must wait for the files to be present. Mar 09 18:26:11 crc kubenswrapper[4821]: # The service is created in monitor.yaml and this is created in sdn.yaml. Mar 09 18:26:11 crc kubenswrapper[4821]: TS=$(date +%s) Mar 09 18:26:11 crc kubenswrapper[4821]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Mar 09 18:26:11 crc kubenswrapper[4821]: HAS_LOGGED_INFO=0 Mar 09 18:26:11 crc kubenswrapper[4821]: Mar 09 18:26:11 crc kubenswrapper[4821]: log_missing_certs(){ Mar 09 18:26:11 crc kubenswrapper[4821]: CUR_TS=$(date +%s) Mar 09 18:26:11 crc kubenswrapper[4821]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Mar 09 18:26:11 crc kubenswrapper[4821]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Mar 09 18:26:11 crc kubenswrapper[4821]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Mar 09 18:26:11 crc kubenswrapper[4821]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. 
Mar 09 18:26:11 crc kubenswrapper[4821]: HAS_LOGGED_INFO=1 Mar 09 18:26:11 crc kubenswrapper[4821]: fi Mar 09 18:26:11 crc kubenswrapper[4821]: } Mar 09 18:26:11 crc kubenswrapper[4821]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Mar 09 18:26:11 crc kubenswrapper[4821]: log_missing_certs Mar 09 18:26:11 crc kubenswrapper[4821]: sleep 5 Mar 09 18:26:11 crc kubenswrapper[4821]: done Mar 09 18:26:11 crc kubenswrapper[4821]: Mar 09 18:26:11 crc kubenswrapper[4821]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Mar 09 18:26:11 crc kubenswrapper[4821]: exec /usr/bin/kube-rbac-proxy \ Mar 09 18:26:11 crc kubenswrapper[4821]: --logtostderr \ Mar 09 18:26:11 crc kubenswrapper[4821]: --secure-listen-address=:9108 \ Mar 09 18:26:11 crc kubenswrapper[4821]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Mar 09 18:26:11 crc kubenswrapper[4821]: --upstream=http://127.0.0.1:29108/ \ Mar 09 18:26:11 crc kubenswrapper[4821]: --tls-private-key-file=${TLS_PK} \ Mar 09 18:26:11 crc kubenswrapper[4821]: --tls-cert-file=${TLS_CERT} Mar 09 18:26:11 crc kubenswrapper[4821]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n9tc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-d6g54_openshift-ovn-kubernetes(e896f92d-7d30-4f36-b892-5c8c9c792530): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:26:11 crc kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:26:11 crc kubenswrapper[4821]: E0309 18:26:11.976585 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:26:11 crc kubenswrapper[4821]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 09 18:26:11 crc kubenswrapper[4821]: if [[ -f "/env/_master" ]]; then Mar 09 18:26:11 crc kubenswrapper[4821]: set -o allexport Mar 09 18:26:11 crc kubenswrapper[4821]: source "/env/_master" Mar 09 18:26:11 crc kubenswrapper[4821]: set +o allexport Mar 09 18:26:11 crc kubenswrapper[4821]: fi Mar 09 18:26:11 crc kubenswrapper[4821]: Mar 09 18:26:11 crc kubenswrapper[4821]: ovn_v4_join_subnet_opt= Mar 09 18:26:11 crc kubenswrapper[4821]: if [[ "" != "" ]]; then Mar 09 18:26:11 crc kubenswrapper[4821]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Mar 09 
18:26:11 crc kubenswrapper[4821]: fi Mar 09 18:26:11 crc kubenswrapper[4821]: ovn_v6_join_subnet_opt= Mar 09 18:26:11 crc kubenswrapper[4821]: if [[ "" != "" ]]; then Mar 09 18:26:11 crc kubenswrapper[4821]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Mar 09 18:26:11 crc kubenswrapper[4821]: fi Mar 09 18:26:11 crc kubenswrapper[4821]: Mar 09 18:26:11 crc kubenswrapper[4821]: ovn_v4_transit_switch_subnet_opt= Mar 09 18:26:11 crc kubenswrapper[4821]: if [[ "" != "" ]]; then Mar 09 18:26:11 crc kubenswrapper[4821]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Mar 09 18:26:11 crc kubenswrapper[4821]: fi Mar 09 18:26:11 crc kubenswrapper[4821]: ovn_v6_transit_switch_subnet_opt= Mar 09 18:26:11 crc kubenswrapper[4821]: if [[ "" != "" ]]; then Mar 09 18:26:11 crc kubenswrapper[4821]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Mar 09 18:26:11 crc kubenswrapper[4821]: fi Mar 09 18:26:11 crc kubenswrapper[4821]: Mar 09 18:26:11 crc kubenswrapper[4821]: dns_name_resolver_enabled_flag= Mar 09 18:26:11 crc kubenswrapper[4821]: if [[ "false" == "true" ]]; then Mar 09 18:26:11 crc kubenswrapper[4821]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Mar 09 18:26:11 crc kubenswrapper[4821]: fi Mar 09 18:26:11 crc kubenswrapper[4821]: Mar 09 18:26:11 crc kubenswrapper[4821]: persistent_ips_enabled_flag= Mar 09 18:26:11 crc kubenswrapper[4821]: if [[ "true" == "true" ]]; then Mar 09 18:26:11 crc kubenswrapper[4821]: persistent_ips_enabled_flag="--enable-persistent-ips" Mar 09 18:26:11 crc kubenswrapper[4821]: fi Mar 09 18:26:11 crc kubenswrapper[4821]: Mar 09 18:26:11 crc kubenswrapper[4821]: # This is needed so that converting clusters from GA to TP Mar 09 18:26:11 crc kubenswrapper[4821]: # will rollout control plane pods as well Mar 09 18:26:11 crc kubenswrapper[4821]: network_segmentation_enabled_flag= Mar 09 18:26:11 crc kubenswrapper[4821]: multi_network_enabled_flag= Mar 09 18:26:11 crc 
kubenswrapper[4821]: if [[ "true" == "true" ]]; then Mar 09 18:26:11 crc kubenswrapper[4821]: multi_network_enabled_flag="--enable-multi-network" Mar 09 18:26:11 crc kubenswrapper[4821]: network_segmentation_enabled_flag="--enable-network-segmentation" Mar 09 18:26:11 crc kubenswrapper[4821]: fi Mar 09 18:26:11 crc kubenswrapper[4821]: Mar 09 18:26:11 crc kubenswrapper[4821]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Mar 09 18:26:11 crc kubenswrapper[4821]: exec /usr/bin/ovnkube \ Mar 09 18:26:11 crc kubenswrapper[4821]: --enable-interconnect \ Mar 09 18:26:11 crc kubenswrapper[4821]: --init-cluster-manager "${K8S_NODE}" \ Mar 09 18:26:11 crc kubenswrapper[4821]: --config-file=/run/ovnkube-config/ovnkube.conf \ Mar 09 18:26:11 crc kubenswrapper[4821]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Mar 09 18:26:11 crc kubenswrapper[4821]: --metrics-bind-address "127.0.0.1:29108" \ Mar 09 18:26:11 crc kubenswrapper[4821]: --metrics-enable-pprof \ Mar 09 18:26:11 crc kubenswrapper[4821]: --metrics-enable-config-duration \ Mar 09 18:26:11 crc kubenswrapper[4821]: ${ovn_v4_join_subnet_opt} \ Mar 09 18:26:11 crc kubenswrapper[4821]: ${ovn_v6_join_subnet_opt} \ Mar 09 18:26:11 crc kubenswrapper[4821]: ${ovn_v4_transit_switch_subnet_opt} \ Mar 09 18:26:11 crc kubenswrapper[4821]: ${ovn_v6_transit_switch_subnet_opt} \ Mar 09 18:26:11 crc kubenswrapper[4821]: ${dns_name_resolver_enabled_flag} \ Mar 09 18:26:11 crc kubenswrapper[4821]: ${persistent_ips_enabled_flag} \ Mar 09 18:26:11 crc kubenswrapper[4821]: ${multi_network_enabled_flag} \ Mar 09 18:26:11 crc kubenswrapper[4821]: ${network_segmentation_enabled_flag} Mar 09 18:26:11 crc kubenswrapper[4821]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n9tc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-d6g54_openshift-ovn-kubernetes(e896f92d-7d30-4f36-b892-5c8c9c792530): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:26:11 crc kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:26:11 crc kubenswrapper[4821]: E0309 18:26:11.977841 4821 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54" podUID="e896f92d-7d30-4f36-b892-5c8c9c792530" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.979548 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.979585 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.979596 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.979613 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.979626 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:11Z","lastTransitionTime":"2026-03-09T18:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:11 crc kubenswrapper[4821]: I0309 18:26:11.992857 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55fc2290-6300-4f7d-98d7-8abdde521a83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://230962e0998443268c5544ff6cd84de9737ca5052b545fefaf936a65035eac76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:25:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0309 18:25:24.128997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0309 18:25:24.129108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0309 18:25:24.129664 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2492006033/tls.crt::/tmp/serving-cert-2492006033/tls.key\\\\\\\"\\\\nI0309 18:25:24.405897 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0309 18:25:24.410101 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0309 18:25:24.410116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0309 18:25:24.410139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0309 18:25:24.410148 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0309 18:25:24.421807 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0309 18:25:24.421854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421867 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0309 18:25:24.421885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0309 18:25:24.421866 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0309 18:25:24.421891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0309 18:25:24.421927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0309 18:25:24.422998 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:26:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: E0309 18:26:12.001893 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"acfa7603-a6f5-410a-9ddc-2890d44e3c69\\\",\\\"systemUUID\\\":\\\"ea3e2df3-251a-4cdb-9064-3c52ac509aba\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.006607 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.006673 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.006691 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.006718 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.006737 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:12Z","lastTransitionTime":"2026-03-09T18:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.006977 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3270571a-a484-4e66-8035-f43509b58add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kk7gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: E0309 18:26:12.020208 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"acfa7603-a6f5-410a-9ddc-2890d44e3c69\\\",\\\"systemUUID\\\":\\\"ea3e2df3-251a-4cdb-9064-3c52ac509aba\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.028481 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84199f52-999d-4a44-91c7-a343ba59b10d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b9gd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.028969 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.029004 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.029016 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.029033 4821 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.029050 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:12Z","lastTransitionTime":"2026-03-09T18:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:12 crc kubenswrapper[4821]: E0309 18:26:12.047902 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"acfa7603-a6f5-410a-9ddc-2890d44e3c69\\\",\\\"systemUUID\\\":\\\"ea3e2df3-251a-4cdb-9064-3c52ac509aba\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.052954 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.052989 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.052998 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.053012 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.053022 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:12Z","lastTransitionTime":"2026-03-09T18:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.053703 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e368ce-5f0d-4208-a1de-67d4ab591f82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bfdsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: E0309 18:26:12.069357 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\
\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":
485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"acfa7603-a6f5-410a-9ddc-2890d44e3c69\\\",\\\"sys
temUUID\\\":\\\"ea3e2df3-251a-4cdb-9064-3c52ac509aba\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.071694 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lw2hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a255bc9-2034-4a34-8240-f1fd42e808bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9r74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lw2hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.074397 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.074440 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.074457 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.074480 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.074498 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:12Z","lastTransitionTime":"2026-03-09T18:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.086779 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e896f92d-7d30-4f36-b892-5c8c9c792530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9tc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9tc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d6g54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: E0309 18:26:12.089868 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"acfa7603-a6f5-410a-9ddc-2890d44e3c69\\\",\\\"systemUUID\\\":\\\"ea3e2df3-251a-4cdb-9064-3c52ac509aba\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: E0309 18:26:12.090090 4821 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.092762 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.092831 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.092853 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.092883 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.092905 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:12Z","lastTransitionTime":"2026-03-09T18:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.103386 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.115361 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.125749 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.138777 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.148883 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.161684 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.171426 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9tvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b53a5b8b-3dab-4300-8b7b-c3df20eab3b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99m5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9tvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.182001 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mfdmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a085b570-506c-4b51-b0d1-4b9832e71c0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mfdmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.195280 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.195395 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.195412 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.195472 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.195490 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:12Z","lastTransitionTime":"2026-03-09T18:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.253129 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-lf7bd"] Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.253735 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:26:12 crc kubenswrapper[4821]: E0309 18:26:12.253819 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lf7bd" podUID="9ac2c88b-a0bc-482c-90fa-165d30f045e8" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.265899 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.275472 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.289524 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.297996 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.298063 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.298086 4821 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.298115 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.298136 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:12Z","lastTransitionTime":"2026-03-09T18:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.303375 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.317161 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.331699 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.341814 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9tvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b53a5b8b-3dab-4300-8b7b-c3df20eab3b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99m5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9tvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.351609 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mfdmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a085b570-506c-4b51-b0d1-4b9832e71c0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mfdmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.356935 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdrdp\" (UniqueName: 
\"kubernetes.io/projected/9ac2c88b-a0bc-482c-90fa-165d30f045e8-kube-api-access-mdrdp\") pod \"network-metrics-daemon-lf7bd\" (UID: \"9ac2c88b-a0bc-482c-90fa-165d30f045e8\") " pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.357044 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ac2c88b-a0bc-482c-90fa-165d30f045e8-metrics-certs\") pod \"network-metrics-daemon-lf7bd\" (UID: \"9ac2c88b-a0bc-482c-90fa-165d30f045e8\") " pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.361792 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lf7bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac2c88b-a0bc-482c-90fa-165d30f045e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdrdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdrdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lf7bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.378782 4821 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55fc2290-6300-4f7d-98d7-8abdde521a83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://230962e0998443268c5544ff6cd84de9737ca5052b545fefaf936a65035eac76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:25:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0309 18:25:24.128997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0309 18:25:24.129108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0309 18:25:24.129664 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2492006033/tls.crt::/tmp/serving-cert-2492006033/tls.key\\\\\\\"\\\\nI0309 18:25:24.405897 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0309 18:25:24.410101 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0309 18:25:24.410116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0309 18:25:24.410139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0309 18:25:24.410148 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0309 18:25:24.421807 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0309 18:25:24.421854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421867 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0309 18:25:24.421885 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0309 18:25:24.421866 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0309 18:25:24.421891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0309 18:25:24.421927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0309 18:25:24.422998 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:26:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.391705 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3270571a-a484-4e66-8035-f43509b58add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kk7gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.401479 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.401536 4821 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.401554 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.401578 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.401596 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:12Z","lastTransitionTime":"2026-03-09T18:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.408690 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84199f52-999d-4a44-91c7-a343ba59b10d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b9gd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.434172 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e368ce-5f0d-4208-a1de-67d4ab591f82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bfdsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.449839 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lw2hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a255bc9-2034-4a34-8240-f1fd42e808bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9r74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lw2hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.458151 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ac2c88b-a0bc-482c-90fa-165d30f045e8-metrics-certs\") pod \"network-metrics-daemon-lf7bd\" (UID: \"9ac2c88b-a0bc-482c-90fa-165d30f045e8\") " pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.458276 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdrdp\" (UniqueName: \"kubernetes.io/projected/9ac2c88b-a0bc-482c-90fa-165d30f045e8-kube-api-access-mdrdp\") pod \"network-metrics-daemon-lf7bd\" (UID: \"9ac2c88b-a0bc-482c-90fa-165d30f045e8\") " pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:26:12 crc kubenswrapper[4821]: E0309 18:26:12.458370 4821 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 09 18:26:12 crc kubenswrapper[4821]: E0309 18:26:12.458501 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ac2c88b-a0bc-482c-90fa-165d30f045e8-metrics-certs 
podName:9ac2c88b-a0bc-482c-90fa-165d30f045e8 nodeName:}" failed. No retries permitted until 2026-03-09 18:26:12.958475657 +0000 UTC m=+110.119851543 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9ac2c88b-a0bc-482c-90fa-165d30f045e8-metrics-certs") pod "network-metrics-daemon-lf7bd" (UID: "9ac2c88b-a0bc-482c-90fa-165d30f045e8") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.462280 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e896f92d-7d30-4f36-b892-5c8c9c792530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9tc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9tc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d6g54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.486455 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdrdp\" (UniqueName: \"kubernetes.io/projected/9ac2c88b-a0bc-482c-90fa-165d30f045e8-kube-api-access-mdrdp\") pod \"network-metrics-daemon-lf7bd\" (UID: \"9ac2c88b-a0bc-482c-90fa-165d30f045e8\") " pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.505359 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.505413 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.505430 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.505456 4821 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.505472 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:12Z","lastTransitionTime":"2026-03-09T18:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:12 crc kubenswrapper[4821]: E0309 18:26:12.553459 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:26:12 crc kubenswrapper[4821]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 09 18:26:12 crc kubenswrapper[4821]: apiVersion: v1 Mar 09 18:26:12 crc kubenswrapper[4821]: clusters: Mar 09 18:26:12 crc kubenswrapper[4821]: - cluster: Mar 09 18:26:12 crc kubenswrapper[4821]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 09 18:26:12 crc kubenswrapper[4821]: server: https://api-int.crc.testing:6443 Mar 09 18:26:12 crc kubenswrapper[4821]: name: default-cluster Mar 09 18:26:12 crc kubenswrapper[4821]: contexts: Mar 09 18:26:12 crc kubenswrapper[4821]: - context: Mar 09 18:26:12 crc kubenswrapper[4821]: cluster: default-cluster Mar 09 18:26:12 crc kubenswrapper[4821]: namespace: default Mar 09 18:26:12 crc kubenswrapper[4821]: user: default-auth Mar 09 18:26:12 crc kubenswrapper[4821]: name: default-context Mar 09 18:26:12 crc kubenswrapper[4821]: current-context: default-context Mar 09 18:26:12 crc kubenswrapper[4821]: kind: Config Mar 09 18:26:12 crc kubenswrapper[4821]: preferences: {} Mar 09 18:26:12 crc kubenswrapper[4821]: users: Mar 09 18:26:12 crc kubenswrapper[4821]: - name: 
default-auth Mar 09 18:26:12 crc kubenswrapper[4821]: user: Mar 09 18:26:12 crc kubenswrapper[4821]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 09 18:26:12 crc kubenswrapper[4821]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 09 18:26:12 crc kubenswrapper[4821]: EOF Mar 09 18:26:12 crc kubenswrapper[4821]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c9kmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-bfdsp_openshift-ovn-kubernetes(40e368ce-5f0d-4208-a1de-67d4ab591f82): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:26:12 crc kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:26:12 crc kubenswrapper[4821]: E0309 18:26:12.553861 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4xxjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-b9gd4_openshift-multus(84199f52-999d-4a44-91c7-a343ba59b10d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 09 18:26:12 crc kubenswrapper[4821]: E0309 18:26:12.555251 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" podUID="84199f52-999d-4a44-91c7-a343ba59b10d" Mar 09 18:26:12 crc kubenswrapper[4821]: E0309 18:26:12.555261 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.569491 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.609054 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.609121 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.609146 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.609175 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.609199 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:12Z","lastTransitionTime":"2026-03-09T18:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.711581 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.711635 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.711651 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.711674 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.711690 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:12Z","lastTransitionTime":"2026-03-09T18:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.814576 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.814609 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.814621 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.814638 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.814650 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:12Z","lastTransitionTime":"2026-03-09T18:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.916722 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.916788 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.916859 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.916887 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.916905 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:12Z","lastTransitionTime":"2026-03-09T18:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:12 crc kubenswrapper[4821]: I0309 18:26:12.964117 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ac2c88b-a0bc-482c-90fa-165d30f045e8-metrics-certs\") pod \"network-metrics-daemon-lf7bd\" (UID: \"9ac2c88b-a0bc-482c-90fa-165d30f045e8\") " pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:26:12 crc kubenswrapper[4821]: E0309 18:26:12.964410 4821 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 09 18:26:12 crc kubenswrapper[4821]: E0309 18:26:12.964587 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ac2c88b-a0bc-482c-90fa-165d30f045e8-metrics-certs podName:9ac2c88b-a0bc-482c-90fa-165d30f045e8 nodeName:}" failed. No retries permitted until 2026-03-09 18:26:13.964567577 +0000 UTC m=+111.125943443 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9ac2c88b-a0bc-482c-90fa-165d30f045e8-metrics-certs") pod "network-metrics-daemon-lf7bd" (UID: "9ac2c88b-a0bc-482c-90fa-165d30f045e8") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.019811 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.019864 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.019882 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.019907 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.019924 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:13Z","lastTransitionTime":"2026-03-09T18:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.122844 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.123149 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.123366 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.123482 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.123589 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:13Z","lastTransitionTime":"2026-03-09T18:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.226687 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.226750 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.226766 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.226788 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.226804 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:13Z","lastTransitionTime":"2026-03-09T18:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.329186 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.329265 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.329286 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.329310 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.329386 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:13Z","lastTransitionTime":"2026-03-09T18:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.432637 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.432677 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.432707 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.432725 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.432735 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:13Z","lastTransitionTime":"2026-03-09T18:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.535663 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.535741 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.535753 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.535771 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.535783 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:13Z","lastTransitionTime":"2026-03-09T18:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.551404 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.551462 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.551408 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.551542 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:26:13 crc kubenswrapper[4821]: E0309 18:26:13.551733 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 09 18:26:13 crc kubenswrapper[4821]: E0309 18:26:13.551829 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 09 18:26:13 crc kubenswrapper[4821]: E0309 18:26:13.551908 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lf7bd" podUID="9ac2c88b-a0bc-482c-90fa-165d30f045e8" Mar 09 18:26:13 crc kubenswrapper[4821]: E0309 18:26:13.552007 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.566982 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84199f52-999d-4a44-91c7-a343ba59b10d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b9gd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.594254 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e368ce-5f0d-4208-a1de-67d4ab591f82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bfdsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.613377 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55fc2290-6300-4f7d-98d7-8abdde521a83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://230962e0998443268c5544ff6cd84de9737ca5052b545fefaf936a65035eac76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:25:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0309 18:25:24.128997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0309 18:25:24.129108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0309 18:25:24.129664 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2492006033/tls.crt::/tmp/serving-cert-2492006033/tls.key\\\\\\\"\\\\nI0309 18:25:24.405897 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0309 18:25:24.410101 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0309 18:25:24.410116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0309 18:25:24.410139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0309 18:25:24.410148 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0309 18:25:24.421807 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0309 18:25:24.421854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421867 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0309 18:25:24.421885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0309 18:25:24.421866 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0309 18:25:24.421891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0309 18:25:24.421927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0309 18:25:24.422998 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:26:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.629498 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3270571a-a484-4e66-8035-f43509b58add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-kk7gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.638055 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.638133 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.638156 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.638190 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.638420 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:13Z","lastTransitionTime":"2026-03-09T18:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.648046 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lw2hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a255bc9-2034-4a34-8240-f1fd42e808bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9r74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lw2hk\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.660756 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e896f92d-7d30-4f36-b892-5c8c9c792530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9tc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9tc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d6g54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.672624 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.684385 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.694924 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.714750 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dab0b54-0d0b-436b-a566-bfca5bd198ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://522c0dddb01996f686347f24c7bb98c6d809b87a931937d84836209b48cc6dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff295a80771fb28c01986f0ef5e0b866a69db058c2174861cb493f7dc11113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceaf482fe2e2b73cc1d174411b0df7990a9e2bb1f3eff9adef94086b3eab6d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a831c318b5ca35517b0616bc5c6cf15592ee2a4e251dab6cc7886c27c7dd71bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c5d8edd2b525ecd0be6ed468e278fabd39079ad9af5041d4be213f7d39072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2864c08259f2e7e2766cd5b6e26d546a6f8443728a1cf323015aab34612b8fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2864c08259f2e7e2766cd5b6e26d546a6f8443728a1cf323015aab34612b8fcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276f57582635b405b3eec7b3a61b1761dcc3228b77ea4d33c281f5542437391c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276f57582635b405b3eec7b3a61b1761dcc3228b77ea4d33c281f5542437391c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28ab53e73f777f3d3e8fcdff88e66f8351b27db87e8448cbf05d0e247beb3dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ab53e73f777f3d3e8fcdff88e66f8351b27db87e8448cbf05d0e247beb3dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.731185 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.740666 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.740726 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.740744 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.740767 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.740783 4821 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:13Z","lastTransitionTime":"2026-03-09T18:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.742139 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9tvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b53a5b8b-3dab-4300-8b7b-c3df20eab3b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99m5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9tvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.753608 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mfdmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a085b570-506c-4b51-b0d1-4b9832e71c0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mfdmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.764196 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lf7bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac2c88b-a0bc-482c-90fa-165d30f045e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdrdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdrdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lf7bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.780191 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.795084 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.843629 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.843826 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.843909 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.843971 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.844031 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:13Z","lastTransitionTime":"2026-03-09T18:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.947361 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.947699 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.947818 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.947924 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.948036 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:13Z","lastTransitionTime":"2026-03-09T18:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Mar 09 18:26:13 crc kubenswrapper[4821]: I0309 18:26:13.975502 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ac2c88b-a0bc-482c-90fa-165d30f045e8-metrics-certs\") pod \"network-metrics-daemon-lf7bd\" (UID: \"9ac2c88b-a0bc-482c-90fa-165d30f045e8\") " pod="openshift-multus/network-metrics-daemon-lf7bd"
Mar 09 18:26:13 crc kubenswrapper[4821]: E0309 18:26:13.975667 4821 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 09 18:26:13 crc kubenswrapper[4821]: E0309 18:26:13.975741 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ac2c88b-a0bc-482c-90fa-165d30f045e8-metrics-certs podName:9ac2c88b-a0bc-482c-90fa-165d30f045e8 nodeName:}" failed. No retries permitted until 2026-03-09 18:26:15.975716299 +0000 UTC m=+113.137092185 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9ac2c88b-a0bc-482c-90fa-165d30f045e8-metrics-certs") pod "network-metrics-daemon-lf7bd" (UID: "9ac2c88b-a0bc-482c-90fa-165d30f045e8") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.051844 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.052115 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.052199 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.052305 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.052436 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:14Z","lastTransitionTime":"2026-03-09T18:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.155180 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.155224 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.155235 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.155248 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.155261 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:14Z","lastTransitionTime":"2026-03-09T18:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.258477 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.259275 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.259482 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.259633 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.259773 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:14Z","lastTransitionTime":"2026-03-09T18:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.362849 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.362925 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.362944 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.362971 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.362991 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:14Z","lastTransitionTime":"2026-03-09T18:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.466269 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.466378 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.466402 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.466431 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.466454 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:14Z","lastTransitionTime":"2026-03-09T18:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Mar 09 18:26:14 crc kubenswrapper[4821]: E0309 18:26:14.553763 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Mar 09 18:26:14 crc kubenswrapper[4821]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash
Mar 09 18:26:14 crc kubenswrapper[4821]: set -uo pipefail
Mar 09 18:26:14 crc kubenswrapper[4821]:
Mar 09 18:26:14 crc kubenswrapper[4821]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM
Mar 09 18:26:14 crc kubenswrapper[4821]:
Mar 09 18:26:14 crc kubenswrapper[4821]: OPENSHIFT_MARKER="openshift-generated-node-resolver"
Mar 09 18:26:14 crc kubenswrapper[4821]: HOSTS_FILE="/etc/hosts"
Mar 09 18:26:14 crc kubenswrapper[4821]: TEMP_FILE="/etc/hosts.tmp"
Mar 09 18:26:14 crc kubenswrapper[4821]:
Mar 09 18:26:14 crc kubenswrapper[4821]: IFS=', ' read -r -a services <<< "${SERVICES}"
Mar 09 18:26:14 crc kubenswrapper[4821]:
Mar 09 18:26:14 crc kubenswrapper[4821]: # Make a temporary file with the old hosts file's attributes.
Mar 09 18:26:14 crc kubenswrapper[4821]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then
Mar 09 18:26:14 crc kubenswrapper[4821]: echo "Failed to preserve hosts file. Exiting."
Mar 09 18:26:14 crc kubenswrapper[4821]: exit 1
Mar 09 18:26:14 crc kubenswrapper[4821]: fi
Mar 09 18:26:14 crc kubenswrapper[4821]:
Mar 09 18:26:14 crc kubenswrapper[4821]: while true; do
Mar 09 18:26:14 crc kubenswrapper[4821]: declare -A svc_ips
Mar 09 18:26:14 crc kubenswrapper[4821]: for svc in "${services[@]}"; do
Mar 09 18:26:14 crc kubenswrapper[4821]: # Fetch service IP from cluster dns if present. We make several tries
Mar 09 18:26:14 crc kubenswrapper[4821]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones
Mar 09 18:26:14 crc kubenswrapper[4821]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not
Mar 09 18:26:14 crc kubenswrapper[4821]: # support UDP loadbalancers and require reaching DNS through TCP.
Mar 09 18:26:14 crc kubenswrapper[4821]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Mar 09 18:26:14 crc kubenswrapper[4821]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Mar 09 18:26:14 crc kubenswrapper[4821]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Mar 09 18:26:14 crc kubenswrapper[4821]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"')
Mar 09 18:26:14 crc kubenswrapper[4821]: for i in ${!cmds[*]}
Mar 09 18:26:14 crc kubenswrapper[4821]: do
Mar 09 18:26:14 crc kubenswrapper[4821]: ips=($(eval "${cmds[i]}"))
Mar 09 18:26:14 crc kubenswrapper[4821]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then
Mar 09 18:26:14 crc kubenswrapper[4821]: svc_ips["${svc}"]="${ips[@]}"
Mar 09 18:26:14 crc kubenswrapper[4821]: break
Mar 09 18:26:14 crc kubenswrapper[4821]: fi
Mar 09 18:26:14 crc kubenswrapper[4821]: done
Mar 09 18:26:14 crc kubenswrapper[4821]: done
Mar 09 18:26:14 crc kubenswrapper[4821]:
Mar 09 18:26:14 crc kubenswrapper[4821]: # Update /etc/hosts only if we get valid service IPs
Mar 09 18:26:14 crc kubenswrapper[4821]: # We will not update /etc/hosts when there is coredns service outage or api unavailability
Mar 09 18:26:14 crc kubenswrapper[4821]: # Stale entries could exist in /etc/hosts if the service is deleted
Mar 09 18:26:14 crc kubenswrapper[4821]: if [[ -n "${svc_ips[*]-}" ]]; then
Mar 09 18:26:14 crc kubenswrapper[4821]: # Build a new hosts file from /etc/hosts with our custom entries filtered out
Mar 09 18:26:14 crc kubenswrapper[4821]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then
Mar 09 18:26:14 crc kubenswrapper[4821]: # Only continue rebuilding the hosts entries if its original content is preserved
Mar 09 18:26:14 crc kubenswrapper[4821]: sleep 60 & wait
Mar 09 18:26:14 crc kubenswrapper[4821]: continue
Mar 09 18:26:14 crc kubenswrapper[4821]: fi
Mar 09 18:26:14 crc kubenswrapper[4821]:
Mar 09 18:26:14 crc kubenswrapper[4821]: # Append resolver entries for services
Mar 09 18:26:14 crc kubenswrapper[4821]: rc=0
Mar 09 18:26:14 crc kubenswrapper[4821]: for svc in "${!svc_ips[@]}"; do
Mar 09 18:26:14 crc kubenswrapper[4821]: for ip in ${svc_ips[${svc}]}; do
Mar 09 18:26:14 crc kubenswrapper[4821]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$?
Mar 09 18:26:14 crc kubenswrapper[4821]: done
Mar 09 18:26:14 crc kubenswrapper[4821]: done
Mar 09 18:26:14 crc kubenswrapper[4821]: if [[ $rc -ne 0 ]]; then
Mar 09 18:26:14 crc kubenswrapper[4821]: sleep 60 & wait
Mar 09 18:26:14 crc kubenswrapper[4821]: continue
Mar 09 18:26:14 crc kubenswrapper[4821]: fi
Mar 09 18:26:14 crc kubenswrapper[4821]:
Mar 09 18:26:14 crc kubenswrapper[4821]:
Mar 09 18:26:14 crc kubenswrapper[4821]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior
Mar 09 18:26:14 crc kubenswrapper[4821]: # Replace /etc/hosts with our modified version if needed
Mar 09 18:26:14 crc kubenswrapper[4821]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}"
Mar 09 18:26:14 crc kubenswrapper[4821]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn
Mar 09 18:26:14 crc kubenswrapper[4821]: fi
Mar 09 18:26:14 crc kubenswrapper[4821]: sleep 60 & wait
Mar 09 18:26:14 crc kubenswrapper[4821]: unset svc_ips
Mar 09 18:26:14 crc kubenswrapper[4821]: done
Mar 09 18:26:14 crc kubenswrapper[4821]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-99m5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-n9tvt_openshift-dns(b53a5b8b-3dab-4300-8b7b-c3df20eab3b7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Mar 09 18:26:14 crc kubenswrapper[4821]: > logger="UnhandledError"
Mar 09 18:26:14 crc kubenswrapper[4821]: E0309 18:26:14.555017 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-n9tvt"
podUID="b53a5b8b-3dab-4300-8b7b-c3df20eab3b7"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.569148 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.569194 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.569207 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.569229 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.569242 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:14Z","lastTransitionTime":"2026-03-09T18:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.672663 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.672935 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.673081 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.673229 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.673388 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:14Z","lastTransitionTime":"2026-03-09T18:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.776430 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.776492 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.776510 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.776535 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.776555 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:14Z","lastTransitionTime":"2026-03-09T18:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.878775 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.878821 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.878834 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.878852 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.878865 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:14Z","lastTransitionTime":"2026-03-09T18:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.981311 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.981388 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.981405 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.981427 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:26:14 crc kubenswrapper[4821]: I0309 18:26:14.981441 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:14Z","lastTransitionTime":"2026-03-09T18:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.084249 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.084651 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.084883 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.085079 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.085230 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:15Z","lastTransitionTime":"2026-03-09T18:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.188080 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.188622 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.188855 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.189051 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.189272 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:15Z","lastTransitionTime":"2026-03-09T18:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.292217 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.292283 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.292304 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.292390 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.292417 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:15Z","lastTransitionTime":"2026-03-09T18:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.391860 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:26:15 crc kubenswrapper[4821]: E0309 18:26:15.392125 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-09 18:26:47.392088966 +0000 UTC m=+144.553464862 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.395839 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.395894 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.395911 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.395936 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.395953 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:15Z","lastTransitionTime":"2026-03-09T18:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.493845 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.493978 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:26:15 crc kubenswrapper[4821]: E0309 18:26:15.494041 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 09 18:26:15 crc kubenswrapper[4821]: E0309 18:26:15.494077 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 09 18:26:15 crc kubenswrapper[4821]: E0309 18:26:15.494102 4821 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 09 18:26:15 crc kubenswrapper[4821]: E0309 18:26:15.494174 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-09 18:26:47.494151006 +0000 UTC m=+144.655526902 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 09 18:26:15 crc kubenswrapper[4821]: E0309 18:26:15.494177 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 09 18:26:15 crc kubenswrapper[4821]: E0309 18:26:15.494203 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 09 18:26:15 crc kubenswrapper[4821]: E0309 18:26:15.494217 4821 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 09 18:26:15 crc kubenswrapper[4821]: E0309 18:26:15.494254 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-09 18:26:47.494241619 +0000 UTC m=+144.655617505 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.494047 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:26:15 crc kubenswrapper[4821]: E0309 18:26:15.494271 4821 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.494356 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:26:15 crc kubenswrapper[4821]: E0309 18:26:15.494402 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-09 18:26:47.494377573 +0000 UTC m=+144.655753459 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 09 18:26:15 crc kubenswrapper[4821]: E0309 18:26:15.494452 4821 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 09 18:26:15 crc kubenswrapper[4821]: E0309 18:26:15.494497 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-09 18:26:47.494484726 +0000 UTC m=+144.655860612 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.498639 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.498820 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.498843 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.498865 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:15 crc 
kubenswrapper[4821]: I0309 18:26:15.498882 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:15Z","lastTransitionTime":"2026-03-09T18:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.551669 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.551707 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:26:15 crc kubenswrapper[4821]: E0309 18:26:15.551839 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.551859 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.551883 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:26:15 crc kubenswrapper[4821]: E0309 18:26:15.552024 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lf7bd" podUID="9ac2c88b-a0bc-482c-90fa-165d30f045e8" Mar 09 18:26:15 crc kubenswrapper[4821]: E0309 18:26:15.552123 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 09 18:26:15 crc kubenswrapper[4821]: E0309 18:26:15.552203 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.601976 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.602041 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.602059 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.602080 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.602098 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:15Z","lastTransitionTime":"2026-03-09T18:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.706029 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.706099 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.706116 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.706142 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.706162 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:15Z","lastTransitionTime":"2026-03-09T18:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.808927 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.809485 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.809590 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.809685 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.809771 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:15Z","lastTransitionTime":"2026-03-09T18:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.912796 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.912853 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.912871 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.912898 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:15 crc kubenswrapper[4821]: I0309 18:26:15.912917 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:15Z","lastTransitionTime":"2026-03-09T18:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.000620 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ac2c88b-a0bc-482c-90fa-165d30f045e8-metrics-certs\") pod \"network-metrics-daemon-lf7bd\" (UID: \"9ac2c88b-a0bc-482c-90fa-165d30f045e8\") " pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:26:16 crc kubenswrapper[4821]: E0309 18:26:16.000855 4821 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 09 18:26:16 crc kubenswrapper[4821]: E0309 18:26:16.000943 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ac2c88b-a0bc-482c-90fa-165d30f045e8-metrics-certs podName:9ac2c88b-a0bc-482c-90fa-165d30f045e8 nodeName:}" failed. No retries permitted until 2026-03-09 18:26:20.000922227 +0000 UTC m=+117.162298083 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9ac2c88b-a0bc-482c-90fa-165d30f045e8-metrics-certs") pod "network-metrics-daemon-lf7bd" (UID: "9ac2c88b-a0bc-482c-90fa-165d30f045e8") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.016380 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.016461 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.016483 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.016513 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.016537 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:16Z","lastTransitionTime":"2026-03-09T18:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.119684 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.119789 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.119812 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.119846 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.119868 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:16Z","lastTransitionTime":"2026-03-09T18:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.223185 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.223281 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.223302 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.223372 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.223392 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:16Z","lastTransitionTime":"2026-03-09T18:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.326282 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.326450 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.326482 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.326514 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.326537 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:16Z","lastTransitionTime":"2026-03-09T18:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.428977 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.429030 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.429048 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.429074 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.429094 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:16Z","lastTransitionTime":"2026-03-09T18:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.531870 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.531951 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.531971 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.532001 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.532025 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:16Z","lastTransitionTime":"2026-03-09T18:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.634807 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.634844 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.634854 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.634867 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.634876 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:16Z","lastTransitionTime":"2026-03-09T18:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.737395 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.737456 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.737473 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.737498 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.737514 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:16Z","lastTransitionTime":"2026-03-09T18:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.840090 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.840150 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.840174 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.840203 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.840220 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:16Z","lastTransitionTime":"2026-03-09T18:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.944062 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.944110 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.944126 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.944146 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:16 crc kubenswrapper[4821]: I0309 18:26:16.944160 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:16Z","lastTransitionTime":"2026-03-09T18:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.046358 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.046431 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.046457 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.046488 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.046513 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:17Z","lastTransitionTime":"2026-03-09T18:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.148692 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.148952 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.149039 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.149127 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.149201 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:17Z","lastTransitionTime":"2026-03-09T18:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.252481 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.252542 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.252560 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.252639 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.252658 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:17Z","lastTransitionTime":"2026-03-09T18:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.355214 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.355265 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.355284 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.355306 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.355357 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:17Z","lastTransitionTime":"2026-03-09T18:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.457723 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.458143 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.458310 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.458527 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.458668 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:17Z","lastTransitionTime":"2026-03-09T18:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.551155 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.551307 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.551305 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:26:17 crc kubenswrapper[4821]: E0309 18:26:17.551930 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.551565 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:26:17 crc kubenswrapper[4821]: E0309 18:26:17.551996 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 09 18:26:17 crc kubenswrapper[4821]: E0309 18:26:17.552063 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 09 18:26:17 crc kubenswrapper[4821]: E0309 18:26:17.551765 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lf7bd" podUID="9ac2c88b-a0bc-482c-90fa-165d30f045e8" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.561059 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.561096 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.561108 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.561127 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.561143 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:17Z","lastTransitionTime":"2026-03-09T18:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.565538 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.663914 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.664202 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.664268 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.664379 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.664471 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:17Z","lastTransitionTime":"2026-03-09T18:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.766450 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.766504 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.766523 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.766545 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.766562 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:17Z","lastTransitionTime":"2026-03-09T18:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.869439 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.869496 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.869521 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.869545 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.869562 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:17Z","lastTransitionTime":"2026-03-09T18:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.976281 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.976390 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.976418 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.976465 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:17 crc kubenswrapper[4821]: I0309 18:26:17.976496 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:17Z","lastTransitionTime":"2026-03-09T18:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.079961 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.080025 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.080041 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.080065 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.080083 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:18Z","lastTransitionTime":"2026-03-09T18:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.183834 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.183902 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.183920 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.183945 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.183964 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:18Z","lastTransitionTime":"2026-03-09T18:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.287359 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.287480 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.287499 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.287523 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.287540 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:18Z","lastTransitionTime":"2026-03-09T18:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.390010 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.390078 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.390097 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.390121 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.390138 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:18Z","lastTransitionTime":"2026-03-09T18:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.493580 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.493633 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.493651 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.493676 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.493694 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:18Z","lastTransitionTime":"2026-03-09T18:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.597244 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.597305 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.597351 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.597378 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.597396 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:18Z","lastTransitionTime":"2026-03-09T18:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.605416 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.623271 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lw2hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a255bc9-2034-4a34-8240-f1fd42e808bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot 
construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9r74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lw2hk\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.639654 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e896f92d-7d30-4f36-b892-5c8c9c792530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9tc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9tc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d6g54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.655852 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.670070 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.685769 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7db9fe72-f0df-4db6-9991-0384645cf456\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee773d19ae0091661f56157410437678fcb5f7213b187831146af99d8d76b555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://339c14ffd3fdc5b3377a73069f4ada2bbb0470002cb2adbe540e5c52449e7f5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:24:56Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0309 18:24:25.811998 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0309 18:24:25.817356 1 observer_polling.go:159] Starting file observer\\\\nI0309 18:24:25.865522 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0309 18:24:25.872931 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0309 18:24:56.217530 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:55Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e1fee3fa3c896bb4fd7fd76b19cbc801f2d24b463344591a55ed9c940f0d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f06cb76ad82efe140b94c5f83a654bdc44720a48914f1e5de02e29f6f39a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d992ce0b12a7e1c7641f515c07c7748e06cac1d81d3f351c6cf892f4c1ea78e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.701895 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.701932 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:18 crc 
kubenswrapper[4821]: I0309 18:26:18.701943 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.701961 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.701973 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:18Z","lastTransitionTime":"2026-03-09T18:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.713768 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dab0b54-0d0b-436b-a566-bfca5bd198ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://522c0dddb01996f686347f24c7bb98c6d809b87a931937d84836209b48cc6dc7\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff295a80771fb28c01986f0ef5e0b866a69db058c2174861cb493f7dc11113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceaf482fe2e2b73cc1d174411b0df7990a9e2bb1f3eff9adef94086b3eab6d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a831c318b5ca35517b0616bc5c6cf15592ee2a4e251dab6cc7886c27c7dd71bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c5d8edd2b525ecd0be6ed468e278fabd39079ad9af5041d4be213f7d39072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2864c08259f2e7e2766cd5b6e26d546a6f8443728a1cf323015aab34612b8fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2864c08259f2e7e2766cd5b6e26d546a6f8443728a1cf323015aab34612b8fcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276f57582635b405b3eec7b3a61b1761dcc3228b77ea4d33c281f5542437391c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276f57582635b405b3eec7b3a61b1761dcc3228b77ea4d33c281f5542437391c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:2
5Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28ab53e73f777f3d3e8fcdff88e66f8351b27db87e8448cbf05d0e247beb3dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ab53e73f777f3d3e8fcdff88e66f8351b27db87e8448cbf05d0e247beb3dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.732097 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.748548 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.759274 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mfdmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a085b570-506c-4b51-b0d1-4b9832e71c0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mfdmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.772030 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lf7bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac2c88b-a0bc-482c-90fa-165d30f045e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdrdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdrdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lf7bd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.786708 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.801602 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.806037 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.806129 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.806147 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 
18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.806203 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.806221 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:18Z","lastTransitionTime":"2026-03-09T18:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.813383 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9tvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b53a5b8b-3dab-4300-8b7b-c3df20eab3b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99m5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9tvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.840827 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e368ce-5f0d-4208-a1de-67d4ab591f82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bfdsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.859206 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55fc2290-6300-4f7d-98d7-8abdde521a83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://230962e0998443268c5544ff6cd84de9737ca5052b545fefaf936a65035eac76\\\",\\\"image\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:25:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0309 18:25:24.128997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0309 18:25:24.129108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0309 18:25:24.129664 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2492006033/tls.crt::/tmp/serving-cert-2492006033/tls.key\\\\\\\"\\\\nI0309 18:25:24.405897 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0309 18:25:24.410101 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0309 18:25:24.410116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0309 18:25:24.410139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0309 18:25:24.410148 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0309 18:25:24.421807 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0309 18:25:24.421854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421867 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421878 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0309 18:25:24.421885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0309 18:25:24.421866 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0309 18:25:24.421891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0309 18:25:24.421927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0309 18:25:24.422998 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:26:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.872090 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3270571a-a484-4e66-8035-f43509b58add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kk7gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.891462 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84199f52-999d-4a44-91c7-a343ba59b10d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b9gd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.909408 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.909614 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.909746 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.909865 4821 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Mar 09 18:26:18 crc kubenswrapper[4821]: I0309 18:26:18.910104 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:18Z","lastTransitionTime":"2026-03-09T18:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.013124 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.013225 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.013245 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.013296 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.013314 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:19Z","lastTransitionTime":"2026-03-09T18:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.116823 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.116885 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.116902 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.116925 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.116942 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:19Z","lastTransitionTime":"2026-03-09T18:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.220425 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.220515 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.220544 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.220579 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.220610 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:19Z","lastTransitionTime":"2026-03-09T18:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.293649 4821 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.324455 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.324754 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.324957 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.325114 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.325257 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:19Z","lastTransitionTime":"2026-03-09T18:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.428504 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.429547 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.429587 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.429632 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.429672 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:19Z","lastTransitionTime":"2026-03-09T18:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.533212 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.533282 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.533305 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.533381 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.533412 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:19Z","lastTransitionTime":"2026-03-09T18:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.550710 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:26:19 crc kubenswrapper[4821]: E0309 18:26:19.550818 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.550879 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.550902 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.550912 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:26:19 crc kubenswrapper[4821]: E0309 18:26:19.551044 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lf7bd" podUID="9ac2c88b-a0bc-482c-90fa-165d30f045e8" Mar 09 18:26:19 crc kubenswrapper[4821]: E0309 18:26:19.551162 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 09 18:26:19 crc kubenswrapper[4821]: E0309 18:26:19.551215 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.635488 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.635553 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.635574 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.635602 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.635623 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:19Z","lastTransitionTime":"2026-03-09T18:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.738746 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.738800 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.738817 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.738840 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.738857 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:19Z","lastTransitionTime":"2026-03-09T18:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.841624 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.841680 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.841696 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.841719 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.841736 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:19Z","lastTransitionTime":"2026-03-09T18:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.945147 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.945212 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.945230 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.945254 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:19 crc kubenswrapper[4821]: I0309 18:26:19.945271 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:19Z","lastTransitionTime":"2026-03-09T18:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.047622 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.047701 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.047726 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.047753 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.047771 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:20Z","lastTransitionTime":"2026-03-09T18:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.050456 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ac2c88b-a0bc-482c-90fa-165d30f045e8-metrics-certs\") pod \"network-metrics-daemon-lf7bd\" (UID: \"9ac2c88b-a0bc-482c-90fa-165d30f045e8\") " pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:26:20 crc kubenswrapper[4821]: E0309 18:26:20.050590 4821 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 09 18:26:20 crc kubenswrapper[4821]: E0309 18:26:20.050640 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ac2c88b-a0bc-482c-90fa-165d30f045e8-metrics-certs podName:9ac2c88b-a0bc-482c-90fa-165d30f045e8 nodeName:}" failed. No retries permitted until 2026-03-09 18:26:28.050627561 +0000 UTC m=+125.212003407 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9ac2c88b-a0bc-482c-90fa-165d30f045e8-metrics-certs") pod "network-metrics-daemon-lf7bd" (UID: "9ac2c88b-a0bc-482c-90fa-165d30f045e8") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.151516 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.151562 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.151574 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.151594 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.151607 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:20Z","lastTransitionTime":"2026-03-09T18:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.254599 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.254675 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.254694 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.254722 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.254742 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:20Z","lastTransitionTime":"2026-03-09T18:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.357031 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.357095 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.357112 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.357138 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.357156 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:20Z","lastTransitionTime":"2026-03-09T18:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.460876 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.460955 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.460978 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.461004 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.461021 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:20Z","lastTransitionTime":"2026-03-09T18:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.564275 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.564458 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.564478 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.564502 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.564519 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:20Z","lastTransitionTime":"2026-03-09T18:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.667250 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.667365 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.667393 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.667421 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.667438 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:20Z","lastTransitionTime":"2026-03-09T18:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.770377 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.770453 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.770470 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.771083 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.771219 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:20Z","lastTransitionTime":"2026-03-09T18:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.875048 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.875112 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.875129 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.875153 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.875173 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:20Z","lastTransitionTime":"2026-03-09T18:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.978510 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.978564 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.978581 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.978604 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:20 crc kubenswrapper[4821]: I0309 18:26:20.978620 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:20Z","lastTransitionTime":"2026-03-09T18:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.081849 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.081939 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.081962 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.081992 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.082014 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:21Z","lastTransitionTime":"2026-03-09T18:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.185129 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.185198 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.185215 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.185238 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.185254 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:21Z","lastTransitionTime":"2026-03-09T18:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.287760 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.287839 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.287864 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.287894 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.287915 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:21Z","lastTransitionTime":"2026-03-09T18:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.391631 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.391698 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.391723 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.391749 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.391770 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:21Z","lastTransitionTime":"2026-03-09T18:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.494377 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.495257 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.495553 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.495781 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.495997 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:21Z","lastTransitionTime":"2026-03-09T18:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.550962 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.551128 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.551354 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:26:21 crc kubenswrapper[4821]: E0309 18:26:21.551771 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.551639 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:26:21 crc kubenswrapper[4821]: E0309 18:26:21.551559 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lf7bd" podUID="9ac2c88b-a0bc-482c-90fa-165d30f045e8" Mar 09 18:26:21 crc kubenswrapper[4821]: E0309 18:26:21.552573 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 09 18:26:21 crc kubenswrapper[4821]: E0309 18:26:21.552757 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 09 18:26:21 crc kubenswrapper[4821]: E0309 18:26:21.554026 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 09 18:26:21 crc kubenswrapper[4821]: E0309 18:26:21.555078 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:26:21 crc kubenswrapper[4821]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Mar 09 18:26:21 crc kubenswrapper[4821]: while [ true ]; Mar 09 18:26:21 crc kubenswrapper[4821]: do Mar 09 18:26:21 crc kubenswrapper[4821]: for f in $(ls /tmp/serviceca); do Mar 09 18:26:21 crc 
kubenswrapper[4821]: echo $f Mar 09 18:26:21 crc kubenswrapper[4821]: ca_file_path="/tmp/serviceca/${f}" Mar 09 18:26:21 crc kubenswrapper[4821]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Mar 09 18:26:21 crc kubenswrapper[4821]: reg_dir_path="/etc/docker/certs.d/${f}" Mar 09 18:26:21 crc kubenswrapper[4821]: if [ -e "${reg_dir_path}" ]; then Mar 09 18:26:21 crc kubenswrapper[4821]: cp -u $ca_file_path $reg_dir_path/ca.crt Mar 09 18:26:21 crc kubenswrapper[4821]: else Mar 09 18:26:21 crc kubenswrapper[4821]: mkdir $reg_dir_path Mar 09 18:26:21 crc kubenswrapper[4821]: cp $ca_file_path $reg_dir_path/ca.crt Mar 09 18:26:21 crc kubenswrapper[4821]: fi Mar 09 18:26:21 crc kubenswrapper[4821]: done Mar 09 18:26:21 crc kubenswrapper[4821]: for d in $(ls /etc/docker/certs.d); do Mar 09 18:26:21 crc kubenswrapper[4821]: echo $d Mar 09 18:26:21 crc kubenswrapper[4821]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Mar 09 18:26:21 crc kubenswrapper[4821]: reg_conf_path="/tmp/serviceca/${dp}" Mar 09 18:26:21 crc kubenswrapper[4821]: if [ ! 
-e "${reg_conf_path}" ]; then Mar 09 18:26:21 crc kubenswrapper[4821]: rm -rf /etc/docker/certs.d/$d Mar 09 18:26:21 crc kubenswrapper[4821]: fi Mar 09 18:26:21 crc kubenswrapper[4821]: done Mar 09 18:26:21 crc kubenswrapper[4821]: sleep 60 & wait ${!} Mar 09 18:26:21 crc kubenswrapper[4821]: done Mar 09 18:26:21 crc kubenswrapper[4821]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mzff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-mfdmq_openshift-image-registry(a085b570-506c-4b51-b0d1-4b9832e71c0f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:26:21 crc kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:26:21 crc kubenswrapper[4821]: E0309 18:26:21.555155 4821 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 09 18:26:21 crc kubenswrapper[4821]: E0309 18:26:21.556178 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-mfdmq" podUID="a085b570-506c-4b51-b0d1-4b9832e71c0f" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.599690 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.599779 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.599802 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.599834 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.599861 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:21Z","lastTransitionTime":"2026-03-09T18:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.703230 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.703300 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.703362 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.703395 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.703415 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:21Z","lastTransitionTime":"2026-03-09T18:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.806123 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.806196 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.806214 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.806239 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.806256 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:21Z","lastTransitionTime":"2026-03-09T18:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.909698 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.909964 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.910103 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.910253 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:21 crc kubenswrapper[4821]: I0309 18:26:21.910467 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:21Z","lastTransitionTime":"2026-03-09T18:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.012974 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.013042 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.013191 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.013221 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.013239 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:22Z","lastTransitionTime":"2026-03-09T18:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.115822 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.116206 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.116459 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.116716 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.116877 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:22Z","lastTransitionTime":"2026-03-09T18:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.219576 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.219932 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.220085 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.220237 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.220408 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:22Z","lastTransitionTime":"2026-03-09T18:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.324361 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.324420 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.324445 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.324476 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.324501 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:22Z","lastTransitionTime":"2026-03-09T18:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.427555 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.427621 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.427642 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.427669 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.427691 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:22Z","lastTransitionTime":"2026-03-09T18:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.457392 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.457634 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.457801 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.457955 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.458103 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:22Z","lastTransitionTime":"2026-03-09T18:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:22 crc kubenswrapper[4821]: E0309 18:26:22.473520 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"acfa7603-a6f5-410a-9ddc-2890d44e3c69\\\",\\\"systemUUID\\\":\\\"ea3e2df3-251a-4cdb-9064-3c52ac509aba\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.478578 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.478726 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.478818 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.478919 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.479011 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:22Z","lastTransitionTime":"2026-03-09T18:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:22 crc kubenswrapper[4821]: E0309 18:26:22.492720 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"acfa7603-a6f5-410a-9ddc-2890d44e3c69\\\",\\\"systemUUID\\\":\\\"ea3e2df3-251a-4cdb-9064-3c52ac509aba\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.497485 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.497551 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.497570 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.497594 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.497614 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:22Z","lastTransitionTime":"2026-03-09T18:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:22 crc kubenswrapper[4821]: E0309 18:26:22.510212 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"acfa7603-a6f5-410a-9ddc-2890d44e3c69\\\",\\\"systemUUID\\\":\\\"ea3e2df3-251a-4cdb-9064-3c52ac509aba\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.515178 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.515470 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.515619 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.515760 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.515879 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:22Z","lastTransitionTime":"2026-03-09T18:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:22 crc kubenswrapper[4821]: E0309 18:26:22.531761 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"acfa7603-a6f5-410a-9ddc-2890d44e3c69\\\",\\\"systemUUID\\\":\\\"ea3e2df3-251a-4cdb-9064-3c52ac509aba\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.536665 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.536871 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.537018 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.537283 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.537504 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:22Z","lastTransitionTime":"2026-03-09T18:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:22 crc kubenswrapper[4821]: E0309 18:26:22.548002 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"acfa7603-a6f5-410a-9ddc-2890d44e3c69\\\",\\\"systemUUID\\\":\\\"ea3e2df3-251a-4cdb-9064-3c52ac509aba\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:22 crc kubenswrapper[4821]: E0309 18:26:22.548459 4821 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.550079 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.550122 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.550139 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.550163 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.550182 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:22Z","lastTransitionTime":"2026-03-09T18:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:22 crc kubenswrapper[4821]: E0309 18:26:22.552494 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:26:22 crc kubenswrapper[4821]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 09 18:26:22 crc kubenswrapper[4821]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 09 18:26:22 crc kubenswrapper[4821]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z9r74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-lw2hk_openshift-multus(1a255bc9-2034-4a34-8240-f1fd42e808bd): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:26:22 crc kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:26:22 crc kubenswrapper[4821]: E0309 18:26:22.554064 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-lw2hk" podUID="1a255bc9-2034-4a34-8240-f1fd42e808bd" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.651995 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.652365 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.652502 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.652668 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.652841 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:22Z","lastTransitionTime":"2026-03-09T18:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.755494 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.755806 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.756010 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.756240 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.756501 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:22Z","lastTransitionTime":"2026-03-09T18:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.859761 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.859828 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.859845 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.859872 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.859896 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:22Z","lastTransitionTime":"2026-03-09T18:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.962653 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.962997 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.963156 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.963400 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:22 crc kubenswrapper[4821]: I0309 18:26:22.963620 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:22Z","lastTransitionTime":"2026-03-09T18:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.067225 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.067559 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.067967 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.068183 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.068469 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:23Z","lastTransitionTime":"2026-03-09T18:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.171523 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.171589 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.171606 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.171650 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.171668 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:23Z","lastTransitionTime":"2026-03-09T18:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.274447 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.274503 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.274514 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.274532 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.274545 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:23Z","lastTransitionTime":"2026-03-09T18:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.377476 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.377528 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.377548 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.377572 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.377588 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:23Z","lastTransitionTime":"2026-03-09T18:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.480459 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.480892 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.481274 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.481696 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.482010 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:23Z","lastTransitionTime":"2026-03-09T18:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.578426 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.578480 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:26:23 crc kubenswrapper[4821]: E0309 18:26:23.578558 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 09 18:26:23 crc kubenswrapper[4821]: E0309 18:26:23.578707 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.578765 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.578781 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:26:23 crc kubenswrapper[4821]: E0309 18:26:23.579035 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lf7bd" podUID="9ac2c88b-a0bc-482c-90fa-165d30f045e8" Mar 09 18:26:23 crc kubenswrapper[4821]: E0309 18:26:23.579212 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 09 18:26:23 crc kubenswrapper[4821]: E0309 18:26:23.580367 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:26:23 crc kubenswrapper[4821]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 09 18:26:23 crc kubenswrapper[4821]: set -o allexport Mar 09 18:26:23 crc kubenswrapper[4821]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 09 18:26:23 crc kubenswrapper[4821]: source /etc/kubernetes/apiserver-url.env Mar 09 18:26:23 crc kubenswrapper[4821]: else Mar 09 18:26:23 crc kubenswrapper[4821]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 09 18:26:23 crc kubenswrapper[4821]: exit 1 Mar 09 18:26:23 crc kubenswrapper[4821]: fi Mar 09 18:26:23 crc kubenswrapper[4821]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 09 18:26:23 crc kubenswrapper[4821]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:26:23 crc kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:26:23 crc kubenswrapper[4821]: E0309 18:26:23.581533 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 09 18:26:23 crc kubenswrapper[4821]: E0309 18:26:23.582934 4821 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.600221 4821 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.632683 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:23 crc kubenswrapper[4821]: E0309 18:26:23.645924 4821 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.651609 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.666370 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7db9fe72-f0df-4db6-9991-0384645cf456\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee773d19ae0091661f56157410437678fcb5f7213b187831146af99d8d76b555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://339c14ffd3fdc5b3377a73069f4ada2bbb0470002cb2adbe540e5c52449e7f5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:24:56Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0309 18:24:25.811998 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0309 18:24:25.817356 1 observer_polling.go:159] Starting file observer\\\\nI0309 18:24:25.865522 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0309 18:24:25.872931 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0309 18:24:56.217530 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:55Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e1fee3fa3c896bb4fd7fd76b19cbc801f2d24b463344591a55ed9c940f0d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f06cb76ad82efe140b94c5f83a654bdc44720a48914f1e5de02e29f6f39a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d992ce0b12a7e1c7641f515c07c7748e06cac1d81d3f351c6cf892f4c1ea78e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.690199 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dab0b54-0d0b-436b-a566-bfca5bd198ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://522c0dddb01996f686347f24c7bb98c6d809b87a931937d84836209b48cc6dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff295a80771fb28c01986f0ef5e0b866a69db058c2174861cb493f7dc11113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceaf482fe2e2b73cc1d174411b0df7990a9e2bb1f3eff9adef94086b3eab6d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a831c318b5ca35517b0616bc5c6cf15592ee2a4e251dab6cc7886c27c7dd71bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c5d8edd2b525ecd0be6ed468e278fabd39079ad9af5041d4be213f7d39072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2864c08259f2e7e2766cd5b6e26d546a6f8443728a1cf323015aab34612b8fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2864c08259f2e7e2766cd5b6e26d546a6f8443728a1cf323015aab34612b8fcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276f57582635b405b3eec7b3a61b1761dcc3228b77ea4d33c281f5542437391c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276f57582635b405b3eec7b3a61b1761dcc3228b77ea4d33c281f5542437391c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28ab53e73f777f3d3e8fcdff88e66f8351b27db87e8448cbf05d0e247beb3dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ab53e73f777f3d3e8fcdff88e66f8351b27db87e8448cbf05d0e247beb3dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.701651 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.709177 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9tvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b53a5b8b-3dab-4300-8b7b-c3df20eab3b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99m5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9tvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.718467 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mfdmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a085b570-506c-4b51-b0d1-4b9832e71c0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mfdmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.727809 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lf7bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac2c88b-a0bc-482c-90fa-165d30f045e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdrdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdrdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lf7bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.736963 4821 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.751853 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.767199 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84199f52-999d-4a44-91c7-a343ba59b10d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b9gd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.791102 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e368ce-5f0d-4208-a1de-67d4ab591f82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bfdsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.805011 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55fc2290-6300-4f7d-98d7-8abdde521a83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://230962e0998443268c5544ff6cd84de9737ca5052b545fefaf936a65035eac76\\\",\\\"image\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:25:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0309 18:25:24.128997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0309 18:25:24.129108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0309 18:25:24.129664 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2492006033/tls.crt::/tmp/serving-cert-2492006033/tls.key\\\\\\\"\\\\nI0309 18:25:24.405897 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0309 18:25:24.410101 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0309 18:25:24.410116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0309 18:25:24.410139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0309 18:25:24.410148 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0309 18:25:24.421807 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0309 18:25:24.421854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421867 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421878 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0309 18:25:24.421885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0309 18:25:24.421866 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0309 18:25:24.421891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0309 18:25:24.421927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0309 18:25:24.422998 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:26:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.816159 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3270571a-a484-4e66-8035-f43509b58add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kk7gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.831098 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lw2hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a255bc9-2034-4a34-8240-f1fd42e808bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9r74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lw2hk\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:23 crc kubenswrapper[4821]: I0309 18:26:23.842380 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e896f92d-7d30-4f36-b892-5c8c9c792530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9tc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9tc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d6g54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:24 crc kubenswrapper[4821]: E0309 18:26:24.555241 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jqk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 09 18:26:24 crc kubenswrapper[4821]: E0309 18:26:24.555412 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:26:24 crc kubenswrapper[4821]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 09 18:26:24 crc kubenswrapper[4821]: if [[ -f "/env/_master" ]]; then Mar 09 18:26:24 crc 
kubenswrapper[4821]: set -o allexport Mar 09 18:26:24 crc kubenswrapper[4821]: source "/env/_master" Mar 09 18:26:24 crc kubenswrapper[4821]: set +o allexport Mar 09 18:26:24 crc kubenswrapper[4821]: fi Mar 09 18:26:24 crc kubenswrapper[4821]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 09 18:26:24 crc kubenswrapper[4821]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 09 18:26:24 crc kubenswrapper[4821]: ho_enable="--enable-hybrid-overlay" Mar 09 18:26:24 crc kubenswrapper[4821]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 09 18:26:24 crc kubenswrapper[4821]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 09 18:26:24 crc kubenswrapper[4821]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 09 18:26:24 crc kubenswrapper[4821]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 09 18:26:24 crc kubenswrapper[4821]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 09 18:26:24 crc kubenswrapper[4821]: --webhook-host=127.0.0.1 \ Mar 09 18:26:24 crc kubenswrapper[4821]: --webhook-port=9743 \ Mar 09 18:26:24 crc kubenswrapper[4821]: ${ho_enable} \ Mar 09 18:26:24 crc kubenswrapper[4821]: --enable-interconnect \ Mar 09 18:26:24 crc kubenswrapper[4821]: --disable-approver \ Mar 09 18:26:24 crc kubenswrapper[4821]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 09 18:26:24 crc kubenswrapper[4821]: --wait-for-kubernetes-api=200s \ Mar 09 18:26:24 crc kubenswrapper[4821]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 09 18:26:24 crc kubenswrapper[4821]: --loglevel="${LOGLEVEL}" Mar 09 18:26:24 crc kubenswrapper[4821]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:26:24 crc 
kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:26:24 crc kubenswrapper[4821]: E0309 18:26:24.557704 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:26:24 crc kubenswrapper[4821]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 09 18:26:24 crc kubenswrapper[4821]: if [[ -f "/env/_master" ]]; then Mar 09 18:26:24 crc kubenswrapper[4821]: set -o allexport Mar 09 18:26:24 crc kubenswrapper[4821]: source "/env/_master" Mar 09 18:26:24 crc kubenswrapper[4821]: set +o allexport Mar 09 18:26:24 crc kubenswrapper[4821]: fi Mar 09 18:26:24 crc kubenswrapper[4821]: Mar 09 18:26:24 crc kubenswrapper[4821]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 09 18:26:24 crc kubenswrapper[4821]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 09 18:26:24 crc kubenswrapper[4821]: --disable-webhook \ Mar 09 18:26:24 crc kubenswrapper[4821]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 09 18:26:24 crc kubenswrapper[4821]: --loglevel="${LOGLEVEL}" Mar 09 18:26:24 crc kubenswrapper[4821]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:26:24 crc kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:26:24 crc kubenswrapper[4821]: E0309 18:26:24.557700 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml 
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jqk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 09 18:26:24 crc kubenswrapper[4821]: E0309 18:26:24.559784 4821 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 09 18:26:24 crc kubenswrapper[4821]: E0309 18:26:24.560023 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 18:26:25 crc kubenswrapper[4821]: I0309 18:26:25.551224 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:26:25 crc kubenswrapper[4821]: I0309 18:26:25.551245 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:26:25 crc kubenswrapper[4821]: E0309 18:26:25.551468 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 09 18:26:25 crc kubenswrapper[4821]: I0309 18:26:25.551514 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:26:25 crc kubenswrapper[4821]: I0309 18:26:25.551156 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:26:25 crc kubenswrapper[4821]: E0309 18:26:25.552533 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lf7bd" podUID="9ac2c88b-a0bc-482c-90fa-165d30f045e8" Mar 09 18:26:25 crc kubenswrapper[4821]: E0309 18:26:25.552673 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 09 18:26:25 crc kubenswrapper[4821]: E0309 18:26:25.552817 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 09 18:26:25 crc kubenswrapper[4821]: E0309 18:26:25.554472 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:26:25 crc kubenswrapper[4821]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 09 18:26:25 crc kubenswrapper[4821]: apiVersion: v1 Mar 09 18:26:25 crc kubenswrapper[4821]: clusters: Mar 09 18:26:25 crc kubenswrapper[4821]: - cluster: Mar 09 18:26:25 crc kubenswrapper[4821]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 09 18:26:25 crc kubenswrapper[4821]: server: https://api-int.crc.testing:6443 Mar 09 18:26:25 crc kubenswrapper[4821]: name: default-cluster Mar 09 18:26:25 crc kubenswrapper[4821]: contexts: Mar 09 18:26:25 crc kubenswrapper[4821]: - context: Mar 09 18:26:25 crc kubenswrapper[4821]: cluster: default-cluster Mar 09 18:26:25 crc kubenswrapper[4821]: namespace: default Mar 09 18:26:25 crc kubenswrapper[4821]: user: default-auth Mar 09 18:26:25 crc kubenswrapper[4821]: name: default-context Mar 09 18:26:25 crc kubenswrapper[4821]: current-context: default-context Mar 09 18:26:25 crc kubenswrapper[4821]: kind: Config Mar 09 18:26:25 crc kubenswrapper[4821]: preferences: {} Mar 09 18:26:25 crc kubenswrapper[4821]: users: Mar 09 18:26:25 crc kubenswrapper[4821]: - name: default-auth Mar 09 18:26:25 crc kubenswrapper[4821]: user: Mar 09 18:26:25 crc kubenswrapper[4821]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 09 18:26:25 crc kubenswrapper[4821]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 09 18:26:25 crc kubenswrapper[4821]: EOF Mar 09 18:26:25 crc kubenswrapper[4821]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c9kmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-bfdsp_openshift-ovn-kubernetes(40e368ce-5f0d-4208-a1de-67d4ab591f82): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:26:25 crc kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:26:25 crc kubenswrapper[4821]: E0309 18:26:25.554469 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:26:25 crc kubenswrapper[4821]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 09 18:26:25 crc kubenswrapper[4821]: set -uo pipefail Mar 09 18:26:25 crc kubenswrapper[4821]: Mar 09 18:26:25 crc kubenswrapper[4821]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 09 18:26:25 crc kubenswrapper[4821]: Mar 09 18:26:25 crc kubenswrapper[4821]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 09 18:26:25 crc kubenswrapper[4821]: HOSTS_FILE="/etc/hosts" Mar 09 18:26:25 crc kubenswrapper[4821]: TEMP_FILE="/etc/hosts.tmp" Mar 09 18:26:25 crc 
kubenswrapper[4821]: Mar 09 18:26:25 crc kubenswrapper[4821]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 09 18:26:25 crc kubenswrapper[4821]: Mar 09 18:26:25 crc kubenswrapper[4821]: # Make a temporary file with the old hosts file's attributes. Mar 09 18:26:25 crc kubenswrapper[4821]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 09 18:26:25 crc kubenswrapper[4821]: echo "Failed to preserve hosts file. Exiting." Mar 09 18:26:25 crc kubenswrapper[4821]: exit 1 Mar 09 18:26:25 crc kubenswrapper[4821]: fi Mar 09 18:26:25 crc kubenswrapper[4821]: Mar 09 18:26:25 crc kubenswrapper[4821]: while true; do Mar 09 18:26:25 crc kubenswrapper[4821]: declare -A svc_ips Mar 09 18:26:25 crc kubenswrapper[4821]: for svc in "${services[@]}"; do Mar 09 18:26:25 crc kubenswrapper[4821]: # Fetch service IP from cluster dns if present. We make several tries Mar 09 18:26:25 crc kubenswrapper[4821]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Mar 09 18:26:25 crc kubenswrapper[4821]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 09 18:26:25 crc kubenswrapper[4821]: # support UDP loadbalancers and require reaching DNS through TCP. Mar 09 18:26:25 crc kubenswrapper[4821]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 09 18:26:25 crc kubenswrapper[4821]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 09 18:26:25 crc kubenswrapper[4821]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 09 18:26:25 crc kubenswrapper[4821]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 09 18:26:25 crc kubenswrapper[4821]: for i in ${!cmds[*]} Mar 09 18:26:25 crc kubenswrapper[4821]: do Mar 09 18:26:25 crc kubenswrapper[4821]: ips=($(eval "${cmds[i]}")) Mar 09 18:26:25 crc kubenswrapper[4821]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 09 18:26:25 crc kubenswrapper[4821]: svc_ips["${svc}"]="${ips[@]}" Mar 09 18:26:25 crc kubenswrapper[4821]: break Mar 09 18:26:25 crc kubenswrapper[4821]: fi Mar 09 18:26:25 crc kubenswrapper[4821]: done Mar 09 18:26:25 crc kubenswrapper[4821]: done Mar 09 18:26:25 crc kubenswrapper[4821]: Mar 09 18:26:25 crc kubenswrapper[4821]: # Update /etc/hosts only if we get valid service IPs Mar 09 18:26:25 crc kubenswrapper[4821]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 09 18:26:25 crc kubenswrapper[4821]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 09 18:26:25 crc kubenswrapper[4821]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 09 18:26:25 crc kubenswrapper[4821]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 09 18:26:25 crc kubenswrapper[4821]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 09 18:26:25 crc kubenswrapper[4821]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 09 18:26:25 crc kubenswrapper[4821]: sleep 60 & wait Mar 09 18:26:25 crc kubenswrapper[4821]: continue Mar 09 18:26:25 crc kubenswrapper[4821]: fi Mar 09 18:26:25 crc kubenswrapper[4821]: Mar 09 18:26:25 crc kubenswrapper[4821]: # Append resolver entries for services Mar 09 18:26:25 crc kubenswrapper[4821]: rc=0 Mar 09 18:26:25 crc kubenswrapper[4821]: for svc in "${!svc_ips[@]}"; do Mar 09 18:26:25 crc kubenswrapper[4821]: for ip in ${svc_ips[${svc}]}; do Mar 09 18:26:25 crc kubenswrapper[4821]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Mar 09 18:26:25 crc kubenswrapper[4821]: done Mar 09 18:26:25 crc kubenswrapper[4821]: done Mar 09 18:26:25 crc kubenswrapper[4821]: if [[ $rc -ne 0 ]]; then Mar 09 18:26:25 crc kubenswrapper[4821]: sleep 60 & wait Mar 09 18:26:25 crc kubenswrapper[4821]: continue Mar 09 18:26:25 crc kubenswrapper[4821]: fi Mar 09 18:26:25 crc kubenswrapper[4821]: Mar 09 18:26:25 crc kubenswrapper[4821]: Mar 09 18:26:25 crc kubenswrapper[4821]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 09 18:26:25 crc kubenswrapper[4821]: # Replace /etc/hosts with our modified version if needed Mar 09 18:26:25 crc kubenswrapper[4821]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 09 18:26:25 crc kubenswrapper[4821]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 09 18:26:25 crc kubenswrapper[4821]: fi Mar 09 18:26:25 crc kubenswrapper[4821]: sleep 60 & wait Mar 09 18:26:25 crc kubenswrapper[4821]: unset svc_ips Mar 09 18:26:25 crc kubenswrapper[4821]: done Mar 09 18:26:25 crc kubenswrapper[4821]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
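The /etc/hosts update in the script above relies on a comment marker so previously generated entries can be filtered out and regenerated without touching user-managed content. A self-contained sketch of that marker technique; the file paths, marker text, and service/IP pair below are illustrative assumptions, not values taken from the log:

```shell
#!/usr/bin/env bash
# Sketch of the marker-based hosts-file rewrite from the logged
# node-resolver script. HOSTS_FILE, MARKER, and the svc/IP pair are
# illustrative assumptions.
set -euo pipefail

HOSTS_FILE=$(mktemp)
TEMP_FILE=$(mktemp)
MARKER="openshift-generated-node-resolver"

printf '127.0.0.1 localhost\n10.0.0.5 stale-svc # %s\n' "${MARKER}" > "${HOSTS_FILE}"

# Copy everything except previously generated (marker-tagged) entries.
sed --silent "/# ${MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"

# Append a fresh entry per resolved service IP.
declare -A svc_ips=([image-registry.openshift-image-registry.svc]="10.217.4.20")
for svc in "${!svc_ips[@]}"; do
  for ip in ${svc_ips[${svc}]}; do
    echo "${ip} ${svc} # ${MARKER}" >> "${TEMP_FILE}"
  done
done

# Replace the hosts file only when the content actually changed.
cmp -s "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}"
```

The `cmp || cp` guard avoids rewriting the file (and the resulting inotify churn) on the common no-change iteration, matching the logged script's behavior.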
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-99m5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-n9tvt_openshift-dns(b53a5b8b-3dab-4300-8b7b-c3df20eab3b7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:26:25 crc kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:26:25 crc kubenswrapper[4821]: E0309 18:26:25.555548 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" Mar 09 18:26:25 crc kubenswrapper[4821]: E0309 18:26:25.555581 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-n9tvt" podUID="b53a5b8b-3dab-4300-8b7b-c3df20eab3b7" Mar 
09 18:26:27 crc kubenswrapper[4821]: I0309 18:26:27.551631 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:26:27 crc kubenswrapper[4821]: I0309 18:26:27.552182 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:26:27 crc kubenswrapper[4821]: I0309 18:26:27.552205 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:26:27 crc kubenswrapper[4821]: I0309 18:26:27.552258 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:26:27 crc kubenswrapper[4821]: E0309 18:26:27.552678 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 09 18:26:27 crc kubenswrapper[4821]: E0309 18:26:27.552918 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 09 18:26:27 crc kubenswrapper[4821]: E0309 18:26:27.553036 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 09 18:26:27 crc kubenswrapper[4821]: E0309 18:26:27.553191 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lf7bd" podUID="9ac2c88b-a0bc-482c-90fa-165d30f045e8" Mar 09 18:26:27 crc kubenswrapper[4821]: E0309 18:26:27.554607 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4xxjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-b9gd4_openshift-multus(84199f52-999d-4a44-91c7-a343ba59b10d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 09 18:26:27 crc kubenswrapper[4821]: E0309 18:26:27.554666 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:26:27 
crc kubenswrapper[4821]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Mar 09 18:26:27 crc kubenswrapper[4821]: set -euo pipefail Mar 09 18:26:27 crc kubenswrapper[4821]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Mar 09 18:26:27 crc kubenswrapper[4821]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Mar 09 18:26:27 crc kubenswrapper[4821]: # As the secret mount is optional we must wait for the files to be present. Mar 09 18:26:27 crc kubenswrapper[4821]: # The service is created in monitor.yaml and this is created in sdn.yaml. Mar 09 18:26:27 crc kubenswrapper[4821]: TS=$(date +%s) Mar 09 18:26:27 crc kubenswrapper[4821]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Mar 09 18:26:27 crc kubenswrapper[4821]: HAS_LOGGED_INFO=0 Mar 09 18:26:27 crc kubenswrapper[4821]: Mar 09 18:26:27 crc kubenswrapper[4821]: log_missing_certs(){ Mar 09 18:26:27 crc kubenswrapper[4821]: CUR_TS=$(date +%s) Mar 09 18:26:27 crc kubenswrapper[4821]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Mar 09 18:26:27 crc kubenswrapper[4821]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Mar 09 18:26:27 crc kubenswrapper[4821]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Mar 09 18:26:27 crc kubenswrapper[4821]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Mar 09 18:26:27 crc kubenswrapper[4821]: HAS_LOGGED_INFO=1 Mar 09 18:26:27 crc kubenswrapper[4821]: fi Mar 09 18:26:27 crc kubenswrapper[4821]: } Mar 09 18:26:27 crc kubenswrapper[4821]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Mar 09 18:26:27 crc kubenswrapper[4821]: log_missing_certs Mar 09 18:26:27 crc kubenswrapper[4821]: sleep 5 Mar 09 18:26:27 crc kubenswrapper[4821]: done Mar 09 18:26:27 crc kubenswrapper[4821]: Mar 09 18:26:27 crc kubenswrapper[4821]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Mar 09 18:26:27 crc kubenswrapper[4821]: exec /usr/bin/kube-rbac-proxy \ Mar 09 18:26:27 crc kubenswrapper[4821]: --logtostderr \ Mar 09 18:26:27 crc kubenswrapper[4821]: --secure-listen-address=:9108 \ Mar 09 18:26:27 crc kubenswrapper[4821]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Mar 09 18:26:27 crc kubenswrapper[4821]: --upstream=http://127.0.0.1:29108/ \ Mar 09 18:26:27 crc kubenswrapper[4821]: --tls-private-key-file=${TLS_PK} \ Mar 09 18:26:27 crc kubenswrapper[4821]: --tls-cert-file=${TLS_CERT} Mar 09 18:26:27 crc kubenswrapper[4821]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
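The kube-rbac-proxy wrapper above polls for the optionally-mounted metrics cert before exec'ing, logging INFO once and escalating to WARN after 20 minutes. The logged comparison uses the bare name `WARN_TS`; inside `[[ ... -gt ... ]]` bash still arithmetically expands it, but `${WARN_TS}` is the clearer spelling used in this sketch. The paths and timings below are shortened assumptions so the sketch is self-contained and finishes quickly:

```shell
#!/usr/bin/env bash
# Sketch of the logged wait-for-certificate loop. The cert paths are temp
# files and the secret mount is simulated by a background touch; both are
# assumptions made so the sketch runs standalone.
set -euo pipefail

TLS_PK=$(mktemp -u)     # path only; the file appears later, like a secret mount
TLS_CERT=$(mktemp -u)

TS=$(date +%s)
WARN_TS=$(( TS + 20 * 60 ))
HAS_LOGGED_INFO=0

log_missing_certs() {
  CUR_TS=$(date +%s)
  if [[ "${CUR_TS}" -gt "${WARN_TS}" ]]; then
    echo "$(date -Iseconds) WARN: metrics cert not mounted after 20 minutes."
  elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]]; then
    echo "$(date -Iseconds) INFO: metrics cert not mounted. Waiting."
    HAS_LOGGED_INFO=1
  fi
}

# Simulate the optional secret mount arriving after one second.
( sleep 1; touch "${TLS_PK}" "${TLS_CERT}" ) &

while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]]; do
  log_missing_certs
  sleep 1
done
wait
echo "certs mounted; the real script would exec kube-rbac-proxy here"
```

The poll-then-exec shape exists because the secret mount is optional: the container must start successfully even before the cert secret is created by the monitoring manifests.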
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n9tc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-d6g54_openshift-ovn-kubernetes(e896f92d-7d30-4f36-b892-5c8c9c792530): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:26:27 crc kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:26:27 crc kubenswrapper[4821]: E0309 18:26:27.556572 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" podUID="84199f52-999d-4a44-91c7-a343ba59b10d" Mar 09 18:26:27 crc kubenswrapper[4821]: E0309 18:26:27.557558 4821 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 09 18:26:27 crc kubenswrapper[4821]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 09 18:26:27 crc kubenswrapper[4821]: if [[ -f "/env/_master" ]]; then Mar 09 18:26:27 crc kubenswrapper[4821]: set -o allexport Mar 09 18:26:27 crc 
kubenswrapper[4821]: source "/env/_master" Mar 09 18:26:27 crc kubenswrapper[4821]: set +o allexport Mar 09 18:26:27 crc kubenswrapper[4821]: fi Mar 09 18:26:27 crc kubenswrapper[4821]: Mar 09 18:26:27 crc kubenswrapper[4821]: ovn_v4_join_subnet_opt= Mar 09 18:26:27 crc kubenswrapper[4821]: if [[ "" != "" ]]; then Mar 09 18:26:27 crc kubenswrapper[4821]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Mar 09 18:26:27 crc kubenswrapper[4821]: fi Mar 09 18:26:27 crc kubenswrapper[4821]: ovn_v6_join_subnet_opt= Mar 09 18:26:27 crc kubenswrapper[4821]: if [[ "" != "" ]]; then Mar 09 18:26:27 crc kubenswrapper[4821]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Mar 09 18:26:27 crc kubenswrapper[4821]: fi Mar 09 18:26:27 crc kubenswrapper[4821]: Mar 09 18:26:27 crc kubenswrapper[4821]: ovn_v4_transit_switch_subnet_opt= Mar 09 18:26:27 crc kubenswrapper[4821]: if [[ "" != "" ]]; then Mar 09 18:26:27 crc kubenswrapper[4821]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Mar 09 18:26:27 crc kubenswrapper[4821]: fi Mar 09 18:26:27 crc kubenswrapper[4821]: ovn_v6_transit_switch_subnet_opt= Mar 09 18:26:27 crc kubenswrapper[4821]: if [[ "" != "" ]]; then Mar 09 18:26:27 crc kubenswrapper[4821]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Mar 09 18:26:27 crc kubenswrapper[4821]: fi Mar 09 18:26:27 crc kubenswrapper[4821]: Mar 09 18:26:27 crc kubenswrapper[4821]: dns_name_resolver_enabled_flag= Mar 09 18:26:27 crc kubenswrapper[4821]: if [[ "false" == "true" ]]; then Mar 09 18:26:27 crc kubenswrapper[4821]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Mar 09 18:26:27 crc kubenswrapper[4821]: fi Mar 09 18:26:27 crc kubenswrapper[4821]: Mar 09 18:26:27 crc kubenswrapper[4821]: persistent_ips_enabled_flag= Mar 09 18:26:27 crc kubenswrapper[4821]: if [[ "true" == "true" ]]; then Mar 09 18:26:27 crc kubenswrapper[4821]: persistent_ips_enabled_flag="--enable-persistent-ips" Mar 09 18:26:27 crc 
kubenswrapper[4821]: fi Mar 09 18:26:27 crc kubenswrapper[4821]: Mar 09 18:26:27 crc kubenswrapper[4821]: # This is needed so that converting clusters from GA to TP Mar 09 18:26:27 crc kubenswrapper[4821]: # will rollout control plane pods as well Mar 09 18:26:27 crc kubenswrapper[4821]: network_segmentation_enabled_flag= Mar 09 18:26:27 crc kubenswrapper[4821]: multi_network_enabled_flag= Mar 09 18:26:27 crc kubenswrapper[4821]: if [[ "true" == "true" ]]; then Mar 09 18:26:27 crc kubenswrapper[4821]: multi_network_enabled_flag="--enable-multi-network" Mar 09 18:26:27 crc kubenswrapper[4821]: network_segmentation_enabled_flag="--enable-network-segmentation" Mar 09 18:26:27 crc kubenswrapper[4821]: fi Mar 09 18:26:27 crc kubenswrapper[4821]: Mar 09 18:26:27 crc kubenswrapper[4821]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Mar 09 18:26:27 crc kubenswrapper[4821]: exec /usr/bin/ovnkube \ Mar 09 18:26:27 crc kubenswrapper[4821]: --enable-interconnect \ Mar 09 18:26:27 crc kubenswrapper[4821]: --init-cluster-manager "${K8S_NODE}" \ Mar 09 18:26:27 crc kubenswrapper[4821]: --config-file=/run/ovnkube-config/ovnkube.conf \ Mar 09 18:26:27 crc kubenswrapper[4821]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Mar 09 18:26:27 crc kubenswrapper[4821]: --metrics-bind-address "127.0.0.1:29108" \ Mar 09 18:26:27 crc kubenswrapper[4821]: --metrics-enable-pprof \ Mar 09 18:26:27 crc kubenswrapper[4821]: --metrics-enable-config-duration \ Mar 09 18:26:27 crc kubenswrapper[4821]: ${ovn_v4_join_subnet_opt} \ Mar 09 18:26:27 crc kubenswrapper[4821]: ${ovn_v6_join_subnet_opt} \ Mar 09 18:26:27 crc kubenswrapper[4821]: ${ovn_v4_transit_switch_subnet_opt} \ Mar 09 18:26:27 crc kubenswrapper[4821]: ${ovn_v6_transit_switch_subnet_opt} \ Mar 09 18:26:27 crc kubenswrapper[4821]: ${dns_name_resolver_enabled_flag} \ Mar 09 18:26:27 crc kubenswrapper[4821]: ${persistent_ips_enabled_flag} \ Mar 09 18:26:27 crc kubenswrapper[4821]: 
${multi_network_enabled_flag} \ Mar 09 18:26:27 crc kubenswrapper[4821]: ${network_segmentation_enabled_flag} Mar 09 18:26:27 crc kubenswrapper[4821]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n9tc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-d6g54_openshift-ovn-kubernetes(e896f92d-7d30-4f36-b892-5c8c9c792530): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 09 18:26:27 crc 
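The ovnkube-cluster-manager command above is assembled from optional flags: each flag variable stays empty unless its templated value is set, and all of them are expanded unquoted on the exec line so empty ones vanish. A small sketch of that pattern; the two input values are illustrative assumptions mirroring the log, where the join subnet was templated empty and persistent IPs were enabled:

```shell
#!/usr/bin/env bash
# Sketch of the optional-flag assembly used by the logged ovnkube wrapper.
# V4_JOIN_SUBNET / PERSISTENT_IPS are assumed inputs standing in for the
# values the operator templates into the script text.
set -euo pipefail

V4_JOIN_SUBNET=""     # templated empty in the logged script ('if [[ "" != "" ]]')
PERSISTENT_IPS="true"

ovn_v4_join_subnet_opt=
if [[ "${V4_JOIN_SUBNET}" != "" ]]; then
  ovn_v4_join_subnet_opt="--gateway-v4-join-subnet ${V4_JOIN_SUBNET}"
fi

persistent_ips_enabled_flag=
if [[ "${PERSISTENT_IPS}" == "true" ]]; then
  persistent_ips_enabled_flag="--enable-persistent-ips"
fi

# Unquoted expansion: empty options contribute no words, populated ones
# word-split into flag and value, as on the logged exec line.
args=(${ovn_v4_join_subnet_opt} ${persistent_ips_enabled_flag})
echo "/usr/bin/ovnkube ${args[*]}"
```

Leaving the expansions deliberately unquoted is what makes the pattern work; quoting them would pass empty-string arguments to ovnkube instead of omitting the flags.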
kubenswrapper[4821]: > logger="UnhandledError" Mar 09 18:26:27 crc kubenswrapper[4821]: E0309 18:26:27.561846 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54" podUID="e896f92d-7d30-4f36-b892-5c8c9c792530" Mar 09 18:26:28 crc kubenswrapper[4821]: I0309 18:26:28.139378 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ac2c88b-a0bc-482c-90fa-165d30f045e8-metrics-certs\") pod \"network-metrics-daemon-lf7bd\" (UID: \"9ac2c88b-a0bc-482c-90fa-165d30f045e8\") " pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:26:28 crc kubenswrapper[4821]: E0309 18:26:28.139555 4821 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 09 18:26:28 crc kubenswrapper[4821]: E0309 18:26:28.139642 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ac2c88b-a0bc-482c-90fa-165d30f045e8-metrics-certs podName:9ac2c88b-a0bc-482c-90fa-165d30f045e8 nodeName:}" failed. No retries permitted until 2026-03-09 18:26:44.139619444 +0000 UTC m=+141.300995330 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9ac2c88b-a0bc-482c-90fa-165d30f045e8-metrics-certs") pod "network-metrics-daemon-lf7bd" (UID: "9ac2c88b-a0bc-482c-90fa-165d30f045e8") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 09 18:26:28 crc kubenswrapper[4821]: I0309 18:26:28.566150 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 09 18:26:28 crc kubenswrapper[4821]: E0309 18:26:28.648374 4821 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 09 18:26:29 crc kubenswrapper[4821]: I0309 18:26:29.550639 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:26:29 crc kubenswrapper[4821]: I0309 18:26:29.550693 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:26:29 crc kubenswrapper[4821]: E0309 18:26:29.550777 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 09 18:26:29 crc kubenswrapper[4821]: E0309 18:26:29.550949 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 09 18:26:29 crc kubenswrapper[4821]: I0309 18:26:29.550639 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:26:29 crc kubenswrapper[4821]: I0309 18:26:29.551063 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:26:29 crc kubenswrapper[4821]: E0309 18:26:29.551073 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 09 18:26:29 crc kubenswrapper[4821]: E0309 18:26:29.551274 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lf7bd" podUID="9ac2c88b-a0bc-482c-90fa-165d30f045e8" Mar 09 18:26:31 crc kubenswrapper[4821]: I0309 18:26:31.550793 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:26:31 crc kubenswrapper[4821]: E0309 18:26:31.551569 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 09 18:26:31 crc kubenswrapper[4821]: I0309 18:26:31.550957 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:26:31 crc kubenswrapper[4821]: E0309 18:26:31.551725 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lf7bd" podUID="9ac2c88b-a0bc-482c-90fa-165d30f045e8" Mar 09 18:26:31 crc kubenswrapper[4821]: I0309 18:26:31.551038 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:26:31 crc kubenswrapper[4821]: E0309 18:26:31.551817 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 09 18:26:31 crc kubenswrapper[4821]: I0309 18:26:31.550903 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:26:31 crc kubenswrapper[4821]: E0309 18:26:31.551907 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 09 18:26:32 crc kubenswrapper[4821]: I0309 18:26:32.080705 4821 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 09 18:26:32 crc kubenswrapper[4821]: I0309 18:26:32.568361 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:32 crc kubenswrapper[4821]: I0309 18:26:32.568433 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:32 crc kubenswrapper[4821]: I0309 18:26:32.568452 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:32 crc kubenswrapper[4821]: I0309 18:26:32.568478 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:32 crc kubenswrapper[4821]: I0309 18:26:32.568500 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:32Z","lastTransitionTime":"2026-03-09T18:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:32 crc kubenswrapper[4821]: E0309 18:26:32.586432 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"acfa7603-a6f5-410a-9ddc-2890d44e3c69\\\",\\\"systemUUID\\\":\\\"ea3e2df3-251a-4cdb-9064-3c52ac509aba\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:32 crc kubenswrapper[4821]: I0309 18:26:32.591748 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:32 crc kubenswrapper[4821]: I0309 18:26:32.591812 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:32 crc kubenswrapper[4821]: I0309 18:26:32.591830 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:32 crc kubenswrapper[4821]: I0309 18:26:32.591855 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:32 crc kubenswrapper[4821]: I0309 18:26:32.591872 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:32Z","lastTransitionTime":"2026-03-09T18:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:32 crc kubenswrapper[4821]: E0309 18:26:32.608215 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"acfa7603-a6f5-410a-9ddc-2890d44e3c69\\\",\\\"systemUUID\\\":\\\"ea3e2df3-251a-4cdb-9064-3c52ac509aba\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:32 crc kubenswrapper[4821]: I0309 18:26:32.613614 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 09 18:26:32 crc kubenswrapper[4821]: I0309 18:26:32.613837 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 09 18:26:32 crc kubenswrapper[4821]: I0309 18:26:32.614063 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:32 crc kubenswrapper[4821]: I0309 18:26:32.614268 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:32 crc kubenswrapper[4821]: I0309 18:26:32.614543 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:32Z","lastTransitionTime":"2026-03-09T18:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:32 crc kubenswrapper[4821]: E0309 18:26:32.634486 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"acfa7603-a6f5-410a-9ddc-2890d44e3c69\\\",\\\"systemUUID\\\":\\\"ea3e2df3-251a-4cdb-9064-3c52ac509aba\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 09 18:26:32 crc kubenswrapper[4821]: I0309 18:26:32.638557 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:26:32 crc kubenswrapper[4821]: I0309 18:26:32.638729 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:26:32 crc kubenswrapper[4821]: I0309 18:26:32.638804 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:26:32 crc kubenswrapper[4821]: I0309 18:26:32.638894 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:26:32 crc kubenswrapper[4821]: I0309 18:26:32.638989 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:32Z","lastTransitionTime":"2026-03-09T18:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:32 crc kubenswrapper[4821]: E0309 18:26:32.652553 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"acfa7603-a6f5-410a-9ddc-2890d44e3c69\\\",\\\"systemUUID\\\":\\\"ea3e2df3-251a-4cdb-9064-3c52ac509aba\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 09 18:26:32 crc kubenswrapper[4821]: I0309 18:26:32.657149 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:26:32 crc kubenswrapper[4821]: I0309 18:26:32.657272 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:26:32 crc kubenswrapper[4821]: I0309 18:26:32.657408 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 09 18:26:32 crc kubenswrapper[4821]: I0309 18:26:32.657531 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 09 18:26:32 crc kubenswrapper[4821]: I0309 18:26:32.657638 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:32Z","lastTransitionTime":"2026-03-09T18:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 09 18:26:32 crc kubenswrapper[4821]: E0309 18:26:32.672604 4821 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404548Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865348Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"acfa7603-a6f5-410a-9ddc-2890d44e3c69\\\",\\\"systemUUID\\\":\\\"ea3e2df3-251a-4cdb-9064-3c52ac509aba\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 09 18:26:32 crc kubenswrapper[4821]: E0309 18:26:32.672948 4821 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 09 18:26:33 crc kubenswrapper[4821]: I0309 18:26:33.550792 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 09 18:26:33 crc kubenswrapper[4821]: E0309 18:26:33.551014 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 09 18:26:33 crc kubenswrapper[4821]: I0309 18:26:33.551447 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 09 18:26:33 crc kubenswrapper[4821]: I0309 18:26:33.551495 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 09 18:26:33 crc kubenswrapper[4821]: E0309 18:26:33.551588 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 09 18:26:33 crc kubenswrapper[4821]: I0309 18:26:33.551832 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lf7bd"
Mar 09 18:26:33 crc kubenswrapper[4821]: E0309 18:26:33.551988 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 09 18:26:33 crc kubenswrapper[4821]: E0309 18:26:33.552367 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lf7bd" podUID="9ac2c88b-a0bc-482c-90fa-165d30f045e8" Mar 09 18:26:33 crc kubenswrapper[4821]: I0309 18:26:33.568070 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7568663-6e12-4416-8458-876039975b66\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ff6183e0d0f727aec5f6dafa913ed13e8db75cc46091299fae8aec174666c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\
\\"containerID\\\":\\\"cri-o://308c314653a096e47edca451fd6c393178d5eea9108d87675df44140c5e78be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9316753bcc7ca3bc13d940c02db82d3c087567b9eb93baeacaeec2c2b39833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b955f1ffab31936f17fcc78623216cfa4326b454caa973c491626fc7fce485ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b955f1ffab31936f17fcc78623216cfa4326b454caa973c491626fc7fce485ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:33 crc kubenswrapper[4821]: I0309 18:26:33.568232 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Mar 09 18:26:33 crc kubenswrapper[4821]: I0309 18:26:33.584991 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:33 crc kubenswrapper[4821]: I0309 18:26:33.599535 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:33 crc kubenswrapper[4821]: I0309 18:26:33.610435 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9tvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b53a5b8b-3dab-4300-8b7b-c3df20eab3b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99m5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9tvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:33 crc kubenswrapper[4821]: I0309 18:26:33.623217 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mfdmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a085b570-506c-4b51-b0d1-4b9832e71c0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mfdmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:33 crc kubenswrapper[4821]: I0309 18:26:33.635956 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lf7bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac2c88b-a0bc-482c-90fa-165d30f045e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdrdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdrdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lf7bd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:33 crc kubenswrapper[4821]: E0309 18:26:33.649606 4821 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 09 18:26:33 crc kubenswrapper[4821]: I0309 18:26:33.652943 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55fc2290-6300-4f7d-98d7-8abdde521a83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://230962e0998443268c5544ff6cd84de9737ca5052b545fefaf936a65035eac76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:25:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0309 18:25:24.128997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0309 18:25:24.129108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0309 18:25:24.129664 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2492006033/tls.crt::/tmp/serving-cert-2492006033/tls.key\\\\\\\"\\\\nI0309 18:25:24.405897 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0309 18:25:24.410101 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0309 18:25:24.410116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0309 18:25:24.410139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0309 18:25:24.410148 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0309 18:25:24.421807 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0309 18:25:24.421854 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421867 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0309 18:25:24.421885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0309 18:25:24.421866 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0309 18:25:24.421891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0309 18:25:24.421927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0309 18:25:24.422998 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:26:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}}}]
,\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:33 crc kubenswrapper[4821]: I0309 18:26:33.668207 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3270571a-a484-4e66-8035-f43509b58add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kk7gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:33 crc kubenswrapper[4821]: I0309 18:26:33.687089 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84199f52-999d-4a44-91c7-a343ba59b10d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b9gd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:33 crc kubenswrapper[4821]: I0309 18:26:33.713590 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e368ce-5f0d-4208-a1de-67d4ab591f82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bfdsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:33 crc kubenswrapper[4821]: I0309 18:26:33.733582 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lw2hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a255bc9-2034-4a34-8240-f1fd42e808bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9r74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lw2hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:33 crc kubenswrapper[4821]: I0309 18:26:33.747431 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e896f92d-7d30-4f36-b892-5c8c9c792530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9tc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9tc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d6g54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:33 crc kubenswrapper[4821]: I0309 18:26:33.762979 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7db9fe72-f0df-4db6-9991-0384645cf456\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee773d19ae0091661f56157410437678fcb5f7213b187831146af99d8d76b555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://339c14ffd3fdc5b3377a73069f4ada2bbb0470002cb2adbe540e5c52449e7f5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:24:56Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0309 18:24:25.811998 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0309 18:24:25.817356 1 observer_polling.go:159] Starting file observer\\\\nI0309 18:24:25.865522 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0309 18:24:25.872931 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0309 18:24:56.217530 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:55Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e1fee3fa3c896bb4fd7fd76b19cbc801f2d24b463344591a55ed9c940f0d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f06cb76ad82efe140b94c5f83a654bdc44720a48914f1e5de02e29f6f39a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d992ce0b12a7e1c7641f515c07c7748e06cac1d81d3f351c6cf892f4c1ea78e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:33 crc kubenswrapper[4821]: I0309 18:26:33.790313 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dab0b54-0d0b-436b-a566-bfca5bd198ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://522c0dddb01996f686347f24c7bb98c6d809b87a931937d84836209b48cc6dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff295a80771fb28c01986f0ef5e0b866a69db058c2174861cb493f7dc11113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceaf482fe2e2b73cc1d174411b0df7990a9e2bb1f3eff9adef94086b3eab6d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a831c318b5ca35517b0616bc5c6cf15592ee2a4e251dab6cc7886c27c7dd71bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c5d8edd2b525ecd0be6ed468e278fabd39079ad9af5041d4be213f7d39072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2864c08259f2e7e2766cd5b6e26d546a6f8443728a1cf323015aab34612b8fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2864c08259f2e7e2766cd5b6e26d546a6f8443728a1cf323015aab34612b8fcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276f57582635b405b3eec7b3a61b1761dcc3228b77ea4d33c281f5542437391c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276f57582635b405b3eec7b3a61b1761dcc3228b77ea4d33c281f5542437391c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28ab53e73f777f3d3e8fcdff88e66f8351b27db87e8448cbf05d0e247beb3dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ab53e73f777f3d3e8fcdff88e66f8351b27db87e8448cbf05d0e247beb3dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:33 crc kubenswrapper[4821]: I0309 18:26:33.806241 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:33 crc kubenswrapper[4821]: I0309 18:26:33.820375 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:33 crc kubenswrapper[4821]: I0309 18:26:33.836038 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:33 crc kubenswrapper[4821]: I0309 18:26:33.850354 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:35 crc kubenswrapper[4821]: I0309 18:26:35.074163 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lw2hk" event={"ID":"1a255bc9-2034-4a34-8240-f1fd42e808bd","Type":"ContainerStarted","Data":"79f6723e2866800eed4c31077a7d2546d460878f6ddec4829d142929a98f03b5"} Mar 09 18:26:35 crc kubenswrapper[4821]: I0309 18:26:35.095570 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84199f52-999d-4a44-91c7-a343ba59b10d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b9gd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:35 crc kubenswrapper[4821]: I0309 18:26:35.119611 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e368ce-5f0d-4208-a1de-67d4ab591f82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bfdsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:35 crc kubenswrapper[4821]: I0309 18:26:35.132924 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55fc2290-6300-4f7d-98d7-8abdde521a83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://230962e0998443268c5544ff6cd84de9737ca5052b545fefaf936a65035eac76\\\",\\\"image\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:25:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0309 18:25:24.128997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0309 18:25:24.129108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0309 18:25:24.129664 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2492006033/tls.crt::/tmp/serving-cert-2492006033/tls.key\\\\\\\"\\\\nI0309 18:25:24.405897 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0309 18:25:24.410101 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0309 18:25:24.410116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0309 18:25:24.410139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0309 18:25:24.410148 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0309 18:25:24.421807 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0309 18:25:24.421854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421867 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421878 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0309 18:25:24.421885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0309 18:25:24.421866 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0309 18:25:24.421891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0309 18:25:24.421927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0309 18:25:24.422998 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:26:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:35 crc kubenswrapper[4821]: I0309 18:26:35.142798 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77f38666-2c08-47e4-8639-cd14ef9e6bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23eaea79e337ba5c55933a05ec5fd32a1fde011d35215cf356bd6d0948d310be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41197b9f49dbf4569ca6e06c38d94de4e47f0fd4b8902655e560faeeeefcc055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41197b9f49dbf4569ca6e06c38d94de4e47f0fd4b8902655e560faeeeefcc055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:35 crc kubenswrapper[4821]: I0309 18:26:35.155612 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3270571a-a484-4e66-8035-f43509b58add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kk7gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:35 crc kubenswrapper[4821]: I0309 18:26:35.170312 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lw2hk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a255bc9-2034-4a34-8240-f1fd42e808bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f6723e2866800eed4c31077a7d2546d460878f6ddec4829d142929a98f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9r74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lw2hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:35 crc kubenswrapper[4821]: I0309 18:26:35.185739 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e896f92d-7d30-4f36-b892-5c8c9c792530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9tc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9tc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d6g54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:35 crc kubenswrapper[4821]: I0309 18:26:35.201412 4821 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:35 crc kubenswrapper[4821]: I0309 18:26:35.216340 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:35 crc kubenswrapper[4821]: I0309 18:26:35.228842 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:35 crc kubenswrapper[4821]: I0309 18:26:35.240012 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7db9fe72-f0df-4db6-9991-0384645cf456\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee773d19ae0091661f56157410437678fcb5f7213b187831146af99d8d76b555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://339c14ffd3fdc5b3377a73069f4ada2bbb0470002cb2adbe540e5c52449e7f5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:24:56Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0309 18:24:25.811998 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0309 18:24:25.817356 1 observer_polling.go:159] Starting file observer\\\\nI0309 18:24:25.865522 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0309 18:24:25.872931 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0309 18:24:56.217530 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:55Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e1fee3fa3c896bb4fd7fd76b19cbc801f2d24b463344591a55ed9c940f0d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f06cb76ad82efe140b94c5f83a654bdc44720a48914f1e5de02e29f6f39a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d992ce0b12a7e1c7641f515c07c7748e06cac1d81d3f351c6cf892f4c1ea78e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:35 crc kubenswrapper[4821]: I0309 18:26:35.266399 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dab0b54-0d0b-436b-a566-bfca5bd198ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://522c0dddb01996f686347f24c7bb98c6d809b87a931937d84836209b48cc6dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff295a80771fb28c01986f0ef5e0b866a69db058c2174861cb493f7dc11113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceaf482fe2e2b73cc1d174411b0df7990a9e2bb1f3eff9adef94086b3eab6d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a831c318b5ca35517b0616bc5c6cf15592ee2a4e251dab6cc7886c27c7dd71bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c5d8edd2b525ecd0be6ed468e278fabd39079ad9af5041d4be213f7d39072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2864c08259f2e7e2766cd5b6e26d546a6f8443728a1cf323015aab34612b8fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2864c08259f2e7e2766cd5b6e26d546a6f8443728a1cf323015aab34612b8fcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276f57582635b405b3eec7b3a61b1761dcc3228b77ea4d33c281f5542437391c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276f57582635b405b3eec7b3a61b1761dcc3228b77ea4d33c281f5542437391c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28ab53e73f777f3d3e8fcdff88e66f8351b27db87e8448cbf05d0e247beb3dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ab53e73f777f3d3e8fcdff88e66f8351b27db87e8448cbf05d0e247beb3dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:35 crc kubenswrapper[4821]: I0309 18:26:35.278910 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:35 crc kubenswrapper[4821]: I0309 18:26:35.285716 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9tvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b53a5b8b-3dab-4300-8b7b-c3df20eab3b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99m5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9tvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:35 crc kubenswrapper[4821]: I0309 18:26:35.296563 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mfdmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a085b570-506c-4b51-b0d1-4b9832e71c0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mfdmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:35 crc kubenswrapper[4821]: I0309 18:26:35.303926 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lf7bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac2c88b-a0bc-482c-90fa-165d30f045e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdrdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdrdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lf7bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:35 crc kubenswrapper[4821]: I0309 18:26:35.311613 4821 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7568663-6e12-4416-8458-876039975b66\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ff6183e0d0f727aec5f6dafa913ed13e8db75cc46091299fae8aec174666c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308c314653a096e47edca451fd6c393178d5eea9108d87675df44140c5e78be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9316753bcc7ca3bc13d940c02db82d3c087567b9eb93baeacaeec2c2b39833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b955f1ffab31936f17fcc78623216cfa4326b454caa973c491626fc7fce485ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b955f1ffab31936f17fcc78623216cfa4326b454caa973c491626fc7fce485ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:35 crc kubenswrapper[4821]: I0309 18:26:35.327347 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:35 crc kubenswrapper[4821]: I0309 18:26:35.337783 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:35 crc kubenswrapper[4821]: I0309 18:26:35.551251 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:26:35 crc kubenswrapper[4821]: I0309 18:26:35.551314 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:26:35 crc kubenswrapper[4821]: E0309 18:26:35.551526 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lf7bd" podUID="9ac2c88b-a0bc-482c-90fa-165d30f045e8" Mar 09 18:26:35 crc kubenswrapper[4821]: I0309 18:26:35.551620 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:26:35 crc kubenswrapper[4821]: I0309 18:26:35.551634 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:26:35 crc kubenswrapper[4821]: E0309 18:26:35.551764 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 09 18:26:35 crc kubenswrapper[4821]: E0309 18:26:35.551900 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 09 18:26:35 crc kubenswrapper[4821]: E0309 18:26:35.552046 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.083180 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-n9tvt" event={"ID":"b53a5b8b-3dab-4300-8b7b-c3df20eab3b7","Type":"ContainerStarted","Data":"b5babbacf1870b872a072153860076cf2bc0bf0d8d298e0741152236697a3587"} Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.094156 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" event={"ID":"3270571a-a484-4e66-8035-f43509b58add","Type":"ContainerStarted","Data":"ba92ff42b49f0e1542393701bf7c7544c69196a88aceed9afc6c3f654fd8c54e"} Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.094224 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" event={"ID":"3270571a-a484-4e66-8035-f43509b58add","Type":"ContainerStarted","Data":"d0a04c20f17e06f03335ee69aaf048806a74c7b9a2ff5530ba49284e7a12d777"} Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.096879 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-mfdmq" event={"ID":"a085b570-506c-4b51-b0d1-4b9832e71c0f","Type":"ContainerStarted","Data":"c751f970dffd2687fdc572fcc3a4805b92cbec5ba8652a5b8505b28e4ee0a16c"} Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.099224 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"408a33e34df19093da69a73d168cf21bdc6e6990bdc71a255473d65da3bc3e6f"} Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.105577 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55fc2290-6300-4f7d-98d7-8abdde521a83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://230962e0998443268c5544ff6cd84de9737ca5052b545fefaf936a65035eac76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:25:24Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0309 18:25:24.128997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0309 18:25:24.129108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0309 18:25:24.129664 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2492006033/tls.crt::/tmp/serving-cert-2492006033/tls.key\\\\\\\"\\\\nI0309 18:25:24.405897 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0309 18:25:24.410101 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0309 18:25:24.410116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0309 18:25:24.410139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0309 18:25:24.410148 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0309 18:25:24.421807 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0309 18:25:24.421854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421867 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0309 18:25:24.421885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0309 18:25:24.421866 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0309 18:25:24.421891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0309 18:25:24.421927 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0309 18:25:24.422998 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:26:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8
ffefde1a13fa65a1fa11860bac524\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.117965 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77f38666-2c08-47e4-8639-cd14ef9e6bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23eaea79e337ba5c55933a05ec5fd32a1fde011d35215cf356bd6d0948d310be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41197b9f49dbf4569ca6e06c38d94de4e47f0fd4b8902655e560faeeeefcc055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41197b9f49dbf4569ca6e06c38d94de4e47f0fd4b8902655e560faeeeefcc055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.131024 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3270571a-a484-4e66-8035-f43509b58add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kk7gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.145480 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84199f52-999d-4a44-91c7-a343ba59b10d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b9gd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.174906 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e368ce-5f0d-4208-a1de-67d4ab591f82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bfdsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.191699 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lw2hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a255bc9-2034-4a34-8240-f1fd42e808bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f6723e2866800eed4c31077a7d2546d460878f6ddec4829d142929a98f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18
:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9r74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lw2hk\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.203239 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e896f92d-7d30-4f36-b892-5c8c9c792530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services 
have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9tc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9tc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d6g54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.216240 4821 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7db9fe72-f0df-4db6-9991-0384645cf456\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee773d19ae0091661f56157410437678fcb5f7213b187831146af99d8d76b555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://339c14ffd3fdc5b3377a73069f4ada2bbb0470002cb2adbe540e5c52449e7f5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:24:56Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml 
--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0309 18:24:25.811998 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0309 18:24:25.817356 1 observer_polling.go:159] Starting file observer\\\\nI0309 18:24:25.865522 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0309 18:24:25.872931 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0309 18:24:56.217530 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:55Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e1fee3fa3c896bb4fd7fd76b19cbc801f2d24b463344591a55ed9c940f0d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f06cb76ad82efe140b94c5f83a654bdc44720a48914f1e5de02e29f6f39a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d992ce0b12a7e1c7641f515c07c7748e06cac1d81d3f351c6cf892f4c1ea78e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.238051 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dab0b54-0d0b-436b-a566-bfca5bd198ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://522c0dddb01996f686347f24c7bb98c6d809b87a931937d84836209b48cc6dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff295a80771fb28c01986f0ef5e0b866a69db058c2174861cb493f7dc11113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceaf482fe2e2b73cc1d174411b0df7990a9e2bb1f3eff9adef94086b3eab6d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a831c318b5ca35517b0616bc5c6cf15592ee2a4e251dab6cc7886c27c7dd71bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c5d8edd2b525ecd0be6ed468e278fabd39079ad9af5041d4be213f7d39072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2864c08259f2e7e2766cd5b6e26d546a6f8443728a1cf323015aab34612b8fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2864c08259f2e7e2766cd5b6e26d546a6f8443728a1cf323015aab34612b8fcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276f57582635b405b3eec7b3a61b1761dcc3228b77ea4d33c281f5542437391c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276f57582635b405b3eec7b3a61b1761dcc3228b77ea4d33c281f5542437391c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28ab53e73f777f3d3e8fcdff88e66f8351b27db87e8448cbf05d0e247beb3dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ab53e73f777f3d3e8fcdff88e66f8351b27db87e8448cbf05d0e247beb3dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.252406 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.262574 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.278014 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.287003 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.298866 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7568663-6e12-4416-8458-876039975b66\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ff6183e0d0f727aec5f6dafa913ed13e8db75cc46091299fae8aec174666c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308c314653a096e47edca451fd6c393178d5eea9108d87675df44140c5e78be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9316753bcc7ca3bc13d940c02db82d3c087567b9eb93baeacaeec2c2b39833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b955f1ffab31936f17fcc78623216cfa4326b454caa973c491626fc7fce485ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b955f1ffab31936f17fcc78623216cfa4326b454caa973c491626fc7fce485ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.310174 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.320114 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.330759 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9tvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b53a5b8b-3dab-4300-8b7b-c3df20eab3b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5babbacf1870b872a072153860076cf2bc0bf0d8d298e0741152236697a3587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99m5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9tvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.340591 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mfdmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a085b570-506c-4b51-b0d1-4b9832e71c0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mfdmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.350854 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lf7bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac2c88b-a0bc-482c-90fa-165d30f045e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdrdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdrdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lf7bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.361115 4821 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-lw2hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a255bc9-2034-4a34-8240-f1fd42e808bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f6723e2866800eed4c31077a7d2546d460878f6ddec4829d142929a98f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9r74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lw2hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.372202 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e896f92d-7d30-4f36-b892-5c8c9c792530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9tc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9tc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d6g54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.386132 4821 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7db9fe72-f0df-4db6-9991-0384645cf456\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee773d19ae0091661f56157410437678fcb5f7213b187831146af99d8d76b555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://339c14ffd3fdc5b3377a73069f4ada2bbb0470002cb2adbe540e5c52449e7f5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:24:56Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml 
--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0309 18:24:25.811998 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0309 18:24:25.817356 1 observer_polling.go:159] Starting file observer\\\\nI0309 18:24:25.865522 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0309 18:24:25.872931 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0309 18:24:56.217530 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:55Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e1fee3fa3c896bb4fd7fd76b19cbc801f2d24b463344591a55ed9c940f0d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f06cb76ad82efe140b94c5f83a654bdc44720a48914f1e5de02e29f6f39a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d992ce0b12a7e1c7641f515c07c7748e06cac1d81d3f351c6cf892f4c1ea78e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.414578 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dab0b54-0d0b-436b-a566-bfca5bd198ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://522c0dddb01996f686347f24c7bb98c6d809b87a931937d84836209b48cc6dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff295a80771fb28c01986f0ef5e0b866a69db058c2174861cb493f7dc11113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceaf482fe2e2b73cc1d174411b0df7990a9e2bb1f3eff9adef94086b3eab6d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a831c318b5ca35517b0616bc5c6cf15592ee2a4e251dab6cc7886c27c7dd71bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c5d8edd2b525ecd0be6ed468e278fabd39079ad9af5041d4be213f7d39072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2864c08259f2e7e2766cd5b6e26d546a6f8443728a1cf323015aab34612b8fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2864c08259f2e7e2766cd5b6e26d546a6f8443728a1cf323015aab34612b8fcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276f57582635b405b3eec7b3a61b1761dcc3228b77ea4d33c281f5542437391c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276f57582635b405b3eec7b3a61b1761dcc3228b77ea4d33c281f5542437391c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28ab53e73f777f3d3e8fcdff88e66f8351b27db87e8448cbf05d0e247beb3dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ab53e73f777f3d3e8fcdff88e66f8351b27db87e8448cbf05d0e247beb3dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.430970 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408a33e34df19093da69a73d168cf21bdc6e6990bdc71a255473d65da3bc3e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-0
3-09T18:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.445536 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.459449 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.470438 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.485656 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7568663-6e12-4416-8458-876039975b66\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ff6183e0d0f727aec5f6dafa913ed13e8db75cc46091299fae8aec174666c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308c314653a096e47edca451fd6c393178d5eea9108d87675df44140c5e78be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9316753bcc7ca3bc13d940c02db82d3c087567b9eb93baeacaeec2c2b39833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b955f1ffab31936f17fcc78623216cfa4326b454caa973c491626fc7fce485ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b955f1ffab31936f17fcc78623216cfa4326b454caa973c491626fc7fce485ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.499565 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.516055 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.527968 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9tvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b53a5b8b-3dab-4300-8b7b-c3df20eab3b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5babbacf1870b872a072153860076cf2bc0bf0d8d298e0741152236697a3587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99m5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9tvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.538098 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mfdmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a085b570-506c-4b51-b0d1-4b9832e71c0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c751f970dffd2687fdc572fcc3a4805b92cbec5ba8652a5b8505b28e4ee0a16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mfdmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.548169 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lf7bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac2c88b-a0bc-482c-90fa-165d30f045e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdrdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdrdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lf7bd\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.551069 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.551086 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.551120 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.551150 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:26:37 crc kubenswrapper[4821]: E0309 18:26:37.551155 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 09 18:26:37 crc kubenswrapper[4821]: E0309 18:26:37.551260 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 09 18:26:37 crc kubenswrapper[4821]: E0309 18:26:37.551286 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lf7bd" podUID="9ac2c88b-a0bc-482c-90fa-165d30f045e8" Mar 09 18:26:37 crc kubenswrapper[4821]: E0309 18:26:37.551353 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.568544 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55fc2290-6300-4f7d-98d7-8abdde521a83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://230962e0998443268c5544ff6cd84de9737ca5052b545fefaf936a65035eac76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:25:24Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0309 18:25:24.128997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0309 18:25:24.129108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0309 18:25:24.129664 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2492006033/tls.crt::/tmp/serving-cert-2492006033/tls.key\\\\\\\"\\\\nI0309 18:25:24.405897 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0309 18:25:24.410101 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0309 18:25:24.410116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0309 18:25:24.410139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0309 18:25:24.410148 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0309 18:25:24.421807 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0309 18:25:24.421854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421867 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0309 18:25:24.421885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0309 18:25:24.421866 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0309 18:25:24.421891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0309 18:25:24.421927 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0309 18:25:24.422998 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:26:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8
ffefde1a13fa65a1fa11860bac524\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.576425 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77f38666-2c08-47e4-8639-cd14ef9e6bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23eaea79e337ba5c55933a05ec5fd32a1fde011d35215cf356bd6d0948d310be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41197b9f49dbf4569ca6e06c38d94de4e47f0fd4b8902655e560faeeeefcc055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41197b9f49dbf4569ca6e06c38d94de4e47f0fd4b8902655e560faeeeefcc055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.589618 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3270571a-a484-4e66-8035-f43509b58add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba92ff42b49f0e1542393701bf7c7544c69196a88aceed9afc6c3f654fd8c54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/ku
be-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a04c20f17e06f03335ee69aaf048806a74c7b9a2ff5530ba49284e7a12d777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kk7gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.603413 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84199f52-999d-4a44-91c7-a343ba59b10d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b9gd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:37 crc kubenswrapper[4821]: I0309 18:26:37.621087 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e368ce-5f0d-4208-a1de-67d4ab591f82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bfdsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:38 crc kubenswrapper[4821]: I0309 18:26:38.103747 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b0a44673ff624743258784737c959343f8666b57a532b089b03bf614a16cd3e1"} Mar 09 18:26:38 crc kubenswrapper[4821]: I0309 18:26:38.120210 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lw2hk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a255bc9-2034-4a34-8240-f1fd42e808bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f6723e2866800eed4c31077a7d2546d460878f6ddec4829d142929a98f03b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:26:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z9r74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lw2hk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:38 crc kubenswrapper[4821]: I0309 18:26:38.133282 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e896f92d-7d30-4f36-b892-5c8c9c792530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9tc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n9tc7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-d6g54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:38 crc kubenswrapper[4821]: I0309 18:26:38.149335 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:38 crc kubenswrapper[4821]: I0309 18:26:38.161656 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:38 crc kubenswrapper[4821]: I0309 18:26:38.174190 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a44673ff624743258784737c959343f8666b57a532b089b03bf614a16cd3e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:26:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 
18:26:38 crc kubenswrapper[4821]: I0309 18:26:38.192427 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7db9fe72-f0df-4db6-9991-0384645cf456\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee773d19ae0091661f56157410437678fcb5f7213b187831146af99d8d76b555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://339c14ffd3fdc5b3377a73069f4ada2bbb0470002cb2adbe540e5c52449e7f5e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:24:56Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0309 18:24:25.811998 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0309 18:24:25.817356 1 observer_polling.go:159] Starting file observer\\\\nI0309 18:24:25.865522 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0309 18:24:25.872931 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0309 18:24:56.217530 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:24:55Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36e1fee3fa3c896bb4fd7fd76b19cbc801f2d24b463344591a55ed9c940f0d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f06cb76ad82efe140b94c5f83a654bdc44720a48914f1e5de02e29f6f39a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d992ce0b12a7e1c7641f515c07c7748e06cac1d81d3f351c6cf892f4c1ea78e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:38 crc kubenswrapper[4821]: I0309 18:26:38.214069 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dab0b54-0d0b-436b-a566-bfca5bd198ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://522c0dddb01996f686347f24c7bb98c6d809b87a931937d84836209b48cc6dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ff295a80771fb28c01986f0ef5e0b866a69db058c2174861cb493f7dc11113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceaf482fe2e2b73cc1d174411b0df7990a9e2bb1f3eff9adef94086b3eab6d27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a831c318b5ca35517b0616bc5c6cf15592ee2a4e251dab6cc7886c27c7dd71bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e8c5d8edd2b525ecd0be6ed468e278fabd39079ad9af5041d4be213f7d39072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2864c08259f2e7e2766cd5b6e26d546a6f8443728a1cf323015aab34612b8fcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2864c08259f2e7e2766cd5b6e26d546a6f8443728a1cf323015aab34612b8fcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://276f57582635b405b3eec7b3a61b1761dcc3228b77ea4d33c281f5542437391c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276f57582635b405b3eec7b3a61b1761dcc3228b77ea4d33c281f5542437391c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://28ab53e73f777f3d3e8fcdff88e66f8351b27db87e8448cbf05d0e247beb3dda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ab53e73f777f3d3e8fcdff88e66f8351b27db87e8448cbf05d0e247beb3dda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:38 crc kubenswrapper[4821]: I0309 18:26:38.228902 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408a33e34df19093da69a73d168cf21bdc6e6990bdc71a255473d65da3bc3e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-0
3-09T18:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:38 crc kubenswrapper[4821]: I0309 18:26:38.240131 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9tvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b53a5b8b-3dab-4300-8b7b-c3df20eab3b7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5babbacf1870b872a072153860076cf2bc0bf0d8d298
e0741152236697a3587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99m5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9tvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:38 crc kubenswrapper[4821]: I0309 18:26:38.251255 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mfdmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a085b570-506c-4b51-b0d1-4b9832e71c0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c751f970dffd2687fdc572fcc3a4805b92cbec5ba8652a5b8505b28e4ee0a16c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mzff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mfdmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:38 crc kubenswrapper[4821]: I0309 18:26:38.262478 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lf7bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ac2c88b-a0bc-482c-90fa-165d30f045e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdrdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mdrdp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:26:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lf7bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:38 crc kubenswrapper[4821]: I0309 18:26:38.272194 4821 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7568663-6e12-4416-8458-876039975b66\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ff6183e0d0f727aec5f6dafa913ed13e8db75cc46091299fae8aec174666c91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308c314653a096e47edca451fd6c393178d5eea9108d87675df44140c5e78be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9316753bcc7ca3bc13d940c02db82d3c087567b9eb93baeacaeec2c2b39833\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b955f1ffab31936f17fcc78623216cfa4326b454caa973c491626fc7fce485ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b955f1ffab31936f17fcc78623216cfa4326b454caa973c491626fc7fce485ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:38 crc kubenswrapper[4821]: I0309 18:26:38.288109 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:38 crc kubenswrapper[4821]: I0309 18:26:38.305184 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:38 crc kubenswrapper[4821]: I0309 18:26:38.325843 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84199f52-999d-4a44-91c7-a343ba59b10d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4xxjq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-b9gd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:38 crc kubenswrapper[4821]: I0309 18:26:38.346376 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e368ce-5f0d-4208-a1de-67d4ab591f82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c9kmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bfdsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:38 crc kubenswrapper[4821]: I0309 18:26:38.357828 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55fc2290-6300-4f7d-98d7-8abdde521a83\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://230962e0998443268c5544ff6cd84de9737ca5052b545fefaf936a65035eac76\\\",\\\"image\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-09T18:25:24Z\\\",\\\"message\\\":\\\"le observer\\\\nW0309 18:25:24.128997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0309 18:25:24.129108 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0309 18:25:24.129664 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2492006033/tls.crt::/tmp/serving-cert-2492006033/tls.key\\\\\\\"\\\\nI0309 18:25:24.405897 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0309 18:25:24.410101 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0309 18:25:24.410116 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0309 18:25:24.410139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0309 18:25:24.410148 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0309 18:25:24.421807 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0309 18:25:24.421854 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421867 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0309 18:25:24.421878 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0309 18:25:24.421885 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0309 18:25:24.421866 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0309 18:25:24.421891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0309 18:25:24.421927 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0309 18:25:24.422998 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-09T18:25:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:26:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:38 crc kubenswrapper[4821]: I0309 18:26:38.365200 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"77f38666-2c08-47e4-8639-cd14ef9e6bf7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:24:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23eaea79e337ba5c55933a05ec5fd32a1fde011d35215cf356bd6d0948d310be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:24:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41197b9f49dbf4569ca6e06c38d94de4e47f0fd4b8902655e560faeeeefcc055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41197b9f49dbf4569ca6e06c38d94de4e47f0fd4b8902655e560faeeeefcc055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-09T18:24:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-09T18:24:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:24:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:38 crc kubenswrapper[4821]: I0309 18:26:38.374967 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3270571a-a484-4e66-8035-f43509b58add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-09T18:26:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba92ff42b49f0e1542393701bf7c7544c69196a88aceed9afc6c3f654fd8c54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a04c20f17e06f03335ee69aaf048806a74c7b9
a2ff5530ba49284e7a12d777\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-09T18:26:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jqk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-09T18:25:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kk7gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 09 18:26:38 crc kubenswrapper[4821]: E0309 18:26:38.650415 4821 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 09 18:26:39 crc kubenswrapper[4821]: I0309 18:26:39.109370 4821 generic.go:334] "Generic (PLEG): container finished" podID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerID="20d8ff057988af84aec7c1a1ab5e994a2e4e24a6e41ec94dfe473e323330bf1e" exitCode=0
Mar 09 18:26:39 crc kubenswrapper[4821]: I0309 18:26:39.109437 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" event={"ID":"40e368ce-5f0d-4208-a1de-67d4ab591f82","Type":"ContainerDied","Data":"20d8ff057988af84aec7c1a1ab5e994a2e4e24a6e41ec94dfe473e323330bf1e"}
Mar 09 18:26:39 crc kubenswrapper[4821]: I0309 18:26:39.112169 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"32a23354fefd656cd00a4d4642187125832562ca980c68ec6334a5b21266b0dc"}
Mar 09 18:26:39 crc kubenswrapper[4821]: I0309 18:26:39.112243 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"30915e4b767dc4e1d2dd432c5675a6cc722ee3ff152d8ff1e5014e0e1cec0074"}
Mar 09 18:26:39 crc kubenswrapper[4821]: I0309 18:26:39.131857 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:26:39Z is after 2025-08-24T17:21:41Z" Mar 09 18:26:39 crc kubenswrapper[4821]: I0309 18:26:39.152816 4821 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-09T18:25:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-09T18:26:39Z is after 2025-08-24T17:21:41Z"
Mar 09 18:26:39 crc kubenswrapper[4821]: I0309 18:26:39.194271 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-n9tvt" podStartSLOduration=79.194247439 podStartE2EDuration="1m19.194247439s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:26:39.177539742 +0000 UTC m=+136.338915628" watchObservedRunningTime="2026-03-09 18:26:39.194247439 +0000 UTC m=+136.355623305"
Mar 09 18:26:39 crc kubenswrapper[4821]: I0309 18:26:39.208047 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-mfdmq" podStartSLOduration=79.20802258 podStartE2EDuration="1m19.20802258s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:26:39.194802655 +0000 UTC m=+136.356178531" watchObservedRunningTime="2026-03-09 18:26:39.20802258 +0000 UTC m=+136.369398446"
Mar 09 18:26:39 crc kubenswrapper[4821]: I0309 18:26:39.224260 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=11.224229492 podStartE2EDuration="11.224229492s" podCreationTimestamp="2026-03-09 18:26:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:26:39.22381096 +0000 UTC m=+136.385186826" watchObservedRunningTime="2026-03-09 18:26:39.224229492 +0000 UTC m=+136.385605398"
Mar 09 18:26:39 crc kubenswrapper[4821]: I0309 18:26:39.237010 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=6.236979453 podStartE2EDuration="6.236979453s" podCreationTimestamp="2026-03-09 18:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:26:39.236522159 +0000 UTC m=+136.397898025" watchObservedRunningTime="2026-03-09 18:26:39.236979453 +0000 UTC m=+136.398355359"
Mar 09 18:26:39 crc kubenswrapper[4821]: I0309 18:26:39.252446 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podStartSLOduration=79.252422772 podStartE2EDuration="1m19.252422772s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:26:39.252248647 +0000 UTC m=+136.413624543" watchObservedRunningTime="2026-03-09 18:26:39.252422772 +0000 UTC m=+136.413798648"
Mar 09 18:26:39 crc kubenswrapper[4821]: I0309 18:26:39.325251 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=56.325226071 podStartE2EDuration="56.325226071s" podCreationTimestamp="2026-03-09 18:25:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:26:39.323716137 +0000 UTC m=+136.485092003" watchObservedRunningTime="2026-03-09 18:26:39.325226071 +0000 UTC m=+136.486601937"
Mar 09 18:26:39 crc kubenswrapper[4821]: I0309 18:26:39.362848 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-lw2hk" podStartSLOduration=79.362827645 podStartE2EDuration="1m19.362827645s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:26:39.362133565 +0000 UTC m=+136.523509441" watchObservedRunningTime="2026-03-09 18:26:39.362827645 +0000 UTC m=+136.524203501"
Mar 09 18:26:39 crc kubenswrapper[4821]: I0309 18:26:39.395131 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=27.395111345 podStartE2EDuration="27.395111345s" podCreationTimestamp="2026-03-09 18:26:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:26:39.394380414 +0000 UTC m=+136.555756290" watchObservedRunningTime="2026-03-09 18:26:39.395111345 +0000 UTC m=+136.556487201"
Mar 09 18:26:39 crc kubenswrapper[4821]: I0309 18:26:39.470481 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=22.470466179 podStartE2EDuration="22.470466179s" podCreationTimestamp="2026-03-09 18:26:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:26:39.470215522 +0000 UTC m=+136.631591398" watchObservedRunningTime="2026-03-09 18:26:39.470466179 +0000 UTC m=+136.631842035"
Mar 09 18:26:39 crc kubenswrapper[4821]: I0309 18:26:39.551267 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 09 18:26:39 crc kubenswrapper[4821]: I0309 18:26:39.551339 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lf7bd"
Mar 09 18:26:39 crc kubenswrapper[4821]: I0309 18:26:39.551283 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 09 18:26:39 crc kubenswrapper[4821]: I0309 18:26:39.551272 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 09 18:26:39 crc kubenswrapper[4821]: E0309 18:26:39.551484 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 09 18:26:39 crc kubenswrapper[4821]: E0309 18:26:39.551566 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 09 18:26:39 crc kubenswrapper[4821]: E0309 18:26:39.551658 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lf7bd" podUID="9ac2c88b-a0bc-482c-90fa-165d30f045e8"
Mar 09 18:26:39 crc kubenswrapper[4821]: E0309 18:26:39.551738 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 09 18:26:40 crc kubenswrapper[4821]: I0309 18:26:40.120542 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" event={"ID":"40e368ce-5f0d-4208-a1de-67d4ab591f82","Type":"ContainerStarted","Data":"e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3"}
Mar 09 18:26:40 crc kubenswrapper[4821]: I0309 18:26:40.121035 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" event={"ID":"40e368ce-5f0d-4208-a1de-67d4ab591f82","Type":"ContainerStarted","Data":"30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0"}
Mar 09 18:26:40 crc kubenswrapper[4821]: I0309 18:26:40.121054 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" event={"ID":"40e368ce-5f0d-4208-a1de-67d4ab591f82","Type":"ContainerStarted","Data":"9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58"}
Mar 09 18:26:40 crc kubenswrapper[4821]: I0309 18:26:40.121086 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" event={"ID":"40e368ce-5f0d-4208-a1de-67d4ab591f82","Type":"ContainerStarted","Data":"ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd"}
Mar 09 18:26:40 crc kubenswrapper[4821]: I0309 18:26:40.121100 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" event={"ID":"40e368ce-5f0d-4208-a1de-67d4ab591f82","Type":"ContainerStarted","Data":"ea4b545a78227c9c417277cfca602a8f3eb3cbaf07ecf5dc7e63a929002e3c60"}
Mar 09 18:26:40 crc kubenswrapper[4821]: I0309 18:26:40.121115 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" event={"ID":"40e368ce-5f0d-4208-a1de-67d4ab591f82","Type":"ContainerStarted","Data":"279cdd1d6e3c8f599973312feec821775972886507788699cdda7c93f42ffb56"}
Mar 09 18:26:41 crc kubenswrapper[4821]: I0309 18:26:41.551122 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 09 18:26:41 crc kubenswrapper[4821]: I0309 18:26:41.551315 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 09 18:26:41 crc kubenswrapper[4821]: I0309 18:26:41.551459 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 09 18:26:41 crc kubenswrapper[4821]: E0309 18:26:41.551454 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 09 18:26:41 crc kubenswrapper[4821]: E0309 18:26:41.551639 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 09 18:26:41 crc kubenswrapper[4821]: I0309 18:26:41.551717 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lf7bd"
Mar 09 18:26:41 crc kubenswrapper[4821]: E0309 18:26:41.551912 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lf7bd" podUID="9ac2c88b-a0bc-482c-90fa-165d30f045e8"
Mar 09 18:26:41 crc kubenswrapper[4821]: E0309 18:26:41.552415 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 09 18:26:42 crc kubenswrapper[4821]: I0309 18:26:42.131406 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54" event={"ID":"e896f92d-7d30-4f36-b892-5c8c9c792530","Type":"ContainerStarted","Data":"c6e7283b8a8f602250e8d48cd8297fa083b4cd93c9ead1fd2819df4897231ffc"}
Mar 09 18:26:42 crc kubenswrapper[4821]: I0309 18:26:42.131476 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54" event={"ID":"e896f92d-7d30-4f36-b892-5c8c9c792530","Type":"ContainerStarted","Data":"bfb2f3af6574142c13631bd4a20945aab626df7163c81fe58ee5b43fc1ca978a"}
Mar 09 18:26:42 crc kubenswrapper[4821]: I0309 18:26:42.136763 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" event={"ID":"40e368ce-5f0d-4208-a1de-67d4ab591f82","Type":"ContainerStarted","Data":"5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b"}
Mar 09 18:26:42 crc kubenswrapper[4821]: I0309 18:26:42.755827 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 09 18:26:42 crc kubenswrapper[4821]: I0309 18:26:42.756105 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 09 18:26:42 crc kubenswrapper[4821]: I0309 18:26:42.756114 4821 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 09 18:26:42 crc kubenswrapper[4821]: I0309 18:26:42.756129 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 09 18:26:42 crc kubenswrapper[4821]: I0309 18:26:42.756139 4821 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-09T18:26:42Z","lastTransitionTime":"2026-03-09T18:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 09 18:26:42 crc kubenswrapper[4821]: I0309 18:26:42.831717 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-qnxgv"] Mar 09 18:26:42 crc kubenswrapper[4821]: I0309 18:26:42.832067 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qnxgv" Mar 09 18:26:42 crc kubenswrapper[4821]: I0309 18:26:42.833811 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 09 18:26:42 crc kubenswrapper[4821]: I0309 18:26:42.834278 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 09 18:26:42 crc kubenswrapper[4821]: I0309 18:26:42.834278 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 09 18:26:42 crc kubenswrapper[4821]: I0309 18:26:42.835766 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 09 18:26:42 crc kubenswrapper[4821]: I0309 18:26:42.896162 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/489cb5f6-bb44-4678-80cb-e399f23658bc-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qnxgv\" (UID: \"489cb5f6-bb44-4678-80cb-e399f23658bc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qnxgv" Mar 09 18:26:42 crc kubenswrapper[4821]: I0309 18:26:42.896210 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/489cb5f6-bb44-4678-80cb-e399f23658bc-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qnxgv\" (UID: \"489cb5f6-bb44-4678-80cb-e399f23658bc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qnxgv" Mar 09 18:26:42 crc kubenswrapper[4821]: I0309 18:26:42.896246 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/489cb5f6-bb44-4678-80cb-e399f23658bc-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qnxgv\" (UID: \"489cb5f6-bb44-4678-80cb-e399f23658bc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qnxgv" Mar 09 18:26:42 crc kubenswrapper[4821]: I0309 18:26:42.896374 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/489cb5f6-bb44-4678-80cb-e399f23658bc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qnxgv\" (UID: \"489cb5f6-bb44-4678-80cb-e399f23658bc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qnxgv" Mar 09 18:26:42 crc kubenswrapper[4821]: I0309 18:26:42.896474 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/489cb5f6-bb44-4678-80cb-e399f23658bc-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qnxgv\" (UID: \"489cb5f6-bb44-4678-80cb-e399f23658bc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qnxgv" Mar 09 18:26:42 crc kubenswrapper[4821]: I0309 18:26:42.997817 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/489cb5f6-bb44-4678-80cb-e399f23658bc-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qnxgv\" (UID: \"489cb5f6-bb44-4678-80cb-e399f23658bc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qnxgv" Mar 09 18:26:42 crc kubenswrapper[4821]: I0309 18:26:42.997905 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/489cb5f6-bb44-4678-80cb-e399f23658bc-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qnxgv\" (UID: \"489cb5f6-bb44-4678-80cb-e399f23658bc\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qnxgv" Mar 09 18:26:42 crc kubenswrapper[4821]: I0309 18:26:42.997958 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/489cb5f6-bb44-4678-80cb-e399f23658bc-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qnxgv\" (UID: \"489cb5f6-bb44-4678-80cb-e399f23658bc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qnxgv" Mar 09 18:26:42 crc kubenswrapper[4821]: I0309 18:26:42.997997 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/489cb5f6-bb44-4678-80cb-e399f23658bc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qnxgv\" (UID: \"489cb5f6-bb44-4678-80cb-e399f23658bc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qnxgv" Mar 09 18:26:42 crc kubenswrapper[4821]: I0309 18:26:42.998053 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/489cb5f6-bb44-4678-80cb-e399f23658bc-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qnxgv\" (UID: \"489cb5f6-bb44-4678-80cb-e399f23658bc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qnxgv" Mar 09 18:26:42 crc kubenswrapper[4821]: I0309 18:26:42.998145 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/489cb5f6-bb44-4678-80cb-e399f23658bc-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qnxgv\" (UID: \"489cb5f6-bb44-4678-80cb-e399f23658bc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qnxgv" Mar 09 18:26:42 crc kubenswrapper[4821]: I0309 18:26:42.998265 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/489cb5f6-bb44-4678-80cb-e399f23658bc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qnxgv\" (UID: \"489cb5f6-bb44-4678-80cb-e399f23658bc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qnxgv" Mar 09 18:26:42 crc kubenswrapper[4821]: I0309 18:26:42.998878 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/489cb5f6-bb44-4678-80cb-e399f23658bc-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qnxgv\" (UID: \"489cb5f6-bb44-4678-80cb-e399f23658bc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qnxgv" Mar 09 18:26:43 crc kubenswrapper[4821]: I0309 18:26:43.007445 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/489cb5f6-bb44-4678-80cb-e399f23658bc-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qnxgv\" (UID: \"489cb5f6-bb44-4678-80cb-e399f23658bc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qnxgv" Mar 09 18:26:43 crc kubenswrapper[4821]: I0309 18:26:43.016305 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/489cb5f6-bb44-4678-80cb-e399f23658bc-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qnxgv\" (UID: \"489cb5f6-bb44-4678-80cb-e399f23658bc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qnxgv" Mar 09 18:26:43 crc kubenswrapper[4821]: I0309 18:26:43.143027 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" event={"ID":"84199f52-999d-4a44-91c7-a343ba59b10d","Type":"ContainerStarted","Data":"dd38ee0c7ed1fcf0cc93d002f146ebe093b6b5cafa12c741304400ddae004c64"} Mar 09 18:26:43 crc kubenswrapper[4821]: I0309 18:26:43.162020 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-d6g54" podStartSLOduration=83.161992027 podStartE2EDuration="1m23.161992027s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:26:43.160400521 +0000 UTC m=+140.321776387" watchObservedRunningTime="2026-03-09 18:26:43.161992027 +0000 UTC m=+140.323367923" Mar 09 18:26:43 crc kubenswrapper[4821]: I0309 18:26:43.244519 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qnxgv" Mar 09 18:26:43 crc kubenswrapper[4821]: W0309 18:26:43.266633 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod489cb5f6_bb44_4678_80cb_e399f23658bc.slice/crio-d04ff3255369658d0caa9e1e7a89eaf07aa29956c1ceeddd4f6d4a9b09a61e4e WatchSource:0}: Error finding container d04ff3255369658d0caa9e1e7a89eaf07aa29956c1ceeddd4f6d4a9b09a61e4e: Status 404 returned error can't find the container with id d04ff3255369658d0caa9e1e7a89eaf07aa29956c1ceeddd4f6d4a9b09a61e4e Mar 09 18:26:43 crc kubenswrapper[4821]: I0309 18:26:43.551076 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:26:43 crc kubenswrapper[4821]: I0309 18:26:43.551091 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:26:43 crc kubenswrapper[4821]: I0309 18:26:43.551146 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:26:43 crc kubenswrapper[4821]: I0309 18:26:43.551523 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:26:43 crc kubenswrapper[4821]: E0309 18:26:43.553377 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 09 18:26:43 crc kubenswrapper[4821]: E0309 18:26:43.553551 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lf7bd" podUID="9ac2c88b-a0bc-482c-90fa-165d30f045e8" Mar 09 18:26:43 crc kubenswrapper[4821]: E0309 18:26:43.553622 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 09 18:26:43 crc kubenswrapper[4821]: E0309 18:26:43.553663 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 09 18:26:43 crc kubenswrapper[4821]: I0309 18:26:43.593521 4821 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 09 18:26:43 crc kubenswrapper[4821]: I0309 18:26:43.602494 4821 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 09 18:26:43 crc kubenswrapper[4821]: E0309 18:26:43.651461 4821 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 09 18:26:44 crc kubenswrapper[4821]: I0309 18:26:44.150060 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qnxgv" event={"ID":"489cb5f6-bb44-4678-80cb-e399f23658bc","Type":"ContainerStarted","Data":"cc989060a9641acd11c7e893ec65a3948ca6076cff4b6df3c5e325ce05adcf8a"} Mar 09 18:26:44 crc kubenswrapper[4821]: I0309 18:26:44.150130 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qnxgv" event={"ID":"489cb5f6-bb44-4678-80cb-e399f23658bc","Type":"ContainerStarted","Data":"d04ff3255369658d0caa9e1e7a89eaf07aa29956c1ceeddd4f6d4a9b09a61e4e"} Mar 09 18:26:44 crc kubenswrapper[4821]: I0309 18:26:44.157161 4821 generic.go:334] "Generic (PLEG): container finished" podID="84199f52-999d-4a44-91c7-a343ba59b10d" containerID="dd38ee0c7ed1fcf0cc93d002f146ebe093b6b5cafa12c741304400ddae004c64" exitCode=0 Mar 09 18:26:44 crc kubenswrapper[4821]: I0309 18:26:44.157252 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" 
event={"ID":"84199f52-999d-4a44-91c7-a343ba59b10d","Type":"ContainerDied","Data":"dd38ee0c7ed1fcf0cc93d002f146ebe093b6b5cafa12c741304400ddae004c64"} Mar 09 18:26:44 crc kubenswrapper[4821]: I0309 18:26:44.212597 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ac2c88b-a0bc-482c-90fa-165d30f045e8-metrics-certs\") pod \"network-metrics-daemon-lf7bd\" (UID: \"9ac2c88b-a0bc-482c-90fa-165d30f045e8\") " pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:26:44 crc kubenswrapper[4821]: E0309 18:26:44.214128 4821 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 09 18:26:44 crc kubenswrapper[4821]: E0309 18:26:44.214212 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ac2c88b-a0bc-482c-90fa-165d30f045e8-metrics-certs podName:9ac2c88b-a0bc-482c-90fa-165d30f045e8 nodeName:}" failed. No retries permitted until 2026-03-09 18:27:16.214188623 +0000 UTC m=+173.375564519 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9ac2c88b-a0bc-482c-90fa-165d30f045e8-metrics-certs") pod "network-metrics-daemon-lf7bd" (UID: "9ac2c88b-a0bc-482c-90fa-165d30f045e8") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 09 18:26:44 crc kubenswrapper[4821]: I0309 18:26:44.218376 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qnxgv" podStartSLOduration=84.218353124 podStartE2EDuration="1m24.218353124s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:26:44.176416044 +0000 UTC m=+141.337791950" watchObservedRunningTime="2026-03-09 18:26:44.218353124 +0000 UTC m=+141.379728990" Mar 09 18:26:45 crc kubenswrapper[4821]: I0309 18:26:45.167598 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" event={"ID":"40e368ce-5f0d-4208-a1de-67d4ab591f82","Type":"ContainerStarted","Data":"71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b"} Mar 09 18:26:45 crc kubenswrapper[4821]: I0309 18:26:45.167693 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:45 crc kubenswrapper[4821]: I0309 18:26:45.167711 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:45 crc kubenswrapper[4821]: I0309 18:26:45.167784 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:45 crc kubenswrapper[4821]: I0309 18:26:45.170086 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" 
event={"ID":"84199f52-999d-4a44-91c7-a343ba59b10d","Type":"ContainerStarted","Data":"e3b0f87a940d0bcfd5dcaf882b75a6df276924f74cebca9c4a74a4f437a3cb23"} Mar 09 18:26:45 crc kubenswrapper[4821]: I0309 18:26:45.214652 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" podStartSLOduration=85.214616952 podStartE2EDuration="1m25.214616952s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:26:45.209188264 +0000 UTC m=+142.370564120" watchObservedRunningTime="2026-03-09 18:26:45.214616952 +0000 UTC m=+142.375992888" Mar 09 18:26:45 crc kubenswrapper[4821]: I0309 18:26:45.224299 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:45 crc kubenswrapper[4821]: I0309 18:26:45.224826 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:26:45 crc kubenswrapper[4821]: I0309 18:26:45.553511 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:26:45 crc kubenswrapper[4821]: I0309 18:26:45.553615 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:26:45 crc kubenswrapper[4821]: I0309 18:26:45.553668 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:26:45 crc kubenswrapper[4821]: E0309 18:26:45.554243 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 09 18:26:45 crc kubenswrapper[4821]: E0309 18:26:45.554059 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 09 18:26:45 crc kubenswrapper[4821]: I0309 18:26:45.553690 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:26:45 crc kubenswrapper[4821]: E0309 18:26:45.554462 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lf7bd" podUID="9ac2c88b-a0bc-482c-90fa-165d30f045e8" Mar 09 18:26:45 crc kubenswrapper[4821]: E0309 18:26:45.554561 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 09 18:26:46 crc kubenswrapper[4821]: I0309 18:26:46.174653 4821 generic.go:334] "Generic (PLEG): container finished" podID="84199f52-999d-4a44-91c7-a343ba59b10d" containerID="e3b0f87a940d0bcfd5dcaf882b75a6df276924f74cebca9c4a74a4f437a3cb23" exitCode=0 Mar 09 18:26:46 crc kubenswrapper[4821]: I0309 18:26:46.174764 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" event={"ID":"84199f52-999d-4a44-91c7-a343ba59b10d","Type":"ContainerDied","Data":"e3b0f87a940d0bcfd5dcaf882b75a6df276924f74cebca9c4a74a4f437a3cb23"} Mar 09 18:26:46 crc kubenswrapper[4821]: I0309 18:26:46.546818 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lf7bd"] Mar 09 18:26:46 crc kubenswrapper[4821]: I0309 18:26:46.546913 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:26:46 crc kubenswrapper[4821]: E0309 18:26:46.546996 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lf7bd" podUID="9ac2c88b-a0bc-482c-90fa-165d30f045e8" Mar 09 18:26:47 crc kubenswrapper[4821]: I0309 18:26:47.179063 4821 generic.go:334] "Generic (PLEG): container finished" podID="84199f52-999d-4a44-91c7-a343ba59b10d" containerID="6bd2deb273a833e36b442067077d576ad550712e48efff02224fe5bc9b79bc3a" exitCode=0 Mar 09 18:26:47 crc kubenswrapper[4821]: I0309 18:26:47.179146 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" event={"ID":"84199f52-999d-4a44-91c7-a343ba59b10d","Type":"ContainerDied","Data":"6bd2deb273a833e36b442067077d576ad550712e48efff02224fe5bc9b79bc3a"} Mar 09 18:26:47 crc kubenswrapper[4821]: I0309 18:26:47.449502 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:26:47 crc kubenswrapper[4821]: E0309 18:26:47.449782 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:51.449762169 +0000 UTC m=+208.611138045 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:26:47 crc kubenswrapper[4821]: I0309 18:26:47.550920 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:26:47 crc kubenswrapper[4821]: I0309 18:26:47.551256 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:26:47 crc kubenswrapper[4821]: I0309 18:26:47.550995 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:26:47 crc kubenswrapper[4821]: I0309 18:26:47.551374 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:26:47 crc kubenswrapper[4821]: E0309 18:26:47.551476 4821 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 09 18:26:47 crc kubenswrapper[4821]: I0309 18:26:47.551526 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:26:47 crc kubenswrapper[4821]: I0309 18:26:47.551181 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:26:47 crc kubenswrapper[4821]: E0309 18:26:47.551717 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-09 18:27:51.551522232 +0000 UTC m=+208.712898208 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 09 18:26:47 crc kubenswrapper[4821]: E0309 18:26:47.551764 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 09 18:26:47 crc kubenswrapper[4821]: E0309 18:26:47.551862 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 09 18:26:47 crc kubenswrapper[4821]: E0309 18:26:47.551906 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 09 18:26:47 crc kubenswrapper[4821]: E0309 18:26:47.551907 4821 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 09 18:26:47 crc kubenswrapper[4821]: I0309 18:26:47.551923 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:26:47 crc kubenswrapper[4821]: E0309 18:26:47.551930 4821 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 09 18:26:47 crc kubenswrapper[4821]: E0309 18:26:47.551938 4821 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 09 18:26:47 crc kubenswrapper[4821]: E0309 18:26:47.552061 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-09 18:27:51.552035407 +0000 UTC m=+208.713411273 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 09 18:26:47 crc kubenswrapper[4821]: E0309 18:26:47.551939 4821 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 09 18:26:47 crc kubenswrapper[4821]: E0309 18:26:47.552106 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-09 18:27:51.552085778 +0000 UTC m=+208.713461664 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 09 18:26:47 crc kubenswrapper[4821]: E0309 18:26:47.552132 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-09 18:27:51.552119779 +0000 UTC m=+208.713495755 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 09 18:26:47 crc kubenswrapper[4821]: E0309 18:26:47.552467 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 09 18:26:47 crc kubenswrapper[4821]: E0309 18:26:47.552547 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 09 18:26:47 crc kubenswrapper[4821]: E0309 18:26:47.552627 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 09 18:26:48 crc kubenswrapper[4821]: I0309 18:26:48.187352 4821 generic.go:334] "Generic (PLEG): container finished" podID="84199f52-999d-4a44-91c7-a343ba59b10d" containerID="14eea77fc84395826bba07534a8aef9e01864b45bcfebf4475163da97935b7ef" exitCode=0 Mar 09 18:26:48 crc kubenswrapper[4821]: I0309 18:26:48.187400 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" event={"ID":"84199f52-999d-4a44-91c7-a343ba59b10d","Type":"ContainerDied","Data":"14eea77fc84395826bba07534a8aef9e01864b45bcfebf4475163da97935b7ef"} Mar 09 18:26:48 crc kubenswrapper[4821]: I0309 18:26:48.551381 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:26:48 crc kubenswrapper[4821]: E0309 18:26:48.551583 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lf7bd" podUID="9ac2c88b-a0bc-482c-90fa-165d30f045e8" Mar 09 18:26:48 crc kubenswrapper[4821]: E0309 18:26:48.653636 4821 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 09 18:26:49 crc kubenswrapper[4821]: I0309 18:26:49.196603 4821 generic.go:334] "Generic (PLEG): container finished" podID="84199f52-999d-4a44-91c7-a343ba59b10d" containerID="9dee21b801ce9769c8d535e49d1c5175c2aec16b8537b8a27b0378378474c0ec" exitCode=0 Mar 09 18:26:49 crc kubenswrapper[4821]: I0309 18:26:49.196798 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" event={"ID":"84199f52-999d-4a44-91c7-a343ba59b10d","Type":"ContainerDied","Data":"9dee21b801ce9769c8d535e49d1c5175c2aec16b8537b8a27b0378378474c0ec"} Mar 09 18:26:49 crc kubenswrapper[4821]: I0309 18:26:49.550844 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:26:49 crc kubenswrapper[4821]: E0309 18:26:49.551027 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 09 18:26:49 crc kubenswrapper[4821]: I0309 18:26:49.551302 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:26:49 crc kubenswrapper[4821]: E0309 18:26:49.551468 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 09 18:26:49 crc kubenswrapper[4821]: I0309 18:26:49.551539 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:26:49 crc kubenswrapper[4821]: E0309 18:26:49.551725 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 09 18:26:50 crc kubenswrapper[4821]: I0309 18:26:50.207684 4821 generic.go:334] "Generic (PLEG): container finished" podID="84199f52-999d-4a44-91c7-a343ba59b10d" containerID="780ff4742ee1459ef374f59f75f2d41f98ce8767a8ecc8d6ee28ec6bd01faabc" exitCode=0 Mar 09 18:26:50 crc kubenswrapper[4821]: I0309 18:26:50.207778 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" event={"ID":"84199f52-999d-4a44-91c7-a343ba59b10d","Type":"ContainerDied","Data":"780ff4742ee1459ef374f59f75f2d41f98ce8767a8ecc8d6ee28ec6bd01faabc"} Mar 09 18:26:50 crc kubenswrapper[4821]: I0309 18:26:50.551029 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:26:50 crc kubenswrapper[4821]: E0309 18:26:50.551176 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lf7bd" podUID="9ac2c88b-a0bc-482c-90fa-165d30f045e8" Mar 09 18:26:51 crc kubenswrapper[4821]: I0309 18:26:51.216712 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" event={"ID":"84199f52-999d-4a44-91c7-a343ba59b10d","Type":"ContainerStarted","Data":"13ec3e6c3f91342b58eeb431e0fc197e47bfe6befb9a05fb169d78f9848a2837"} Mar 09 18:26:51 crc kubenswrapper[4821]: I0309 18:26:51.552023 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:26:51 crc kubenswrapper[4821]: E0309 18:26:51.552538 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 09 18:26:51 crc kubenswrapper[4821]: I0309 18:26:51.553004 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:26:51 crc kubenswrapper[4821]: E0309 18:26:51.553234 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 09 18:26:51 crc kubenswrapper[4821]: I0309 18:26:51.553652 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:26:51 crc kubenswrapper[4821]: E0309 18:26:51.554027 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 09 18:26:52 crc kubenswrapper[4821]: I0309 18:26:52.551048 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:26:52 crc kubenswrapper[4821]: E0309 18:26:52.551227 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lf7bd" podUID="9ac2c88b-a0bc-482c-90fa-165d30f045e8" Mar 09 18:26:53 crc kubenswrapper[4821]: I0309 18:26:53.550614 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:26:53 crc kubenswrapper[4821]: I0309 18:26:53.550712 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:26:53 crc kubenswrapper[4821]: I0309 18:26:53.550724 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:26:53 crc kubenswrapper[4821]: E0309 18:26:53.551703 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 09 18:26:53 crc kubenswrapper[4821]: E0309 18:26:53.551788 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 09 18:26:53 crc kubenswrapper[4821]: E0309 18:26:53.551863 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 09 18:26:54 crc kubenswrapper[4821]: I0309 18:26:54.551096 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:26:54 crc kubenswrapper[4821]: I0309 18:26:54.554410 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 09 18:26:54 crc kubenswrapper[4821]: I0309 18:26:54.554742 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 09 18:26:55 crc kubenswrapper[4821]: I0309 18:26:55.550665 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:26:55 crc kubenswrapper[4821]: I0309 18:26:55.550735 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:26:55 crc kubenswrapper[4821]: I0309 18:26:55.551792 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:26:55 crc kubenswrapper[4821]: I0309 18:26:55.555190 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 09 18:26:55 crc kubenswrapper[4821]: I0309 18:26:55.555863 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 09 18:26:55 crc kubenswrapper[4821]: I0309 18:26:55.555396 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 09 18:26:55 crc kubenswrapper[4821]: I0309 18:26:55.555410 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 09 18:27:00 crc kubenswrapper[4821]: I0309 18:27:00.305473 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:27:00 crc kubenswrapper[4821]: I0309 18:27:00.353488 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-b9gd4" podStartSLOduration=100.353459554 podStartE2EDuration="1m40.353459554s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:26:51.255669127 +0000 UTC m=+148.417045023" watchObservedRunningTime="2026-03-09 18:27:00.353459554 +0000 UTC m=+157.514835440" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.435815 4821 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.489579 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-m6q6r"] Mar 09 18:27:03 crc kubenswrapper[4821]: 
I0309 18:27:03.490790 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-m6q6r" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.493815 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.493914 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.496014 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.496660 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.496924 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.497069 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.498893 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.499002 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.499176 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.499238 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.507561 4821 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-machine-api/machine-api-operator-5694c8668f-h8j2t"] Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.508900 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-h8j2t" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.510722 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vlwv5"] Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.511514 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vlwv5" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.514943 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jrr9g"] Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.515741 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-jrr9g" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.517006 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.519406 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tnl4x"] Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.520411 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.521105 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.521990 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.522901 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.522971 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.523177 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.526412 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vmzvr"] Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.527036 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vmzvr" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.527872 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7"] Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.528556 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.542260 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gbjt5"] Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.543008 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gbjt5" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.543674 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.543969 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.544256 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.544734 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.544844 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.545252 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.545365 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.545541 4821 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.545607 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.545828 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.554424 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.554709 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.554539 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.555234 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea3fa689-2665-423f-b717-f2e279be3831-trusted-ca-bundle\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.555479 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea3fa689-2665-423f-b717-f2e279be3831-audit-dir\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.555565 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea3fa689-2665-423f-b717-f2e279be3831-serving-cert\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.555691 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd2br\" (UniqueName: \"kubernetes.io/projected/ea3fa689-2665-423f-b717-f2e279be3831-kube-api-access-fd2br\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.555880 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ea3fa689-2665-423f-b717-f2e279be3831-node-pullsecrets\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.555938 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ea3fa689-2665-423f-b717-f2e279be3831-audit\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.556126 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ea3fa689-2665-423f-b717-f2e279be3831-etcd-serving-ca\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r" Mar 09 18:27:03 crc 
kubenswrapper[4821]: I0309 18:27:03.556221 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ea3fa689-2665-423f-b717-f2e279be3831-etcd-client\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.556464 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ea3fa689-2665-423f-b717-f2e279be3831-encryption-config\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.556511 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea3fa689-2665-423f-b717-f2e279be3831-config\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.556738 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ea3fa689-2665-423f-b717-f2e279be3831-image-import-ca\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.557285 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.557414 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 09 
18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.557616 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.557981 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.558609 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.558847 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.559205 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.559226 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.559460 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.559568 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.560415 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.545254 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.566535 4821 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.566599 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.587489 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.587658 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.588264 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.588485 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.588562 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.588689 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.588723 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.589190 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.589284 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" 
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.589338 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.589296 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.589412 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.589443 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.589814 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.590350 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.590459 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.599718 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qqsgs"] Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.600150 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pcrm"] Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.600375 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-znqzp"] Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.600442 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qqsgs" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.600657 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7fdbf"] Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.600689 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.600967 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66c4m"] Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.601162 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-295wb"] Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.601256 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7fdbf" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.600703 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pcrm" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.601353 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-znqzp" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.602080 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66c4m" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.602887 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xbxp5"] Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.603215 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sh5rp"] Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.603487 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-295wb" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.604526 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.609406 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.611565 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.611895 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.615905 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-4ntmx"] Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.616511 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gk67k"] Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.620391 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-x9nnw"] 
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.620987 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jrc42"] Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.621572 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jrc42" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.622094 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-sh5rp" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.622413 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-4ntmx" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.622852 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gk67k" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.622866 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.623107 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-x9nnw" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.626122 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.627260 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-n5glb"] Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.627622 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.631922 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.632119 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.632841 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.633481 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n5glb" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.633678 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.634289 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.634486 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.635026 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.635181 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.635294 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.635464 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.635644 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.635788 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.635904 4821 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.636088 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.636462 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.636621 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.636642 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.636810 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.636836 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.636980 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.637140 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.637296 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.637678 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 09 18:27:03 crc 
kubenswrapper[4821]: I0309 18:27:03.637832 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.637971 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.638286 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.638491 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.638496 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.639405 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7nw2x"] Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.644434 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.645049 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.645283 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.645524 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.645614 4821 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.645785 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.644585 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.646235 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.647823 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.653430 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658239 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ea3fa689-2665-423f-b717-f2e279be3831-node-pullsecrets\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658286 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ea3fa689-2665-423f-b717-f2e279be3831-audit\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658312 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/87c7fa5b-e1e9-43c4-9942-409c34ea5660-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vmzvr\" (UID: \"87c7fa5b-e1e9-43c4-9942-409c34ea5660\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vmzvr" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658365 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5g25\" (UniqueName: \"kubernetes.io/projected/a3e149e2-c719-4025-888c-3134dd07b7c4-kube-api-access-s5g25\") pod \"cluster-image-registry-operator-dc59b4c8b-66c4m\" (UID: \"a3e149e2-c719-4025-888c-3134dd07b7c4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66c4m" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658395 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cebe05d8-86f1-4280-9ae0-8065f9c38759-serving-cert\") pod \"authentication-operator-69f744f599-jrr9g\" (UID: \"cebe05d8-86f1-4280-9ae0-8065f9c38759\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jrr9g" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658419 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds2hj\" (UniqueName: \"kubernetes.io/projected/24aa1fc6-da2a-400c-8bfe-022af0ee3707-kube-api-access-ds2hj\") pod \"openshift-apiserver-operator-796bbdcf4f-vlwv5\" (UID: \"24aa1fc6-da2a-400c-8bfe-022af0ee3707\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vlwv5" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658439 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b6b5dbe9-77c4-4cd4-b639-ade5dff8134c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7fdbf\" (UID: 
\"b6b5dbe9-77c4-4cd4-b639-ade5dff8134c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7fdbf" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658469 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87c7fa5b-e1e9-43c4-9942-409c34ea5660-serving-cert\") pod \"controller-manager-879f6c89f-vmzvr\" (UID: \"87c7fa5b-e1e9-43c4-9942-409c34ea5660\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vmzvr" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658493 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a6f5ce9b-e0f2-4bbb-9a18-7c62bdb830e4-images\") pod \"machine-api-operator-5694c8668f-h8j2t\" (UID: \"a6f5ce9b-e0f2-4bbb-9a18-7c62bdb830e4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h8j2t" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658511 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a423d95a-7bd6-483e-ba23-28e8f1a3ec92-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9pcrm\" (UID: \"a423d95a-7bd6-483e-ba23-28e8f1a3ec92\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pcrm" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658527 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658543 4821 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87c7fa5b-e1e9-43c4-9942-409c34ea5660-config\") pod \"controller-manager-879f6c89f-vmzvr\" (UID: \"87c7fa5b-e1e9-43c4-9942-409c34ea5660\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vmzvr" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658560 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea3fa689-2665-423f-b717-f2e279be3831-config\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658573 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ea3fa689-2665-423f-b717-f2e279be3831-image-import-ca\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658587 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24aa1fc6-da2a-400c-8bfe-022af0ee3707-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vlwv5\" (UID: \"24aa1fc6-da2a-400c-8bfe-022af0ee3707\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vlwv5" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658606 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3e149e2-c719-4025-888c-3134dd07b7c4-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-66c4m\" (UID: \"a3e149e2-c719-4025-888c-3134dd07b7c4\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66c4m" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658620 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6b5dbe9-77c4-4cd4-b639-ade5dff8134c-trusted-ca\") pod \"ingress-operator-5b745b69d9-7fdbf\" (UID: \"b6b5dbe9-77c4-4cd4-b639-ade5dff8134c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7fdbf" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658633 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n78fc\" (UniqueName: \"kubernetes.io/projected/b6b5dbe9-77c4-4cd4-b639-ade5dff8134c-kube-api-access-n78fc\") pod \"ingress-operator-5b745b69d9-7fdbf\" (UID: \"b6b5dbe9-77c4-4cd4-b639-ade5dff8134c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7fdbf" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658667 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cebe05d8-86f1-4280-9ae0-8065f9c38759-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jrr9g\" (UID: \"cebe05d8-86f1-4280-9ae0-8065f9c38759\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jrr9g" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658683 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658698 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658713 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea3fa689-2665-423f-b717-f2e279be3831-trusted-ca-bundle\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658729 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658746 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea3fa689-2665-423f-b717-f2e279be3831-audit-dir\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658762 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2m2p\" (UniqueName: \"kubernetes.io/projected/87962440-47ce-4659-a2a7-f00110cc3bd5-kube-api-access-p2m2p\") pod \"dns-operator-744455d44c-znqzp\" (UID: 
\"87962440-47ce-4659-a2a7-f00110cc3bd5\") " pod="openshift-dns-operator/dns-operator-744455d44c-znqzp" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658779 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45565d5-bd55-4e94-8cac-0155e00f1368-serving-cert\") pod \"route-controller-manager-6576b87f9c-qqsgs\" (UID: \"d45565d5-bd55-4e94-8cac-0155e00f1368\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qqsgs" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658795 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xrr5\" (UniqueName: \"kubernetes.io/projected/cebe05d8-86f1-4280-9ae0-8065f9c38759-kube-api-access-4xrr5\") pod \"authentication-operator-69f744f599-jrr9g\" (UID: \"cebe05d8-86f1-4280-9ae0-8065f9c38759\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jrr9g" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658809 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwzbc\" (UniqueName: \"kubernetes.io/projected/87c7fa5b-e1e9-43c4-9942-409c34ea5660-kube-api-access-dwzbc\") pod \"controller-manager-879f6c89f-vmzvr\" (UID: \"87c7fa5b-e1e9-43c4-9942-409c34ea5660\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vmzvr" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658824 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd2br\" (UniqueName: \"kubernetes.io/projected/ea3fa689-2665-423f-b717-f2e279be3831-kube-api-access-fd2br\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658851 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d45565d5-bd55-4e94-8cac-0155e00f1368-client-ca\") pod \"route-controller-manager-6576b87f9c-qqsgs\" (UID: \"d45565d5-bd55-4e94-8cac-0155e00f1368\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qqsgs"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658865    4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d35d28f-2377-46c5-95aa-ea3bf280a60e-audit-dir\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658882    4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658902    4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a3e149e2-c719-4025-888c-3134dd07b7c4-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-66c4m\" (UID: \"a3e149e2-c719-4025-888c-3134dd07b7c4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66c4m"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658932    4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6f5ce9b-e0f2-4bbb-9a18-7c62bdb830e4-config\") pod \"machine-api-operator-5694c8668f-h8j2t\" (UID: \"a6f5ce9b-e0f2-4bbb-9a18-7c62bdb830e4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h8j2t"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658974    4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.658999    4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ea3fa689-2665-423f-b717-f2e279be3831-etcd-serving-ca\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.659019    4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a6f5ce9b-e0f2-4bbb-9a18-7c62bdb830e4-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-h8j2t\" (UID: \"a6f5ce9b-e0f2-4bbb-9a18-7c62bdb830e4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h8j2t"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.659036    4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87c7fa5b-e1e9-43c4-9942-409c34ea5660-client-ca\") pod \"controller-manager-879f6c89f-vmzvr\" (UID: \"87c7fa5b-e1e9-43c4-9942-409c34ea5660\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vmzvr"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.659059    4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.659081    4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.659101    4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ea3fa689-2665-423f-b717-f2e279be3831-etcd-client\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.659116    4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ea3fa689-2665-423f-b717-f2e279be3831-encryption-config\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.659130    4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a423d95a-7bd6-483e-ba23-28e8f1a3ec92-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9pcrm\" (UID: \"a423d95a-7bd6-483e-ba23-28e8f1a3ec92\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pcrm"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.659146    4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87962440-47ce-4659-a2a7-f00110cc3bd5-metrics-tls\") pod \"dns-operator-744455d44c-znqzp\" (UID: \"87962440-47ce-4659-a2a7-f00110cc3bd5\") " pod="openshift-dns-operator/dns-operator-744455d44c-znqzp"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.659162    4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5msr4\" (UniqueName: \"kubernetes.io/projected/d45565d5-bd55-4e94-8cac-0155e00f1368-kube-api-access-5msr4\") pod \"route-controller-manager-6576b87f9c-qqsgs\" (UID: \"d45565d5-bd55-4e94-8cac-0155e00f1368\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qqsgs"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.659179    4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn2js\" (UniqueName: \"kubernetes.io/projected/f078c2bb-b4ba-42a0-a66c-705c19866fec-kube-api-access-gn2js\") pod \"downloads-7954f5f757-295wb\" (UID: \"f078c2bb-b4ba-42a0-a66c-705c19866fec\") " pod="openshift-console/downloads-7954f5f757-295wb"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.659194    4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7wwp\" (UniqueName: \"kubernetes.io/projected/a6f5ce9b-e0f2-4bbb-9a18-7c62bdb830e4-kube-api-access-f7wwp\") pod \"machine-api-operator-5694c8668f-h8j2t\" (UID: \"a6f5ce9b-e0f2-4bbb-9a18-7c62bdb830e4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h8j2t"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.659217    4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56mgj\" (UniqueName: \"kubernetes.io/projected/a423d95a-7bd6-483e-ba23-28e8f1a3ec92-kube-api-access-56mgj\") pod \"openshift-controller-manager-operator-756b6f6bc6-9pcrm\" (UID: \"a423d95a-7bd6-483e-ba23-28e8f1a3ec92\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pcrm"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.659241    4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf9bk\" (UniqueName: \"kubernetes.io/projected/6d35d28f-2377-46c5-95aa-ea3bf280a60e-kube-api-access-bf9bk\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.659262    4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45565d5-bd55-4e94-8cac-0155e00f1368-config\") pod \"route-controller-manager-6576b87f9c-qqsgs\" (UID: \"d45565d5-bd55-4e94-8cac-0155e00f1368\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qqsgs"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.659295    4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea3fa689-2665-423f-b717-f2e279be3831-serving-cert\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.659336    4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a663703c-95db-4871-b31c-00951488935d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gbjt5\" (UID: \"a663703c-95db-4871-b31c-00951488935d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gbjt5"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.659352    4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cebe05d8-86f1-4280-9ae0-8065f9c38759-service-ca-bundle\") pod \"authentication-operator-69f744f599-jrr9g\" (UID: \"cebe05d8-86f1-4280-9ae0-8065f9c38759\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jrr9g"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.659368    4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.659384    4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b6b5dbe9-77c4-4cd4-b639-ade5dff8134c-metrics-tls\") pod \"ingress-operator-5b745b69d9-7fdbf\" (UID: \"b6b5dbe9-77c4-4cd4-b639-ade5dff8134c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7fdbf"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.659398    4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6fks\" (UniqueName: \"kubernetes.io/projected/a663703c-95db-4871-b31c-00951488935d-kube-api-access-h6fks\") pod \"cluster-samples-operator-665b6dd947-gbjt5\" (UID: \"a663703c-95db-4871-b31c-00951488935d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gbjt5"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.659413    4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6d35d28f-2377-46c5-95aa-ea3bf280a60e-audit-policies\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.659434    4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.659466    4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24aa1fc6-da2a-400c-8bfe-022af0ee3707-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vlwv5\" (UID: \"24aa1fc6-da2a-400c-8bfe-022af0ee3707\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vlwv5"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.660044    4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ea3fa689-2665-423f-b717-f2e279be3831-node-pullsecrets\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.660533    4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3e149e2-c719-4025-888c-3134dd07b7c4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-66c4m\" (UID: \"a3e149e2-c719-4025-888c-3134dd07b7c4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66c4m"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.660572    4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cebe05d8-86f1-4280-9ae0-8065f9c38759-config\") pod \"authentication-operator-69f744f599-jrr9g\" (UID: \"cebe05d8-86f1-4280-9ae0-8065f9c38759\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jrr9g"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.660589    4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.660757    4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ea3fa689-2665-423f-b717-f2e279be3831-audit\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.661049    4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ea3fa689-2665-423f-b717-f2e279be3831-etcd-serving-ca\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.661223    4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea3fa689-2665-423f-b717-f2e279be3831-config\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.661857    4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea3fa689-2665-423f-b717-f2e279be3831-audit-dir\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.662059    4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea3fa689-2665-423f-b717-f2e279be3831-trusted-ca-bundle\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.663215    4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ea3fa689-2665-423f-b717-f2e279be3831-image-import-ca\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.664728    4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxsvf"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.665087    4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7nw2x"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.665743    4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctxcr"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.665997    4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ppt7p"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.666366    4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-nprxk"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.666730    4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-d9kvs"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.667059    4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctxcr"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.667205    4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxsvf"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.667295    4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-l97hl"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.667541    4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ppt7p"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.667706    4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nprxk"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.668183    4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-d9kvs"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.668774    4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-46tk5"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.668830    4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l97hl"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.669399    4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ft9v6"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.669540    4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-46tk5"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.669917    4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gg4ds"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.669954    4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ft9v6"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.670181    4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-5ncxl"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.670257    4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gg4ds"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.670701    4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dbk5m"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.670836    4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ncxl"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.670999    4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lkq2"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.671271    4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hh9sf"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.671468    4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dbk5m"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.671545    4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4pwqq"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.671836    4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29551335-b9jvf"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.671924    4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lkq2"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.671993    4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hh9sf"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.672255    4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4pwqq"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.672404    4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29551335-b9jvf"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.673903    4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.674125    4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x76nr"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.674690    4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x76nr"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.676602    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-h8j2t"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.678975    4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ea3fa689-2665-423f-b717-f2e279be3831-etcd-client\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.680786    4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea3fa689-2665-423f-b717-f2e279be3831-serving-cert\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.680869    4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-sst54"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.681970    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-m6q6r"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.682074    4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-sst54"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.683590    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vmzvr"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.688130    4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29551346-phdwt"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.689073    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7fdbf"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.689256    4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551346-phdwt"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.690845    4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ea3fa689-2665-423f-b717-f2e279be3831-encryption-config\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.692533    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-295wb"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.695033    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jrr9g"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.697452    4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.699680    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qqsgs"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.699985    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tnl4x"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.701167    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sh5rp"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.704451    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7nw2x"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.704533    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ppt7p"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.706912    4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-djphk"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.707667    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-d9kvs"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.707782    4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-djphk"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.708140    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jrc42"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.710919    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xbxp5"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.714137    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctxcr"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.716522    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gbjt5"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.717366    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pcrm"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.718657    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-znqzp"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.721002    4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.721271    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hh9sf"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.721902    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-46tk5"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.723335    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxsvf"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.724509    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gk67k"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.726511    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vlwv5"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.727537    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-n5glb"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.728533    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.737871    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-x9nnw"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.738483    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66c4m"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.738577    4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.739532    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-l97hl"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.743368    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gg4ds"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.744467    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ft9v6"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.747466    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551346-phdwt"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.750284    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x76nr"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.750347    4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-vdrsg"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.751417    4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-c257s"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.751719    4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-vdrsg"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.751988    4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-c257s"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.752301    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4pwqq"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.755364    4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-djphk"]
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.755551    4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761279    4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf9bk\" (UniqueName: \"kubernetes.io/projected/6d35d28f-2377-46c5-95aa-ea3bf280a60e-kube-api-access-bf9bk\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761328    4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45565d5-bd55-4e94-8cac-0155e00f1368-config\") pod \"route-controller-manager-6576b87f9c-qqsgs\" (UID: \"d45565d5-bd55-4e94-8cac-0155e00f1368\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qqsgs"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761348    4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a663703c-95db-4871-b31c-00951488935d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gbjt5\" (UID: \"a663703c-95db-4871-b31c-00951488935d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gbjt5"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761373    4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b6b5dbe9-77c4-4cd4-b639-ade5dff8134c-metrics-tls\") pod \"ingress-operator-5b745b69d9-7fdbf\" (UID: \"b6b5dbe9-77c4-4cd4-b639-ade5dff8134c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7fdbf"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761389    4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cebe05d8-86f1-4280-9ae0-8065f9c38759-service-ca-bundle\") pod \"authentication-operator-69f744f599-jrr9g\" (UID: \"cebe05d8-86f1-4280-9ae0-8065f9c38759\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jrr9g"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761404    4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761419    4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6fks\" (UniqueName: \"kubernetes.io/projected/a663703c-95db-4871-b31c-00951488935d-kube-api-access-h6fks\") pod \"cluster-samples-operator-665b6dd947-gbjt5\" (UID: \"a663703c-95db-4871-b31c-00951488935d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gbjt5"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761435    4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6d35d28f-2377-46c5-95aa-ea3bf280a60e-audit-policies\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761449    4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761465    4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24aa1fc6-da2a-400c-8bfe-022af0ee3707-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vlwv5\" (UID: \"24aa1fc6-da2a-400c-8bfe-022af0ee3707\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vlwv5"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761480    4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3e149e2-c719-4025-888c-3134dd07b7c4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-66c4m\" (UID: \"a3e149e2-c719-4025-888c-3134dd07b7c4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66c4m"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761496    4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cebe05d8-86f1-4280-9ae0-8065f9c38759-config\") pod \"authentication-operator-69f744f599-jrr9g\" (UID: \"cebe05d8-86f1-4280-9ae0-8065f9c38759\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jrr9g"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761512    4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761539    4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87c7fa5b-e1e9-43c4-9942-409c34ea5660-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vmzvr\" (UID: \"87c7fa5b-e1e9-43c4-9942-409c34ea5660\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vmzvr"
Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761555    4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5g25\" (UniqueName:
\"kubernetes.io/projected/a3e149e2-c719-4025-888c-3134dd07b7c4-kube-api-access-s5g25\") pod \"cluster-image-registry-operator-dc59b4c8b-66c4m\" (UID: \"a3e149e2-c719-4025-888c-3134dd07b7c4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66c4m" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761572 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds2hj\" (UniqueName: \"kubernetes.io/projected/24aa1fc6-da2a-400c-8bfe-022af0ee3707-kube-api-access-ds2hj\") pod \"openshift-apiserver-operator-796bbdcf4f-vlwv5\" (UID: \"24aa1fc6-da2a-400c-8bfe-022af0ee3707\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vlwv5" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761586 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b6b5dbe9-77c4-4cd4-b639-ade5dff8134c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7fdbf\" (UID: \"b6b5dbe9-77c4-4cd4-b639-ade5dff8134c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7fdbf" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761604 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cebe05d8-86f1-4280-9ae0-8065f9c38759-serving-cert\") pod \"authentication-operator-69f744f599-jrr9g\" (UID: \"cebe05d8-86f1-4280-9ae0-8065f9c38759\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jrr9g" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761619 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87c7fa5b-e1e9-43c4-9942-409c34ea5660-serving-cert\") pod \"controller-manager-879f6c89f-vmzvr\" (UID: \"87c7fa5b-e1e9-43c4-9942-409c34ea5660\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vmzvr" Mar 09 
18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761643 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a6f5ce9b-e0f2-4bbb-9a18-7c62bdb830e4-images\") pod \"machine-api-operator-5694c8668f-h8j2t\" (UID: \"a6f5ce9b-e0f2-4bbb-9a18-7c62bdb830e4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h8j2t" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761660 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a423d95a-7bd6-483e-ba23-28e8f1a3ec92-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9pcrm\" (UID: \"a423d95a-7bd6-483e-ba23-28e8f1a3ec92\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pcrm" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761679 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761696 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87c7fa5b-e1e9-43c4-9942-409c34ea5660-config\") pod \"controller-manager-879f6c89f-vmzvr\" (UID: \"87c7fa5b-e1e9-43c4-9942-409c34ea5660\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vmzvr" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761715 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24aa1fc6-da2a-400c-8bfe-022af0ee3707-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-vlwv5\" (UID: \"24aa1fc6-da2a-400c-8bfe-022af0ee3707\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vlwv5" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761739 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3e149e2-c719-4025-888c-3134dd07b7c4-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-66c4m\" (UID: \"a3e149e2-c719-4025-888c-3134dd07b7c4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66c4m" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761762 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n78fc\" (UniqueName: \"kubernetes.io/projected/b6b5dbe9-77c4-4cd4-b639-ade5dff8134c-kube-api-access-n78fc\") pod \"ingress-operator-5b745b69d9-7fdbf\" (UID: \"b6b5dbe9-77c4-4cd4-b639-ade5dff8134c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7fdbf" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761786 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6b5dbe9-77c4-4cd4-b639-ade5dff8134c-trusted-ca\") pod \"ingress-operator-5b745b69d9-7fdbf\" (UID: \"b6b5dbe9-77c4-4cd4-b639-ade5dff8134c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7fdbf" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761809 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cebe05d8-86f1-4280-9ae0-8065f9c38759-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jrr9g\" (UID: \"cebe05d8-86f1-4280-9ae0-8065f9c38759\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jrr9g" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761833 4821 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761851 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761902 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761924 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2m2p\" (UniqueName: \"kubernetes.io/projected/87962440-47ce-4659-a2a7-f00110cc3bd5-kube-api-access-p2m2p\") pod \"dns-operator-744455d44c-znqzp\" (UID: \"87962440-47ce-4659-a2a7-f00110cc3bd5\") " pod="openshift-dns-operator/dns-operator-744455d44c-znqzp" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761944 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45565d5-bd55-4e94-8cac-0155e00f1368-serving-cert\") pod \"route-controller-manager-6576b87f9c-qqsgs\" (UID: 
\"d45565d5-bd55-4e94-8cac-0155e00f1368\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qqsgs" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761982 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xrr5\" (UniqueName: \"kubernetes.io/projected/cebe05d8-86f1-4280-9ae0-8065f9c38759-kube-api-access-4xrr5\") pod \"authentication-operator-69f744f599-jrr9g\" (UID: \"cebe05d8-86f1-4280-9ae0-8065f9c38759\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jrr9g" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.761999 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwzbc\" (UniqueName: \"kubernetes.io/projected/87c7fa5b-e1e9-43c4-9942-409c34ea5660-kube-api-access-dwzbc\") pod \"controller-manager-879f6c89f-vmzvr\" (UID: \"87c7fa5b-e1e9-43c4-9942-409c34ea5660\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vmzvr" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.762050 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cebe05d8-86f1-4280-9ae0-8065f9c38759-service-ca-bundle\") pod \"authentication-operator-69f744f599-jrr9g\" (UID: \"cebe05d8-86f1-4280-9ae0-8065f9c38759\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jrr9g" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.762063 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d45565d5-bd55-4e94-8cac-0155e00f1368-client-ca\") pod \"route-controller-manager-6576b87f9c-qqsgs\" (UID: \"d45565d5-bd55-4e94-8cac-0155e00f1368\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qqsgs" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.762128 4821 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d35d28f-2377-46c5-95aa-ea3bf280a60e-audit-dir\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.762163 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.762184 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a3e149e2-c719-4025-888c-3134dd07b7c4-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-66c4m\" (UID: \"a3e149e2-c719-4025-888c-3134dd07b7c4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66c4m" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.762206 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6f5ce9b-e0f2-4bbb-9a18-7c62bdb830e4-config\") pod \"machine-api-operator-5694c8668f-h8j2t\" (UID: \"a6f5ce9b-e0f2-4bbb-9a18-7c62bdb830e4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h8j2t" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.762223 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.762245 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a6f5ce9b-e0f2-4bbb-9a18-7c62bdb830e4-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-h8j2t\" (UID: \"a6f5ce9b-e0f2-4bbb-9a18-7c62bdb830e4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h8j2t" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.762262 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87c7fa5b-e1e9-43c4-9942-409c34ea5660-client-ca\") pod \"controller-manager-879f6c89f-vmzvr\" (UID: \"87c7fa5b-e1e9-43c4-9942-409c34ea5660\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vmzvr" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.762290 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.762293 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d35d28f-2377-46c5-95aa-ea3bf280a60e-audit-dir\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.762332 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.762372 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a423d95a-7bd6-483e-ba23-28e8f1a3ec92-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9pcrm\" (UID: \"a423d95a-7bd6-483e-ba23-28e8f1a3ec92\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pcrm" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.762402 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87962440-47ce-4659-a2a7-f00110cc3bd5-metrics-tls\") pod \"dns-operator-744455d44c-znqzp\" (UID: \"87962440-47ce-4659-a2a7-f00110cc3bd5\") " pod="openshift-dns-operator/dns-operator-744455d44c-znqzp" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.762427 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5msr4\" (UniqueName: \"kubernetes.io/projected/d45565d5-bd55-4e94-8cac-0155e00f1368-kube-api-access-5msr4\") pod \"route-controller-manager-6576b87f9c-qqsgs\" (UID: \"d45565d5-bd55-4e94-8cac-0155e00f1368\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qqsgs" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.762451 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7wwp\" (UniqueName: \"kubernetes.io/projected/a6f5ce9b-e0f2-4bbb-9a18-7c62bdb830e4-kube-api-access-f7wwp\") pod \"machine-api-operator-5694c8668f-h8j2t\" (UID: \"a6f5ce9b-e0f2-4bbb-9a18-7c62bdb830e4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h8j2t" Mar 09 18:27:03 
crc kubenswrapper[4821]: I0309 18:27:03.762477 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56mgj\" (UniqueName: \"kubernetes.io/projected/a423d95a-7bd6-483e-ba23-28e8f1a3ec92-kube-api-access-56mgj\") pod \"openshift-controller-manager-operator-756b6f6bc6-9pcrm\" (UID: \"a423d95a-7bd6-483e-ba23-28e8f1a3ec92\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pcrm" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.762502 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn2js\" (UniqueName: \"kubernetes.io/projected/f078c2bb-b4ba-42a0-a66c-705c19866fec-kube-api-access-gn2js\") pod \"downloads-7954f5f757-295wb\" (UID: \"f078c2bb-b4ba-42a0-a66c-705c19866fec\") " pod="openshift-console/downloads-7954f5f757-295wb" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.762544 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lkq2"] Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.763304 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d45565d5-bd55-4e94-8cac-0155e00f1368-client-ca\") pod \"route-controller-manager-6576b87f9c-qqsgs\" (UID: \"d45565d5-bd55-4e94-8cac-0155e00f1368\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qqsgs" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.763596 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6d35d28f-2377-46c5-95aa-ea3bf280a60e-audit-policies\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.763851 4821 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87c7fa5b-e1e9-43c4-9942-409c34ea5660-client-ca\") pod \"controller-manager-879f6c89f-vmzvr\" (UID: \"87c7fa5b-e1e9-43c4-9942-409c34ea5660\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vmzvr" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.764555 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45565d5-bd55-4e94-8cac-0155e00f1368-config\") pod \"route-controller-manager-6576b87f9c-qqsgs\" (UID: \"d45565d5-bd55-4e94-8cac-0155e00f1368\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qqsgs" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.765288 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87c7fa5b-e1e9-43c4-9942-409c34ea5660-config\") pod \"controller-manager-879f6c89f-vmzvr\" (UID: \"87c7fa5b-e1e9-43c4-9942-409c34ea5660\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vmzvr" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.765364 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6b5dbe9-77c4-4cd4-b639-ade5dff8134c-trusted-ca\") pod \"ingress-operator-5b745b69d9-7fdbf\" (UID: \"b6b5dbe9-77c4-4cd4-b639-ade5dff8134c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7fdbf" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.765373 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 
18:27:03.765395 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-c257s"] Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.765801 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.766700 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cebe05d8-86f1-4280-9ae0-8065f9c38759-serving-cert\") pod \"authentication-operator-69f744f599-jrr9g\" (UID: \"cebe05d8-86f1-4280-9ae0-8065f9c38759\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jrr9g" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.766801 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6f5ce9b-e0f2-4bbb-9a18-7c62bdb830e4-config\") pod \"machine-api-operator-5694c8668f-h8j2t\" (UID: \"a6f5ce9b-e0f2-4bbb-9a18-7c62bdb830e4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h8j2t" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.767616 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.767662 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.768315 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cebe05d8-86f1-4280-9ae0-8065f9c38759-config\") pod \"authentication-operator-69f744f599-jrr9g\" (UID: \"cebe05d8-86f1-4280-9ae0-8065f9c38759\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jrr9g" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.768705 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.768943 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45565d5-bd55-4e94-8cac-0155e00f1368-serving-cert\") pod \"route-controller-manager-6576b87f9c-qqsgs\" (UID: \"d45565d5-bd55-4e94-8cac-0155e00f1368\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qqsgs" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.769086 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87c7fa5b-e1e9-43c4-9942-409c34ea5660-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vmzvr\" (UID: \"87c7fa5b-e1e9-43c4-9942-409c34ea5660\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vmzvr" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.769241 
4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3e149e2-c719-4025-888c-3134dd07b7c4-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-66c4m\" (UID: \"a3e149e2-c719-4025-888c-3134dd07b7c4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66c4m" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.769379 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cebe05d8-86f1-4280-9ae0-8065f9c38759-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jrr9g\" (UID: \"cebe05d8-86f1-4280-9ae0-8065f9c38759\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jrr9g" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.769566 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.769578 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24aa1fc6-da2a-400c-8bfe-022af0ee3707-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vlwv5\" (UID: \"24aa1fc6-da2a-400c-8bfe-022af0ee3707\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vlwv5" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.769613 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29551335-b9jvf"] Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.769686 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3e149e2-c719-4025-888c-3134dd07b7c4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-66c4m\" (UID: \"a3e149e2-c719-4025-888c-3134dd07b7c4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66c4m" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.769703 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a6f5ce9b-e0f2-4bbb-9a18-7c62bdb830e4-images\") pod \"machine-api-operator-5694c8668f-h8j2t\" (UID: \"a6f5ce9b-e0f2-4bbb-9a18-7c62bdb830e4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h8j2t" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.770493 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a423d95a-7bd6-483e-ba23-28e8f1a3ec92-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9pcrm\" (UID: \"a423d95a-7bd6-483e-ba23-28e8f1a3ec92\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pcrm" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.770644 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a6f5ce9b-e0f2-4bbb-9a18-7c62bdb830e4-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-h8j2t\" (UID: \"a6f5ce9b-e0f2-4bbb-9a18-7c62bdb830e4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h8j2t" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.770813 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-5ncxl"] Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.774095 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/24aa1fc6-da2a-400c-8bfe-022af0ee3707-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vlwv5\" (UID: \"24aa1fc6-da2a-400c-8bfe-022af0ee3707\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vlwv5" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.774389 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87962440-47ce-4659-a2a7-f00110cc3bd5-metrics-tls\") pod \"dns-operator-744455d44c-znqzp\" (UID: \"87962440-47ce-4659-a2a7-f00110cc3bd5\") " pod="openshift-dns-operator/dns-operator-744455d44c-znqzp" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.774397 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.774457 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.774669 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 
18:27:03.774975 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a423d95a-7bd6-483e-ba23-28e8f1a3ec92-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9pcrm\" (UID: \"a423d95a-7bd6-483e-ba23-28e8f1a3ec92\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pcrm" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.775151 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b6b5dbe9-77c4-4cd4-b639-ade5dff8134c-metrics-tls\") pod \"ingress-operator-5b745b69d9-7fdbf\" (UID: \"b6b5dbe9-77c4-4cd4-b639-ade5dff8134c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7fdbf" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.775423 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87c7fa5b-e1e9-43c4-9942-409c34ea5660-serving-cert\") pod \"controller-manager-879f6c89f-vmzvr\" (UID: \"87c7fa5b-e1e9-43c4-9942-409c34ea5660\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vmzvr" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.775575 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a663703c-95db-4871-b31c-00951488935d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gbjt5\" (UID: \"a663703c-95db-4871-b31c-00951488935d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gbjt5" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.776829 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: 
\"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.781702 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.783728 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.784382 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dbk5m"] Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.786392 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-vdrsg"] Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.787606 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-kzlwq"] Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.788565 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-kzlwq" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.792424 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.812602 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.833259 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.853160 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.873101 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.893336 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.912852 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.932961 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.953063 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.973520 4821 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 09 18:27:03 crc kubenswrapper[4821]: I0309 18:27:03.993626 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.013760 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.034584 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.066042 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.066095 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd5ls\" (UniqueName: \"kubernetes.io/projected/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-kube-api-access-gd5ls\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.066134 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-registry-tls\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.066432 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6-audit-policies\") pod \"apiserver-7bbb656c7d-bsmz7\" (UID: \"aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.066595 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-bound-sa-token\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.066649 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-trusted-ca\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.066687 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bsmz7\" (UID: \"aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.066709 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-registry-certificates\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.066828 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6-encryption-config\") pod \"apiserver-7bbb656c7d-bsmz7\" (UID: \"aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.066932 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6-serving-cert\") pod \"apiserver-7bbb656c7d-bsmz7\" (UID: \"aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.067073 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.067121 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6-audit-dir\") pod \"apiserver-7bbb656c7d-bsmz7\" (UID: \"aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.067229 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkjzc\" (UniqueName: 
\"kubernetes.io/projected/aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6-kube-api-access-tkjzc\") pod \"apiserver-7bbb656c7d-bsmz7\" (UID: \"aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.067291 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6-etcd-client\") pod \"apiserver-7bbb656c7d-bsmz7\" (UID: \"aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.067365 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bsmz7\" (UID: \"aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.067423 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:04 crc kubenswrapper[4821]: E0309 18:27:04.067865 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:04.567849838 +0000 UTC m=+161.729225694 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.073938 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.093271 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.113029 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.132791 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.154245 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.168507 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:04 crc kubenswrapper[4821]: E0309 18:27:04.168670 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:04.668639262 +0000 UTC m=+161.830015128 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.168738 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/243465ec-ca31-4ec1-b5ca-1e1318f37c16-config\") pod \"service-ca-operator-777779d784-hh9sf\" (UID: \"243465ec-ca31-4ec1-b5ca-1e1318f37c16\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hh9sf" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.168776 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f4a1f9-c144-4ccb-ba8a-1a7861a5b665-config\") pod \"kube-apiserver-operator-766d6c64bb-xxsvf\" (UID: \"e3f4a1f9-c144-4ccb-ba8a-1a7861a5b665\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxsvf" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.168814 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ttwm\" (UniqueName: \"kubernetes.io/projected/d3699d56-8c7d-4ccf-9ebf-469d84dc6a56-kube-api-access-4ttwm\") pod \"marketplace-operator-79b997595-7nw2x\" (UID: \"d3699d56-8c7d-4ccf-9ebf-469d84dc6a56\") " pod="openshift-marketplace/marketplace-operator-79b997595-7nw2x" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 
18:27:04.168847 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1b7ef6fd-c836-460d-bac0-ac2135ad77a2-machine-approver-tls\") pod \"machine-approver-56656f9798-nprxk\" (UID: \"1b7ef6fd-c836-460d-bac0-ac2135ad77a2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nprxk" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.168877 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7227208b-b4f1-473c-9149-2a1c4d1cab32-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-46tk5\" (UID: \"7227208b-b4f1-473c-9149-2a1c4d1cab32\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-46tk5" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.168955 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdttn\" (UniqueName: \"kubernetes.io/projected/8886c330-bce2-4801-be16-59eeddddaf6f-kube-api-access-kdttn\") pod \"openshift-config-operator-7777fb866f-l97hl\" (UID: \"8886c330-bce2-4801-be16-59eeddddaf6f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l97hl" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.169188 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a28f17d7-69dc-4014-a347-a26f55d55ace-service-ca-bundle\") pod \"router-default-5444994796-4ntmx\" (UID: \"a28f17d7-69dc-4014-a347-a26f55d55ace\") " pod="openshift-ingress/router-default-5444994796-4ntmx" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.169249 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/75d58d1e-e673-4305-9d09-2cfd323769fd-csi-data-dir\") pod \"csi-hostpathplugin-vdrsg\" (UID: \"75d58d1e-e673-4305-9d09-2cfd323769fd\") " pod="hostpath-provisioner/csi-hostpathplugin-vdrsg" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.169381 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d87105c8-2398-44ec-b127-a2e30e767c1d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gk67k\" (UID: \"d87105c8-2398-44ec-b127-a2e30e767c1d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gk67k" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.169438 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5275d8b9-8874-4c24-96b9-fdef4ef32d9b-images\") pod \"machine-config-operator-74547568cd-5ncxl\" (UID: \"5275d8b9-8874-4c24-96b9-fdef4ef32d9b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ncxl" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.169465 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn25z\" (UniqueName: \"kubernetes.io/projected/243465ec-ca31-4ec1-b5ca-1e1318f37c16-kube-api-access-pn25z\") pod \"service-ca-operator-777779d784-hh9sf\" (UID: \"243465ec-ca31-4ec1-b5ca-1e1318f37c16\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hh9sf" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.169488 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5hc9\" (UniqueName: \"kubernetes.io/projected/1ec71021-0474-49ce-b545-4a973703b42b-kube-api-access-q5hc9\") pod \"machine-config-controller-84d6567774-ppt7p\" (UID: \"1ec71021-0474-49ce-b545-4a973703b42b\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ppt7p" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.169511 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcdtr\" (UniqueName: \"kubernetes.io/projected/1b7ef6fd-c836-460d-bac0-ac2135ad77a2-kube-api-access-jcdtr\") pod \"machine-approver-56656f9798-nprxk\" (UID: \"1b7ef6fd-c836-460d-bac0-ac2135ad77a2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nprxk" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.169537 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6-etcd-client\") pod \"apiserver-7bbb656c7d-bsmz7\" (UID: \"aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.169562 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkjzc\" (UniqueName: \"kubernetes.io/projected/aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6-kube-api-access-tkjzc\") pod \"apiserver-7bbb656c7d-bsmz7\" (UID: \"aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.169587 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5275d8b9-8874-4c24-96b9-fdef4ef32d9b-proxy-tls\") pod \"machine-config-operator-74547568cd-5ncxl\" (UID: \"5275d8b9-8874-4c24-96b9-fdef4ef32d9b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ncxl" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.169610 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/5275d8b9-8874-4c24-96b9-fdef4ef32d9b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-5ncxl\" (UID: \"5275d8b9-8874-4c24-96b9-fdef4ef32d9b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ncxl" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.169633 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vhqk\" (UniqueName: \"kubernetes.io/projected/75d58d1e-e673-4305-9d09-2cfd323769fd-kube-api-access-7vhqk\") pod \"csi-hostpathplugin-vdrsg\" (UID: \"75d58d1e-e673-4305-9d09-2cfd323769fd\") " pod="hostpath-provisioner/csi-hostpathplugin-vdrsg" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.169678 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.169711 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bsmz7\" (UID: \"aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.169749 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b7ef6fd-c836-460d-bac0-ac2135ad77a2-config\") pod \"machine-approver-56656f9798-nprxk\" (UID: \"1b7ef6fd-c836-460d-bac0-ac2135ad77a2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nprxk" Mar 09 18:27:04 crc 
kubenswrapper[4821]: I0309 18:27:04.169792 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d2d34b0b-073c-47cb-9c2c-e2863dc06c23-srv-cert\") pod \"catalog-operator-68c6474976-jrc42\" (UID: \"d2d34b0b-073c-47cb-9c2c-e2863dc06c23\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jrc42" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.169812 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/934a74a7-234e-44f1-bc6e-a13661836b6b-serving-cert\") pod \"console-operator-58897d9998-gg4ds\" (UID: \"934a74a7-234e-44f1-bc6e-a13661836b6b\") " pod="openshift-console-operator/console-operator-58897d9998-gg4ds" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.169831 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c-config-volume\") pod \"collect-profiles-29551335-b9jvf\" (UID: \"aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551335-b9jvf" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.169851 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/43d18118-9a44-4b09-add9-7df52470e1c7-srv-cert\") pod \"olm-operator-6b444d44fb-x76nr\" (UID: \"43d18118-9a44-4b09-add9-7df52470e1c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x76nr" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.169869 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d87105c8-2398-44ec-b127-a2e30e767c1d-serving-cert\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-gk67k\" (UID: \"d87105c8-2398-44ec-b127-a2e30e767c1d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gk67k" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.170505 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shxs8\" (UniqueName: \"kubernetes.io/projected/60628f60-1633-4b77-a457-762d204bab20-kube-api-access-shxs8\") pod \"auto-csr-approver-29551346-phdwt\" (UID: \"60628f60-1633-4b77-a457-762d204bab20\") " pod="openshift-infra/auto-csr-approver-29551346-phdwt" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.170556 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/75d58d1e-e673-4305-9d09-2cfd323769fd-plugins-dir\") pod \"csi-hostpathplugin-vdrsg\" (UID: \"75d58d1e-e673-4305-9d09-2cfd323769fd\") " pod="hostpath-provisioner/csi-hostpathplugin-vdrsg" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.170593 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a28f17d7-69dc-4014-a347-a26f55d55ace-stats-auth\") pod \"router-default-5444994796-4ntmx\" (UID: \"a28f17d7-69dc-4014-a347-a26f55d55ace\") " pod="openshift-ingress/router-default-5444994796-4ntmx" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.170617 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.170641 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e5b560f-cc32-4a1a-8632-383befaabb5a-config-volume\") pod \"dns-default-c257s\" (UID: \"8e5b560f-cc32-4a1a-8632-383befaabb5a\") " pod="openshift-dns/dns-default-c257s" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.170662 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f080ed0b-402b-4ed1-89cd-5ee1af2b9735-etcd-service-ca\") pod \"etcd-operator-b45778765-sh5rp\" (UID: \"f080ed0b-402b-4ed1-89cd-5ee1af2b9735\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sh5rp" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.170683 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-service-ca\") pod \"console-f9d7485db-x9nnw\" (UID: \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\") " pod="openshift-console/console-f9d7485db-x9nnw" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.170710 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7227208b-b4f1-473c-9149-2a1c4d1cab32-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-46tk5\" (UID: \"7227208b-b4f1-473c-9149-2a1c4d1cab32\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-46tk5" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.170884 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-trusted-ca-bundle\") pod \"console-f9d7485db-x9nnw\" (UID: \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\") " pod="openshift-console/console-f9d7485db-x9nnw" 
Mar 09 18:27:04 crc kubenswrapper[4821]: E0309 18:27:04.170896 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:04.670874387 +0000 UTC m=+161.832250283 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.171056 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d2d34b0b-073c-47cb-9c2c-e2863dc06c23-profile-collector-cert\") pod \"catalog-operator-68c6474976-jrc42\" (UID: \"d2d34b0b-073c-47cb-9c2c-e2863dc06c23\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jrc42" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.171108 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1722e733-725b-4985-8365-0f8f3ad0d10d-webhook-cert\") pod \"packageserver-d55dfcdfc-2lkq2\" (UID: \"1722e733-725b-4985-8365-0f8f3ad0d10d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lkq2" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.171152 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-registry-tls\") pod \"image-registry-697d97f7c8-xbxp5\" 
(UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.171210 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8e5b560f-cc32-4a1a-8632-383befaabb5a-metrics-tls\") pod \"dns-default-c257s\" (UID: \"8e5b560f-cc32-4a1a-8632-383befaabb5a\") " pod="openshift-dns/dns-default-c257s" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.171205 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bsmz7\" (UID: \"aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.171377 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6-audit-policies\") pod \"apiserver-7bbb656c7d-bsmz7\" (UID: \"aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.171542 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1722e733-725b-4985-8365-0f8f3ad0d10d-apiservice-cert\") pod \"packageserver-d55dfcdfc-2lkq2\" (UID: \"1722e733-725b-4985-8365-0f8f3ad0d10d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lkq2" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.171615 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-oauth-serving-cert\") pod \"console-f9d7485db-x9nnw\" (UID: \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\") " pod="openshift-console/console-f9d7485db-x9nnw" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.171595 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.171812 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/75d58d1e-e673-4305-9d09-2cfd323769fd-socket-dir\") pod \"csi-hostpathplugin-vdrsg\" (UID: \"75d58d1e-e673-4305-9d09-2cfd323769fd\") " pod="hostpath-provisioner/csi-hostpathplugin-vdrsg" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.172190 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fjz7\" (UniqueName: \"kubernetes.io/projected/934a74a7-234e-44f1-bc6e-a13661836b6b-kube-api-access-9fjz7\") pod \"console-operator-58897d9998-gg4ds\" (UID: \"934a74a7-234e-44f1-bc6e-a13661836b6b\") " pod="openshift-console-operator/console-operator-58897d9998-gg4ds" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.172225 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6-audit-policies\") pod \"apiserver-7bbb656c7d-bsmz7\" (UID: \"aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.172273 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/75d58d1e-e673-4305-9d09-2cfd323769fd-registration-dir\") pod \"csi-hostpathplugin-vdrsg\" (UID: \"75d58d1e-e673-4305-9d09-2cfd323769fd\") " pod="hostpath-provisioner/csi-hostpathplugin-vdrsg" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.172365 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-console-config\") pod \"console-f9d7485db-x9nnw\" (UID: \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\") " pod="openshift-console/console-f9d7485db-x9nnw" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.172487 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-registry-certificates\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.172621 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45vwx\" (UniqueName: \"kubernetes.io/projected/a28f17d7-69dc-4014-a347-a26f55d55ace-kube-api-access-45vwx\") pod \"router-default-5444994796-4ntmx\" (UID: \"a28f17d7-69dc-4014-a347-a26f55d55ace\") " pod="openshift-ingress/router-default-5444994796-4ntmx" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.172710 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-console-serving-cert\") pod \"console-f9d7485db-x9nnw\" (UID: \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\") " 
pod="openshift-console/console-f9d7485db-x9nnw" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.172860 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6-serving-cert\") pod \"apiserver-7bbb656c7d-bsmz7\" (UID: \"aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.172932 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f080ed0b-402b-4ed1-89cd-5ee1af2b9735-serving-cert\") pod \"etcd-operator-b45778765-sh5rp\" (UID: \"f080ed0b-402b-4ed1-89cd-5ee1af2b9735\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sh5rp" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.173045 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2454\" (UniqueName: \"kubernetes.io/projected/f080ed0b-402b-4ed1-89cd-5ee1af2b9735-kube-api-access-z2454\") pod \"etcd-operator-b45778765-sh5rp\" (UID: \"f080ed0b-402b-4ed1-89cd-5ee1af2b9735\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sh5rp" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.173104 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8886c330-bce2-4801-be16-59eeddddaf6f-serving-cert\") pod \"openshift-config-operator-7777fb866f-l97hl\" (UID: \"8886c330-bce2-4801-be16-59eeddddaf6f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l97hl" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.173178 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/04e01207-4a95-4a32-84df-2d4c69d71fbf-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-kzlwq\" (UID: \"04e01207-4a95-4a32-84df-2d4c69d71fbf\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kzlwq" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.173315 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3699d56-8c7d-4ccf-9ebf-469d84dc6a56-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7nw2x\" (UID: \"d3699d56-8c7d-4ccf-9ebf-469d84dc6a56\") " pod="openshift-marketplace/marketplace-operator-79b997595-7nw2x" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.173421 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx8xb\" (UniqueName: \"kubernetes.io/projected/7dfd5d64-f6dc-40bd-83d1-57e685cd4535-kube-api-access-mx8xb\") pod \"package-server-manager-789f6589d5-ft9v6\" (UID: \"7dfd5d64-f6dc-40bd-83d1-57e685cd4535\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ft9v6" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.173623 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4ce9992-733a-4ac6-ab14-610ac4ced250-config\") pod \"kube-controller-manager-operator-78b949d7b-ctxcr\" (UID: \"e4ce9992-733a-4ac6-ab14-610ac4ced250\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctxcr" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.173683 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7dfd5d64-f6dc-40bd-83d1-57e685cd4535-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ft9v6\" (UID: 
\"7dfd5d64-f6dc-40bd-83d1-57e685cd4535\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ft9v6" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.174298 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.174403 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6-audit-dir\") pod \"apiserver-7bbb656c7d-bsmz7\" (UID: \"aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.174456 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d9788fbc-230b-4324-ba04-c706c0278411-certs\") pod \"machine-config-server-sst54\" (UID: \"d9788fbc-230b-4324-ba04-c706c0278411\") " pod="openshift-machine-config-operator/machine-config-server-sst54" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.174504 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6-audit-dir\") pod \"apiserver-7bbb656c7d-bsmz7\" (UID: \"aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.174538 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvbjb\" (UniqueName: 
\"kubernetes.io/projected/2e487bc2-9b7d-4845-a026-b27c82e6257a-kube-api-access-lvbjb\") pod \"ingress-canary-djphk\" (UID: \"2e487bc2-9b7d-4845-a026-b27c82e6257a\") " pod="openshift-ingress-canary/ingress-canary-djphk" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.174587 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-console-oauth-config\") pod \"console-f9d7485db-x9nnw\" (UID: \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\") " pod="openshift-console/console-f9d7485db-x9nnw" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.174637 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7227208b-b4f1-473c-9149-2a1c4d1cab32-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-46tk5\" (UID: \"7227208b-b4f1-473c-9149-2a1c4d1cab32\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-46tk5" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.174713 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1722e733-725b-4985-8365-0f8f3ad0d10d-tmpfs\") pod \"packageserver-d55dfcdfc-2lkq2\" (UID: \"1722e733-725b-4985-8365-0f8f3ad0d10d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lkq2" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.174765 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c-secret-volume\") pod \"collect-profiles-29551335-b9jvf\" (UID: \"aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551335-b9jvf" Mar 09 18:27:04 crc 
kubenswrapper[4821]: I0309 18:27:04.174865 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lg48\" (UniqueName: \"kubernetes.io/projected/04e006fb-bb29-4683-b3a9-a17698564fa6-kube-api-access-9lg48\") pod \"control-plane-machine-set-operator-78cbb6b69f-dbk5m\" (UID: \"04e006fb-bb29-4683-b3a9-a17698564fa6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dbk5m" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.174898 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/243465ec-ca31-4ec1-b5ca-1e1318f37c16-serving-cert\") pod \"service-ca-operator-777779d784-hh9sf\" (UID: \"243465ec-ca31-4ec1-b5ca-1e1318f37c16\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hh9sf" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.174976 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6-etcd-client\") pod \"apiserver-7bbb656c7d-bsmz7\" (UID: \"aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.175145 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ba8c991f-dcb9-4206-ad42-dedc0f6d04cb-signing-key\") pod \"service-ca-9c57cc56f-4pwqq\" (UID: \"ba8c991f-dcb9-4206-ad42-dedc0f6d04cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-4pwqq" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.175394 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a28f17d7-69dc-4014-a347-a26f55d55ace-metrics-certs\") pod \"router-default-5444994796-4ntmx\" 
(UID: \"a28f17d7-69dc-4014-a347-a26f55d55ace\") " pod="openshift-ingress/router-default-5444994796-4ntmx" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.175458 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f080ed0b-402b-4ed1-89cd-5ee1af2b9735-config\") pod \"etcd-operator-b45778765-sh5rp\" (UID: \"f080ed0b-402b-4ed1-89cd-5ee1af2b9735\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sh5rp" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.175513 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d9788fbc-230b-4324-ba04-c706c0278411-node-bootstrap-token\") pod \"machine-config-server-sst54\" (UID: \"d9788fbc-230b-4324-ba04-c706c0278411\") " pod="openshift-machine-config-operator/machine-config-server-sst54" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.175565 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/75d58d1e-e673-4305-9d09-2cfd323769fd-mountpoint-dir\") pod \"csi-hostpathplugin-vdrsg\" (UID: \"75d58d1e-e673-4305-9d09-2cfd323769fd\") " pod="hostpath-provisioner/csi-hostpathplugin-vdrsg" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.175624 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg6tm\" (UniqueName: \"kubernetes.io/projected/2528c75b-c6dc-4347-b2e5-8279c1861c53-kube-api-access-bg6tm\") pod \"migrator-59844c95c7-n5glb\" (UID: \"2528c75b-c6dc-4347-b2e5-8279c1861c53\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n5glb" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.175667 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: 
\"kubernetes.io/empty-dir/04e01207-4a95-4a32-84df-2d4c69d71fbf-ready\") pod \"cni-sysctl-allowlist-ds-kzlwq\" (UID: \"04e01207-4a95-4a32-84df-2d4c69d71fbf\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kzlwq" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.175733 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rppn9\" (UniqueName: \"kubernetes.io/projected/aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c-kube-api-access-rppn9\") pod \"collect-profiles-29551335-b9jvf\" (UID: \"aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551335-b9jvf" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.175800 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ec71021-0474-49ce-b545-4a973703b42b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-ppt7p\" (UID: \"1ec71021-0474-49ce-b545-4a973703b42b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ppt7p" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.175881 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3f4a1f9-c144-4ccb-ba8a-1a7861a5b665-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xxsvf\" (UID: \"e3f4a1f9-c144-4ccb-ba8a-1a7861a5b665\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxsvf" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.175981 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlv5x\" (UniqueName: \"kubernetes.io/projected/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-kube-api-access-jlv5x\") pod \"console-f9d7485db-x9nnw\" (UID: \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\") " 
pod="openshift-console/console-f9d7485db-x9nnw" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.176091 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/934a74a7-234e-44f1-bc6e-a13661836b6b-config\") pod \"console-operator-58897d9998-gg4ds\" (UID: \"934a74a7-234e-44f1-bc6e-a13661836b6b\") " pod="openshift-console-operator/console-operator-58897d9998-gg4ds" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.176195 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rssgv\" (UniqueName: \"kubernetes.io/projected/d9788fbc-230b-4324-ba04-c706c0278411-kube-api-access-rssgv\") pod \"machine-config-server-sst54\" (UID: \"d9788fbc-230b-4324-ba04-c706c0278411\") " pod="openshift-machine-config-operator/machine-config-server-sst54" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.176380 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/43d18118-9a44-4b09-add9-7df52470e1c7-profile-collector-cert\") pod \"olm-operator-6b444d44fb-x76nr\" (UID: \"43d18118-9a44-4b09-add9-7df52470e1c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x76nr" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.176445 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcndf\" (UniqueName: \"kubernetes.io/projected/d87105c8-2398-44ec-b127-a2e30e767c1d-kube-api-access-kcndf\") pod \"kube-storage-version-migrator-operator-b67b599dd-gk67k\" (UID: \"d87105c8-2398-44ec-b127-a2e30e767c1d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gk67k" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.176484 4821 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-registry-certificates\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.176524 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpl82\" (UniqueName: \"kubernetes.io/projected/d2d34b0b-073c-47cb-9c2c-e2863dc06c23-kube-api-access-cpl82\") pod \"catalog-operator-68c6474976-jrc42\" (UID: \"d2d34b0b-073c-47cb-9c2c-e2863dc06c23\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jrc42" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.176574 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqbm4\" (UniqueName: \"kubernetes.io/projected/ba8c991f-dcb9-4206-ad42-dedc0f6d04cb-kube-api-access-jqbm4\") pod \"service-ca-9c57cc56f-4pwqq\" (UID: \"ba8c991f-dcb9-4206-ad42-dedc0f6d04cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-4pwqq" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.176622 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8886c330-bce2-4801-be16-59eeddddaf6f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-l97hl\" (UID: \"8886c330-bce2-4801-be16-59eeddddaf6f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l97hl" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.176673 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d3699d56-8c7d-4ccf-9ebf-469d84dc6a56-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7nw2x\" (UID: 
\"d3699d56-8c7d-4ccf-9ebf-469d84dc6a56\") " pod="openshift-marketplace/marketplace-operator-79b997595-7nw2x" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.176798 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/934a74a7-234e-44f1-bc6e-a13661836b6b-trusted-ca\") pod \"console-operator-58897d9998-gg4ds\" (UID: \"934a74a7-234e-44f1-bc6e-a13661836b6b\") " pod="openshift-console-operator/console-operator-58897d9998-gg4ds" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.176918 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzcjb\" (UniqueName: \"kubernetes.io/projected/04e01207-4a95-4a32-84df-2d4c69d71fbf-kube-api-access-wzcjb\") pod \"cni-sysctl-allowlist-ds-kzlwq\" (UID: \"04e01207-4a95-4a32-84df-2d4c69d71fbf\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kzlwq" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.176964 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-registry-tls\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.176978 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a28f17d7-69dc-4014-a347-a26f55d55ace-default-certificate\") pod \"router-default-5444994796-4ntmx\" (UID: \"a28f17d7-69dc-4014-a347-a26f55d55ace\") " pod="openshift-ingress/router-default-5444994796-4ntmx" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.177100 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd5ls\" (UniqueName: 
\"kubernetes.io/projected/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-kube-api-access-gd5ls\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.177141 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f080ed0b-402b-4ed1-89cd-5ee1af2b9735-etcd-ca\") pod \"etcd-operator-b45778765-sh5rp\" (UID: \"f080ed0b-402b-4ed1-89cd-5ee1af2b9735\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sh5rp" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.177163 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f080ed0b-402b-4ed1-89cd-5ee1af2b9735-etcd-client\") pod \"etcd-operator-b45778765-sh5rp\" (UID: \"f080ed0b-402b-4ed1-89cd-5ee1af2b9735\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sh5rp" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.177235 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54hv5\" (UniqueName: \"kubernetes.io/projected/5275d8b9-8874-4c24-96b9-fdef4ef32d9b-kube-api-access-54hv5\") pod \"machine-config-operator-74547568cd-5ncxl\" (UID: \"5275d8b9-8874-4c24-96b9-fdef4ef32d9b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ncxl" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.177265 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2e487bc2-9b7d-4845-a026-b27c82e6257a-cert\") pod \"ingress-canary-djphk\" (UID: \"2e487bc2-9b7d-4845-a026-b27c82e6257a\") " pod="openshift-ingress-canary/ingress-canary-djphk" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.177297 4821 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/04e01207-4a95-4a32-84df-2d4c69d71fbf-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-kzlwq\" (UID: \"04e01207-4a95-4a32-84df-2d4c69d71fbf\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kzlwq" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.177347 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1b7ef6fd-c836-460d-bac0-ac2135ad77a2-auth-proxy-config\") pod \"machine-approver-56656f9798-nprxk\" (UID: \"1b7ef6fd-c836-460d-bac0-ac2135ad77a2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nprxk" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.177407 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjd22\" (UniqueName: \"kubernetes.io/projected/1722e733-725b-4985-8365-0f8f3ad0d10d-kube-api-access-cjd22\") pod \"packageserver-d55dfcdfc-2lkq2\" (UID: \"1722e733-725b-4985-8365-0f8f3ad0d10d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lkq2" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.177432 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4ce9992-733a-4ac6-ab14-610ac4ced250-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ctxcr\" (UID: \"e4ce9992-733a-4ac6-ab14-610ac4ced250\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctxcr" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.177493 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/9016680a-98b9-4503-a9d6-251355aaecc3-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-d9kvs\" (UID: \"9016680a-98b9-4503-a9d6-251355aaecc3\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d9kvs" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.177517 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3f4a1f9-c144-4ccb-ba8a-1a7861a5b665-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xxsvf\" (UID: \"e3f4a1f9-c144-4ccb-ba8a-1a7861a5b665\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxsvf" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.177562 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-bound-sa-token\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.177600 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-trusted-ca\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.177622 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn2j8\" (UniqueName: \"kubernetes.io/projected/8e5b560f-cc32-4a1a-8632-383befaabb5a-kube-api-access-wn2j8\") pod \"dns-default-c257s\" (UID: \"8e5b560f-cc32-4a1a-8632-383befaabb5a\") " pod="openshift-dns/dns-default-c257s" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.177647 4821 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bsmz7\" (UID: \"aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.177674 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e006fb-bb29-4683-b3a9-a17698564fa6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dbk5m\" (UID: \"04e006fb-bb29-4683-b3a9-a17698564fa6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dbk5m" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.179090 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6-encryption-config\") pod \"apiserver-7bbb656c7d-bsmz7\" (UID: \"aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.179491 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ba8c991f-dcb9-4206-ad42-dedc0f6d04cb-signing-cabundle\") pod \"service-ca-9c57cc56f-4pwqq\" (UID: \"ba8c991f-dcb9-4206-ad42-dedc0f6d04cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-4pwqq" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.179928 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg2m5\" (UniqueName: \"kubernetes.io/projected/9016680a-98b9-4503-a9d6-251355aaecc3-kube-api-access-vg2m5\") pod 
\"multus-admission-controller-857f4d67dd-d9kvs\" (UID: \"9016680a-98b9-4503-a9d6-251355aaecc3\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d9kvs" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.180116 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1ec71021-0474-49ce-b545-4a973703b42b-proxy-tls\") pod \"machine-config-controller-84d6567774-ppt7p\" (UID: \"1ec71021-0474-49ce-b545-4a973703b42b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ppt7p" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.180203 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h6n5\" (UniqueName: \"kubernetes.io/projected/43d18118-9a44-4b09-add9-7df52470e1c7-kube-api-access-4h6n5\") pod \"olm-operator-6b444d44fb-x76nr\" (UID: \"43d18118-9a44-4b09-add9-7df52470e1c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x76nr" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.180307 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4ce9992-733a-4ac6-ab14-610ac4ced250-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ctxcr\" (UID: \"e4ce9992-733a-4ac6-ab14-610ac4ced250\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctxcr" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.181003 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-trusted-ca\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.184640 
4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6-encryption-config\") pod \"apiserver-7bbb656c7d-bsmz7\" (UID: \"aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.187687 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.189975 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bsmz7\" (UID: \"aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.192954 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6-serving-cert\") pod \"apiserver-7bbb656c7d-bsmz7\" (UID: \"aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.214387 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.218204 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd2br\" (UniqueName: 
\"kubernetes.io/projected/ea3fa689-2665-423f-b717-f2e279be3831-kube-api-access-fd2br\") pod \"apiserver-76f77b778f-m6q6r\" (UID: \"ea3fa689-2665-423f-b717-f2e279be3831\") " pod="openshift-apiserver/apiserver-76f77b778f-m6q6r" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.233795 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.253945 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.273726 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.281413 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:04 crc kubenswrapper[4821]: E0309 18:27:04.281666 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:04.781611691 +0000 UTC m=+161.942987597 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.281823 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3f4a1f9-c144-4ccb-ba8a-1a7861a5b665-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xxsvf\" (UID: \"e3f4a1f9-c144-4ccb-ba8a-1a7861a5b665\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxsvf" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.282019 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn2j8\" (UniqueName: \"kubernetes.io/projected/8e5b560f-cc32-4a1a-8632-383befaabb5a-kube-api-access-wn2j8\") pod \"dns-default-c257s\" (UID: \"8e5b560f-cc32-4a1a-8632-383befaabb5a\") " pod="openshift-dns/dns-default-c257s" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.282137 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e006fb-bb29-4683-b3a9-a17698564fa6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dbk5m\" (UID: \"04e006fb-bb29-4683-b3a9-a17698564fa6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dbk5m" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.282283 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/ba8c991f-dcb9-4206-ad42-dedc0f6d04cb-signing-cabundle\") pod \"service-ca-9c57cc56f-4pwqq\" (UID: \"ba8c991f-dcb9-4206-ad42-dedc0f6d04cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-4pwqq" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.282399 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg2m5\" (UniqueName: \"kubernetes.io/projected/9016680a-98b9-4503-a9d6-251355aaecc3-kube-api-access-vg2m5\") pod \"multus-admission-controller-857f4d67dd-d9kvs\" (UID: \"9016680a-98b9-4503-a9d6-251355aaecc3\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d9kvs" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.282455 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4ce9992-733a-4ac6-ab14-610ac4ced250-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ctxcr\" (UID: \"e4ce9992-733a-4ac6-ab14-610ac4ced250\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctxcr" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.282511 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1ec71021-0474-49ce-b545-4a973703b42b-proxy-tls\") pod \"machine-config-controller-84d6567774-ppt7p\" (UID: \"1ec71021-0474-49ce-b545-4a973703b42b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ppt7p" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.282561 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h6n5\" (UniqueName: \"kubernetes.io/projected/43d18118-9a44-4b09-add9-7df52470e1c7-kube-api-access-4h6n5\") pod \"olm-operator-6b444d44fb-x76nr\" (UID: \"43d18118-9a44-4b09-add9-7df52470e1c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x76nr" Mar 09 18:27:04 crc 
kubenswrapper[4821]: I0309 18:27:04.282623 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/243465ec-ca31-4ec1-b5ca-1e1318f37c16-config\") pod \"service-ca-operator-777779d784-hh9sf\" (UID: \"243465ec-ca31-4ec1-b5ca-1e1318f37c16\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hh9sf" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.282671 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f4a1f9-c144-4ccb-ba8a-1a7861a5b665-config\") pod \"kube-apiserver-operator-766d6c64bb-xxsvf\" (UID: \"e3f4a1f9-c144-4ccb-ba8a-1a7861a5b665\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxsvf" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.282725 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ttwm\" (UniqueName: \"kubernetes.io/projected/d3699d56-8c7d-4ccf-9ebf-469d84dc6a56-kube-api-access-4ttwm\") pod \"marketplace-operator-79b997595-7nw2x\" (UID: \"d3699d56-8c7d-4ccf-9ebf-469d84dc6a56\") " pod="openshift-marketplace/marketplace-operator-79b997595-7nw2x" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.282778 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1b7ef6fd-c836-460d-bac0-ac2135ad77a2-machine-approver-tls\") pod \"machine-approver-56656f9798-nprxk\" (UID: \"1b7ef6fd-c836-460d-bac0-ac2135ad77a2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nprxk" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.282831 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7227208b-b4f1-473c-9149-2a1c4d1cab32-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-46tk5\" (UID: 
\"7227208b-b4f1-473c-9149-2a1c4d1cab32\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-46tk5" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.282900 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdttn\" (UniqueName: \"kubernetes.io/projected/8886c330-bce2-4801-be16-59eeddddaf6f-kube-api-access-kdttn\") pod \"openshift-config-operator-7777fb866f-l97hl\" (UID: \"8886c330-bce2-4801-be16-59eeddddaf6f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l97hl" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.282953 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a28f17d7-69dc-4014-a347-a26f55d55ace-service-ca-bundle\") pod \"router-default-5444994796-4ntmx\" (UID: \"a28f17d7-69dc-4014-a347-a26f55d55ace\") " pod="openshift-ingress/router-default-5444994796-4ntmx" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.283001 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/75d58d1e-e673-4305-9d09-2cfd323769fd-csi-data-dir\") pod \"csi-hostpathplugin-vdrsg\" (UID: \"75d58d1e-e673-4305-9d09-2cfd323769fd\") " pod="hostpath-provisioner/csi-hostpathplugin-vdrsg" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.283053 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d87105c8-2398-44ec-b127-a2e30e767c1d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gk67k\" (UID: \"d87105c8-2398-44ec-b127-a2e30e767c1d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gk67k" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.283120 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" 
(UniqueName: \"kubernetes.io/configmap/5275d8b9-8874-4c24-96b9-fdef4ef32d9b-images\") pod \"machine-config-operator-74547568cd-5ncxl\" (UID: \"5275d8b9-8874-4c24-96b9-fdef4ef32d9b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ncxl" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.283167 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn25z\" (UniqueName: \"kubernetes.io/projected/243465ec-ca31-4ec1-b5ca-1e1318f37c16-kube-api-access-pn25z\") pod \"service-ca-operator-777779d784-hh9sf\" (UID: \"243465ec-ca31-4ec1-b5ca-1e1318f37c16\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hh9sf" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.283208 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5hc9\" (UniqueName: \"kubernetes.io/projected/1ec71021-0474-49ce-b545-4a973703b42b-kube-api-access-q5hc9\") pod \"machine-config-controller-84d6567774-ppt7p\" (UID: \"1ec71021-0474-49ce-b545-4a973703b42b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ppt7p" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.283254 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcdtr\" (UniqueName: \"kubernetes.io/projected/1b7ef6fd-c836-460d-bac0-ac2135ad77a2-kube-api-access-jcdtr\") pod \"machine-approver-56656f9798-nprxk\" (UID: \"1b7ef6fd-c836-460d-bac0-ac2135ad77a2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nprxk" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.283285 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/75d58d1e-e673-4305-9d09-2cfd323769fd-csi-data-dir\") pod \"csi-hostpathplugin-vdrsg\" (UID: \"75d58d1e-e673-4305-9d09-2cfd323769fd\") " pod="hostpath-provisioner/csi-hostpathplugin-vdrsg" Mar 09 18:27:04 crc 
kubenswrapper[4821]: I0309 18:27:04.283304 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5275d8b9-8874-4c24-96b9-fdef4ef32d9b-proxy-tls\") pod \"machine-config-operator-74547568cd-5ncxl\" (UID: \"5275d8b9-8874-4c24-96b9-fdef4ef32d9b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ncxl" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.283475 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5275d8b9-8874-4c24-96b9-fdef4ef32d9b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-5ncxl\" (UID: \"5275d8b9-8874-4c24-96b9-fdef4ef32d9b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ncxl" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.283528 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vhqk\" (UniqueName: \"kubernetes.io/projected/75d58d1e-e673-4305-9d09-2cfd323769fd-kube-api-access-7vhqk\") pod \"csi-hostpathplugin-vdrsg\" (UID: \"75d58d1e-e673-4305-9d09-2cfd323769fd\") " pod="hostpath-provisioner/csi-hostpathplugin-vdrsg" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.283603 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.283659 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b7ef6fd-c836-460d-bac0-ac2135ad77a2-config\") pod \"machine-approver-56656f9798-nprxk\" (UID: 
\"1b7ef6fd-c836-460d-bac0-ac2135ad77a2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nprxk" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.283707 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d2d34b0b-073c-47cb-9c2c-e2863dc06c23-srv-cert\") pod \"catalog-operator-68c6474976-jrc42\" (UID: \"d2d34b0b-073c-47cb-9c2c-e2863dc06c23\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jrc42" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.283738 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/934a74a7-234e-44f1-bc6e-a13661836b6b-serving-cert\") pod \"console-operator-58897d9998-gg4ds\" (UID: \"934a74a7-234e-44f1-bc6e-a13661836b6b\") " pod="openshift-console-operator/console-operator-58897d9998-gg4ds" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.283772 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c-config-volume\") pod \"collect-profiles-29551335-b9jvf\" (UID: \"aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551335-b9jvf" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.283815 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/43d18118-9a44-4b09-add9-7df52470e1c7-srv-cert\") pod \"olm-operator-6b444d44fb-x76nr\" (UID: \"43d18118-9a44-4b09-add9-7df52470e1c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x76nr" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.283862 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d87105c8-2398-44ec-b127-a2e30e767c1d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gk67k\" (UID: \"d87105c8-2398-44ec-b127-a2e30e767c1d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gk67k" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.283911 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shxs8\" (UniqueName: \"kubernetes.io/projected/60628f60-1633-4b77-a457-762d204bab20-kube-api-access-shxs8\") pod \"auto-csr-approver-29551346-phdwt\" (UID: \"60628f60-1633-4b77-a457-762d204bab20\") " pod="openshift-infra/auto-csr-approver-29551346-phdwt" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.283963 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/75d58d1e-e673-4305-9d09-2cfd323769fd-plugins-dir\") pod \"csi-hostpathplugin-vdrsg\" (UID: \"75d58d1e-e673-4305-9d09-2cfd323769fd\") " pod="hostpath-provisioner/csi-hostpathplugin-vdrsg" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.284011 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a28f17d7-69dc-4014-a347-a26f55d55ace-stats-auth\") pod \"router-default-5444994796-4ntmx\" (UID: \"a28f17d7-69dc-4014-a347-a26f55d55ace\") " pod="openshift-ingress/router-default-5444994796-4ntmx" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.284051 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e5b560f-cc32-4a1a-8632-383befaabb5a-config-volume\") pod \"dns-default-c257s\" (UID: \"8e5b560f-cc32-4a1a-8632-383befaabb5a\") " pod="openshift-dns/dns-default-c257s" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.284084 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f080ed0b-402b-4ed1-89cd-5ee1af2b9735-etcd-service-ca\") pod \"etcd-operator-b45778765-sh5rp\" (UID: \"f080ed0b-402b-4ed1-89cd-5ee1af2b9735\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sh5rp" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.284120 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-service-ca\") pod \"console-f9d7485db-x9nnw\" (UID: \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\") " pod="openshift-console/console-f9d7485db-x9nnw" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.284156 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7227208b-b4f1-473c-9149-2a1c4d1cab32-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-46tk5\" (UID: \"7227208b-b4f1-473c-9149-2a1c4d1cab32\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-46tk5" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.284421 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1722e733-725b-4985-8365-0f8f3ad0d10d-webhook-cert\") pod \"packageserver-d55dfcdfc-2lkq2\" (UID: \"1722e733-725b-4985-8365-0f8f3ad0d10d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lkq2" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.284453 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-trusted-ca-bundle\") pod \"console-f9d7485db-x9nnw\" (UID: \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\") " pod="openshift-console/console-f9d7485db-x9nnw" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.284496 4821 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d2d34b0b-073c-47cb-9c2c-e2863dc06c23-profile-collector-cert\") pod \"catalog-operator-68c6474976-jrc42\" (UID: \"d2d34b0b-073c-47cb-9c2c-e2863dc06c23\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jrc42" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.284547 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-oauth-serving-cert\") pod \"console-f9d7485db-x9nnw\" (UID: \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\") " pod="openshift-console/console-f9d7485db-x9nnw" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.284594 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8e5b560f-cc32-4a1a-8632-383befaabb5a-metrics-tls\") pod \"dns-default-c257s\" (UID: \"8e5b560f-cc32-4a1a-8632-383befaabb5a\") " pod="openshift-dns/dns-default-c257s" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.284642 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1722e733-725b-4985-8365-0f8f3ad0d10d-apiservice-cert\") pod \"packageserver-d55dfcdfc-2lkq2\" (UID: \"1722e733-725b-4985-8365-0f8f3ad0d10d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lkq2" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.284693 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/75d58d1e-e673-4305-9d09-2cfd323769fd-socket-dir\") pod \"csi-hostpathplugin-vdrsg\" (UID: \"75d58d1e-e673-4305-9d09-2cfd323769fd\") " pod="hostpath-provisioner/csi-hostpathplugin-vdrsg" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.284788 4821 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9fjz7\" (UniqueName: \"kubernetes.io/projected/934a74a7-234e-44f1-bc6e-a13661836b6b-kube-api-access-9fjz7\") pod \"console-operator-58897d9998-gg4ds\" (UID: \"934a74a7-234e-44f1-bc6e-a13661836b6b\") " pod="openshift-console-operator/console-operator-58897d9998-gg4ds" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.284837 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/75d58d1e-e673-4305-9d09-2cfd323769fd-registration-dir\") pod \"csi-hostpathplugin-vdrsg\" (UID: \"75d58d1e-e673-4305-9d09-2cfd323769fd\") " pod="hostpath-provisioner/csi-hostpathplugin-vdrsg" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.284886 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45vwx\" (UniqueName: \"kubernetes.io/projected/a28f17d7-69dc-4014-a347-a26f55d55ace-kube-api-access-45vwx\") pod \"router-default-5444994796-4ntmx\" (UID: \"a28f17d7-69dc-4014-a347-a26f55d55ace\") " pod="openshift-ingress/router-default-5444994796-4ntmx" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.284932 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-console-config\") pod \"console-f9d7485db-x9nnw\" (UID: \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\") " pod="openshift-console/console-f9d7485db-x9nnw" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.284992 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f080ed0b-402b-4ed1-89cd-5ee1af2b9735-serving-cert\") pod \"etcd-operator-b45778765-sh5rp\" (UID: \"f080ed0b-402b-4ed1-89cd-5ee1af2b9735\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sh5rp" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.285039 
4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-console-serving-cert\") pod \"console-f9d7485db-x9nnw\" (UID: \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\") " pod="openshift-console/console-f9d7485db-x9nnw" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.285130 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2454\" (UniqueName: \"kubernetes.io/projected/f080ed0b-402b-4ed1-89cd-5ee1af2b9735-kube-api-access-z2454\") pod \"etcd-operator-b45778765-sh5rp\" (UID: \"f080ed0b-402b-4ed1-89cd-5ee1af2b9735\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sh5rp" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.284489 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d87105c8-2398-44ec-b127-a2e30e767c1d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gk67k\" (UID: \"d87105c8-2398-44ec-b127-a2e30e767c1d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gk67k" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.285255 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/75d58d1e-e673-4305-9d09-2cfd323769fd-socket-dir\") pod \"csi-hostpathplugin-vdrsg\" (UID: \"75d58d1e-e673-4305-9d09-2cfd323769fd\") " pod="hostpath-provisioner/csi-hostpathplugin-vdrsg" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.285178 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8886c330-bce2-4801-be16-59eeddddaf6f-serving-cert\") pod \"openshift-config-operator-7777fb866f-l97hl\" (UID: \"8886c330-bce2-4801-be16-59eeddddaf6f\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-l97hl" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.285396 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/04e01207-4a95-4a32-84df-2d4c69d71fbf-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-kzlwq\" (UID: \"04e01207-4a95-4a32-84df-2d4c69d71fbf\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kzlwq" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.285441 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3699d56-8c7d-4ccf-9ebf-469d84dc6a56-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7nw2x\" (UID: \"d3699d56-8c7d-4ccf-9ebf-469d84dc6a56\") " pod="openshift-marketplace/marketplace-operator-79b997595-7nw2x" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.285463 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7dfd5d64-f6dc-40bd-83d1-57e685cd4535-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ft9v6\" (UID: \"7dfd5d64-f6dc-40bd-83d1-57e685cd4535\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ft9v6" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.285502 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx8xb\" (UniqueName: \"kubernetes.io/projected/7dfd5d64-f6dc-40bd-83d1-57e685cd4535-kube-api-access-mx8xb\") pod \"package-server-manager-789f6589d5-ft9v6\" (UID: \"7dfd5d64-f6dc-40bd-83d1-57e685cd4535\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ft9v6" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.285531 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/e4ce9992-733a-4ac6-ab14-610ac4ced250-config\") pod \"kube-controller-manager-operator-78b949d7b-ctxcr\" (UID: \"e4ce9992-733a-4ac6-ab14-610ac4ced250\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctxcr" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.285555 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d9788fbc-230b-4324-ba04-c706c0278411-certs\") pod \"machine-config-server-sst54\" (UID: \"d9788fbc-230b-4324-ba04-c706c0278411\") " pod="openshift-machine-config-operator/machine-config-server-sst54" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.285568 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5275d8b9-8874-4c24-96b9-fdef4ef32d9b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-5ncxl\" (UID: \"5275d8b9-8874-4c24-96b9-fdef4ef32d9b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ncxl" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.285595 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-console-oauth-config\") pod \"console-f9d7485db-x9nnw\" (UID: \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\") " pod="openshift-console/console-f9d7485db-x9nnw" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.285619 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvbjb\" (UniqueName: \"kubernetes.io/projected/2e487bc2-9b7d-4845-a026-b27c82e6257a-kube-api-access-lvbjb\") pod \"ingress-canary-djphk\" (UID: \"2e487bc2-9b7d-4845-a026-b27c82e6257a\") " pod="openshift-ingress-canary/ingress-canary-djphk" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.285638 4821 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7227208b-b4f1-473c-9149-2a1c4d1cab32-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-46tk5\" (UID: \"7227208b-b4f1-473c-9149-2a1c4d1cab32\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-46tk5" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.285657 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/75d58d1e-e673-4305-9d09-2cfd323769fd-registration-dir\") pod \"csi-hostpathplugin-vdrsg\" (UID: \"75d58d1e-e673-4305-9d09-2cfd323769fd\") " pod="hostpath-provisioner/csi-hostpathplugin-vdrsg" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.285677 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1722e733-725b-4985-8365-0f8f3ad0d10d-tmpfs\") pod \"packageserver-d55dfcdfc-2lkq2\" (UID: \"1722e733-725b-4985-8365-0f8f3ad0d10d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lkq2" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.285694 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c-secret-volume\") pod \"collect-profiles-29551335-b9jvf\" (UID: \"aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551335-b9jvf" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.284832 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a28f17d7-69dc-4014-a347-a26f55d55ace-service-ca-bundle\") pod \"router-default-5444994796-4ntmx\" (UID: \"a28f17d7-69dc-4014-a347-a26f55d55ace\") " pod="openshift-ingress/router-default-5444994796-4ntmx" 
Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.285714 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lg48\" (UniqueName: \"kubernetes.io/projected/04e006fb-bb29-4683-b3a9-a17698564fa6-kube-api-access-9lg48\") pod \"control-plane-machine-set-operator-78cbb6b69f-dbk5m\" (UID: \"04e006fb-bb29-4683-b3a9-a17698564fa6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dbk5m" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.285730 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f080ed0b-402b-4ed1-89cd-5ee1af2b9735-etcd-service-ca\") pod \"etcd-operator-b45778765-sh5rp\" (UID: \"f080ed0b-402b-4ed1-89cd-5ee1af2b9735\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sh5rp" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.285759 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/243465ec-ca31-4ec1-b5ca-1e1318f37c16-serving-cert\") pod \"service-ca-operator-777779d784-hh9sf\" (UID: \"243465ec-ca31-4ec1-b5ca-1e1318f37c16\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hh9sf" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.285782 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/04e01207-4a95-4a32-84df-2d4c69d71fbf-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-kzlwq\" (UID: \"04e01207-4a95-4a32-84df-2d4c69d71fbf\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kzlwq" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.285842 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ba8c991f-dcb9-4206-ad42-dedc0f6d04cb-signing-key\") pod \"service-ca-9c57cc56f-4pwqq\" (UID: 
\"ba8c991f-dcb9-4206-ad42-dedc0f6d04cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-4pwqq" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.285882 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a28f17d7-69dc-4014-a347-a26f55d55ace-metrics-certs\") pod \"router-default-5444994796-4ntmx\" (UID: \"a28f17d7-69dc-4014-a347-a26f55d55ace\") " pod="openshift-ingress/router-default-5444994796-4ntmx" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.285918 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f080ed0b-402b-4ed1-89cd-5ee1af2b9735-config\") pod \"etcd-operator-b45778765-sh5rp\" (UID: \"f080ed0b-402b-4ed1-89cd-5ee1af2b9735\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sh5rp" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.285955 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d9788fbc-230b-4324-ba04-c706c0278411-node-bootstrap-token\") pod \"machine-config-server-sst54\" (UID: \"d9788fbc-230b-4324-ba04-c706c0278411\") " pod="openshift-machine-config-operator/machine-config-server-sst54" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.285994 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/75d58d1e-e673-4305-9d09-2cfd323769fd-mountpoint-dir\") pod \"csi-hostpathplugin-vdrsg\" (UID: \"75d58d1e-e673-4305-9d09-2cfd323769fd\") " pod="hostpath-provisioner/csi-hostpathplugin-vdrsg" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.286035 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg6tm\" (UniqueName: \"kubernetes.io/projected/2528c75b-c6dc-4347-b2e5-8279c1861c53-kube-api-access-bg6tm\") pod 
\"migrator-59844c95c7-n5glb\" (UID: \"2528c75b-c6dc-4347-b2e5-8279c1861c53\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n5glb" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.286069 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3f4a1f9-c144-4ccb-ba8a-1a7861a5b665-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xxsvf\" (UID: \"e3f4a1f9-c144-4ccb-ba8a-1a7861a5b665\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxsvf" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.286104 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/04e01207-4a95-4a32-84df-2d4c69d71fbf-ready\") pod \"cni-sysctl-allowlist-ds-kzlwq\" (UID: \"04e01207-4a95-4a32-84df-2d4c69d71fbf\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kzlwq" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.286208 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rppn9\" (UniqueName: \"kubernetes.io/projected/aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c-kube-api-access-rppn9\") pod \"collect-profiles-29551335-b9jvf\" (UID: \"aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551335-b9jvf" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.286244 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ec71021-0474-49ce-b545-4a973703b42b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-ppt7p\" (UID: \"1ec71021-0474-49ce-b545-4a973703b42b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ppt7p" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.286283 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-jlv5x\" (UniqueName: \"kubernetes.io/projected/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-kube-api-access-jlv5x\") pod \"console-f9d7485db-x9nnw\" (UID: \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\") " pod="openshift-console/console-f9d7485db-x9nnw" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.286371 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/934a74a7-234e-44f1-bc6e-a13661836b6b-config\") pod \"console-operator-58897d9998-gg4ds\" (UID: \"934a74a7-234e-44f1-bc6e-a13661836b6b\") " pod="openshift-console-operator/console-operator-58897d9998-gg4ds" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.286405 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rssgv\" (UniqueName: \"kubernetes.io/projected/d9788fbc-230b-4324-ba04-c706c0278411-kube-api-access-rssgv\") pod \"machine-config-server-sst54\" (UID: \"d9788fbc-230b-4324-ba04-c706c0278411\") " pod="openshift-machine-config-operator/machine-config-server-sst54" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.286446 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/43d18118-9a44-4b09-add9-7df52470e1c7-profile-collector-cert\") pod \"olm-operator-6b444d44fb-x76nr\" (UID: \"43d18118-9a44-4b09-add9-7df52470e1c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x76nr" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.286482 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcndf\" (UniqueName: \"kubernetes.io/projected/d87105c8-2398-44ec-b127-a2e30e767c1d-kube-api-access-kcndf\") pod \"kube-storage-version-migrator-operator-b67b599dd-gk67k\" (UID: \"d87105c8-2398-44ec-b127-a2e30e767c1d\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gk67k" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.286527 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpl82\" (UniqueName: \"kubernetes.io/projected/d2d34b0b-073c-47cb-9c2c-e2863dc06c23-kube-api-access-cpl82\") pod \"catalog-operator-68c6474976-jrc42\" (UID: \"d2d34b0b-073c-47cb-9c2c-e2863dc06c23\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jrc42" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.286562 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqbm4\" (UniqueName: \"kubernetes.io/projected/ba8c991f-dcb9-4206-ad42-dedc0f6d04cb-kube-api-access-jqbm4\") pod \"service-ca-9c57cc56f-4pwqq\" (UID: \"ba8c991f-dcb9-4206-ad42-dedc0f6d04cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-4pwqq" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.286601 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8886c330-bce2-4801-be16-59eeddddaf6f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-l97hl\" (UID: \"8886c330-bce2-4801-be16-59eeddddaf6f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l97hl" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.286633 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/934a74a7-234e-44f1-bc6e-a13661836b6b-trusted-ca\") pod \"console-operator-58897d9998-gg4ds\" (UID: \"934a74a7-234e-44f1-bc6e-a13661836b6b\") " pod="openshift-console-operator/console-operator-58897d9998-gg4ds" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.286666 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" 
(UniqueName: \"kubernetes.io/secret/d3699d56-8c7d-4ccf-9ebf-469d84dc6a56-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7nw2x\" (UID: \"d3699d56-8c7d-4ccf-9ebf-469d84dc6a56\") " pod="openshift-marketplace/marketplace-operator-79b997595-7nw2x" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.286728 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzcjb\" (UniqueName: \"kubernetes.io/projected/04e01207-4a95-4a32-84df-2d4c69d71fbf-kube-api-access-wzcjb\") pod \"cni-sysctl-allowlist-ds-kzlwq\" (UID: \"04e01207-4a95-4a32-84df-2d4c69d71fbf\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kzlwq" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.286767 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a28f17d7-69dc-4014-a347-a26f55d55ace-default-certificate\") pod \"router-default-5444994796-4ntmx\" (UID: \"a28f17d7-69dc-4014-a347-a26f55d55ace\") " pod="openshift-ingress/router-default-5444994796-4ntmx" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.286822 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f080ed0b-402b-4ed1-89cd-5ee1af2b9735-etcd-ca\") pod \"etcd-operator-b45778765-sh5rp\" (UID: \"f080ed0b-402b-4ed1-89cd-5ee1af2b9735\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sh5rp" Mar 09 18:27:04 crc kubenswrapper[4821]: E0309 18:27:04.286841 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:04.786817611 +0000 UTC m=+161.948193507 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.286883 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f080ed0b-402b-4ed1-89cd-5ee1af2b9735-etcd-client\") pod \"etcd-operator-b45778765-sh5rp\" (UID: \"f080ed0b-402b-4ed1-89cd-5ee1af2b9735\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sh5rp" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.286967 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54hv5\" (UniqueName: \"kubernetes.io/projected/5275d8b9-8874-4c24-96b9-fdef4ef32d9b-kube-api-access-54hv5\") pod \"machine-config-operator-74547568cd-5ncxl\" (UID: \"5275d8b9-8874-4c24-96b9-fdef4ef32d9b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ncxl" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.287004 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2e487bc2-9b7d-4845-a026-b27c82e6257a-cert\") pod \"ingress-canary-djphk\" (UID: \"2e487bc2-9b7d-4845-a026-b27c82e6257a\") " pod="openshift-ingress-canary/ingress-canary-djphk" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.287054 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/04e01207-4a95-4a32-84df-2d4c69d71fbf-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-kzlwq\" (UID: 
\"04e01207-4a95-4a32-84df-2d4c69d71fbf\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kzlwq" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.287104 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1b7ef6fd-c836-460d-bac0-ac2135ad77a2-auth-proxy-config\") pod \"machine-approver-56656f9798-nprxk\" (UID: \"1b7ef6fd-c836-460d-bac0-ac2135ad77a2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nprxk" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.287138 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9016680a-98b9-4503-a9d6-251355aaecc3-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-d9kvs\" (UID: \"9016680a-98b9-4503-a9d6-251355aaecc3\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d9kvs" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.287176 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjd22\" (UniqueName: \"kubernetes.io/projected/1722e733-725b-4985-8365-0f8f3ad0d10d-kube-api-access-cjd22\") pod \"packageserver-d55dfcdfc-2lkq2\" (UID: \"1722e733-725b-4985-8365-0f8f3ad0d10d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lkq2" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.287212 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4ce9992-733a-4ac6-ab14-610ac4ced250-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ctxcr\" (UID: \"e4ce9992-733a-4ac6-ab14-610ac4ced250\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctxcr" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.287402 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" 
(UniqueName: \"kubernetes.io/host-path/75d58d1e-e673-4305-9d09-2cfd323769fd-plugins-dir\") pod \"csi-hostpathplugin-vdrsg\" (UID: \"75d58d1e-e673-4305-9d09-2cfd323769fd\") " pod="hostpath-provisioner/csi-hostpathplugin-vdrsg" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.287792 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f080ed0b-402b-4ed1-89cd-5ee1af2b9735-etcd-ca\") pod \"etcd-operator-b45778765-sh5rp\" (UID: \"f080ed0b-402b-4ed1-89cd-5ee1af2b9735\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sh5rp" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.288037 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-oauth-serving-cert\") pod \"console-f9d7485db-x9nnw\" (UID: \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\") " pod="openshift-console/console-f9d7485db-x9nnw" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.288835 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f080ed0b-402b-4ed1-89cd-5ee1af2b9735-config\") pod \"etcd-operator-b45778765-sh5rp\" (UID: \"f080ed0b-402b-4ed1-89cd-5ee1af2b9735\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sh5rp" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.289268 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/75d58d1e-e673-4305-9d09-2cfd323769fd-mountpoint-dir\") pod \"csi-hostpathplugin-vdrsg\" (UID: \"75d58d1e-e673-4305-9d09-2cfd323769fd\") " pod="hostpath-provisioner/csi-hostpathplugin-vdrsg" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.290048 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-trusted-ca-bundle\") pod \"console-f9d7485db-x9nnw\" (UID: \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\") " pod="openshift-console/console-f9d7485db-x9nnw" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.290111 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8886c330-bce2-4801-be16-59eeddddaf6f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-l97hl\" (UID: \"8886c330-bce2-4801-be16-59eeddddaf6f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l97hl" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.290838 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1722e733-725b-4985-8365-0f8f3ad0d10d-tmpfs\") pod \"packageserver-d55dfcdfc-2lkq2\" (UID: \"1722e733-725b-4985-8365-0f8f3ad0d10d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lkq2" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.291227 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1ec71021-0474-49ce-b545-4a973703b42b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-ppt7p\" (UID: \"1ec71021-0474-49ce-b545-4a973703b42b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ppt7p" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.291434 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/04e01207-4a95-4a32-84df-2d4c69d71fbf-ready\") pod \"cni-sysctl-allowlist-ds-kzlwq\" (UID: \"04e01207-4a95-4a32-84df-2d4c69d71fbf\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kzlwq" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.292226 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-service-ca\") pod \"console-f9d7485db-x9nnw\" (UID: \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\") " pod="openshift-console/console-f9d7485db-x9nnw" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.293825 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-console-config\") pod \"console-f9d7485db-x9nnw\" (UID: \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\") " pod="openshift-console/console-f9d7485db-x9nnw" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.294534 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-console-serving-cert\") pod \"console-f9d7485db-x9nnw\" (UID: \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\") " pod="openshift-console/console-f9d7485db-x9nnw" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.294744 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a28f17d7-69dc-4014-a347-a26f55d55ace-stats-auth\") pod \"router-default-5444994796-4ntmx\" (UID: \"a28f17d7-69dc-4014-a347-a26f55d55ace\") " pod="openshift-ingress/router-default-5444994796-4ntmx" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.294760 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4ce9992-733a-4ac6-ab14-610ac4ced250-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ctxcr\" (UID: \"e4ce9992-733a-4ac6-ab14-610ac4ced250\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctxcr" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.294939 4821 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.294965 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d2d34b0b-073c-47cb-9c2c-e2863dc06c23-profile-collector-cert\") pod \"catalog-operator-68c6474976-jrc42\" (UID: \"d2d34b0b-073c-47cb-9c2c-e2863dc06c23\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jrc42" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.295279 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a28f17d7-69dc-4014-a347-a26f55d55ace-metrics-certs\") pod \"router-default-5444994796-4ntmx\" (UID: \"a28f17d7-69dc-4014-a347-a26f55d55ace\") " pod="openshift-ingress/router-default-5444994796-4ntmx" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.296000 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f080ed0b-402b-4ed1-89cd-5ee1af2b9735-etcd-client\") pod \"etcd-operator-b45778765-sh5rp\" (UID: \"f080ed0b-402b-4ed1-89cd-5ee1af2b9735\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sh5rp" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.297288 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c-secret-volume\") pod \"collect-profiles-29551335-b9jvf\" (UID: \"aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551335-b9jvf" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.297696 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f080ed0b-402b-4ed1-89cd-5ee1af2b9735-serving-cert\") pod \"etcd-operator-b45778765-sh5rp\" (UID: 
\"f080ed0b-402b-4ed1-89cd-5ee1af2b9735\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sh5rp" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.297840 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/43d18118-9a44-4b09-add9-7df52470e1c7-profile-collector-cert\") pod \"olm-operator-6b444d44fb-x76nr\" (UID: \"43d18118-9a44-4b09-add9-7df52470e1c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x76nr" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.298132 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d2d34b0b-073c-47cb-9c2c-e2863dc06c23-srv-cert\") pod \"catalog-operator-68c6474976-jrc42\" (UID: \"d2d34b0b-073c-47cb-9c2c-e2863dc06c23\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jrc42" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.298608 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a28f17d7-69dc-4014-a347-a26f55d55ace-default-certificate\") pod \"router-default-5444994796-4ntmx\" (UID: \"a28f17d7-69dc-4014-a347-a26f55d55ace\") " pod="openshift-ingress/router-default-5444994796-4ntmx" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.298624 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3f4a1f9-c144-4ccb-ba8a-1a7861a5b665-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xxsvf\" (UID: \"e3f4a1f9-c144-4ccb-ba8a-1a7861a5b665\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxsvf" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.298725 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-console-oauth-config\") pod \"console-f9d7485db-x9nnw\" (UID: \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\") " pod="openshift-console/console-f9d7485db-x9nnw" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.298795 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d87105c8-2398-44ec-b127-a2e30e767c1d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gk67k\" (UID: \"d87105c8-2398-44ec-b127-a2e30e767c1d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gk67k" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.313531 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.326785 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1b7ef6fd-c836-460d-bac0-ac2135ad77a2-machine-approver-tls\") pod \"machine-approver-56656f9798-nprxk\" (UID: \"1b7ef6fd-c836-460d-bac0-ac2135ad77a2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nprxk" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.334153 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.353446 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.373501 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.388788 4821 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1ec71021-0474-49ce-b545-4a973703b42b-proxy-tls\") pod \"machine-config-controller-84d6567774-ppt7p\" (UID: \"1ec71021-0474-49ce-b545-4a973703b42b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ppt7p" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.391832 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:04 crc kubenswrapper[4821]: E0309 18:27:04.392003 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:04.891982333 +0000 UTC m=+162.053358199 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.392085 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:04 crc kubenswrapper[4821]: E0309 18:27:04.392641 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:04.892630631 +0000 UTC m=+162.054006497 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.395279 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.404273 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9016680a-98b9-4503-a9d6-251355aaecc3-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-d9kvs\" (UID: \"9016680a-98b9-4503-a9d6-251355aaecc3\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d9kvs" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.413658 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.425511 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d3699d56-8c7d-4ccf-9ebf-469d84dc6a56-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7nw2x\" (UID: \"d3699d56-8c7d-4ccf-9ebf-469d84dc6a56\") " pod="openshift-marketplace/marketplace-operator-79b997595-7nw2x" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.430424 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-m6q6r" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.433203 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.453936 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.473352 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.494855 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.494904 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:04 crc kubenswrapper[4821]: E0309 18:27:04.495286 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:04.995235248 +0000 UTC m=+162.156611114 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.495869 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:04 crc kubenswrapper[4821]: E0309 18:27:04.496388 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:04.996370261 +0000 UTC m=+162.157746127 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.501580 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8886c330-bce2-4801-be16-59eeddddaf6f-serving-cert\") pod \"openshift-config-operator-7777fb866f-l97hl\" (UID: \"8886c330-bce2-4801-be16-59eeddddaf6f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l97hl" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.514902 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.533601 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.536720 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4ce9992-733a-4ac6-ab14-610ac4ced250-config\") pod \"kube-controller-manager-operator-78b949d7b-ctxcr\" (UID: \"e4ce9992-733a-4ac6-ab14-610ac4ced250\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctxcr" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.562449 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.574334 4821 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.594416 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.596977 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:04 crc kubenswrapper[4821]: E0309 18:27:04.597175 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:05.097143074 +0000 UTC m=+162.258518940 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.597615 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:04 crc kubenswrapper[4821]: E0309 18:27:04.598243 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:05.098224976 +0000 UTC m=+162.259600842 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.604504 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3f4a1f9-c144-4ccb-ba8a-1a7861a5b665-config\") pod \"kube-apiserver-operator-766d6c64bb-xxsvf\" (UID: \"e3f4a1f9-c144-4ccb-ba8a-1a7861a5b665\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxsvf" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.613018 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.633370 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.662432 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.666633 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3699d56-8c7d-4ccf-9ebf-469d84dc6a56-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7nw2x\" (UID: \"d3699d56-8c7d-4ccf-9ebf-469d84dc6a56\") " pod="openshift-marketplace/marketplace-operator-79b997595-7nw2x" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.671399 4821 request.go:700] Waited for 1.001939554s due to client-side throttling, 
not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0 Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.672667 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.679880 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1b7ef6fd-c836-460d-bac0-ac2135ad77a2-auth-proxy-config\") pod \"machine-approver-56656f9798-nprxk\" (UID: \"1b7ef6fd-c836-460d-bac0-ac2135ad77a2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nprxk" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.688396 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-m6q6r"] Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.695531 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.697215 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b7ef6fd-c836-460d-bac0-ac2135ad77a2-config\") pod \"machine-approver-56656f9798-nprxk\" (UID: \"1b7ef6fd-c836-460d-bac0-ac2135ad77a2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nprxk" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.699068 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 
18:27:04 crc kubenswrapper[4821]: E0309 18:27:04.699238 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:05.199194205 +0000 UTC m=+162.360570101 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.699890 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:04 crc kubenswrapper[4821]: E0309 18:27:04.700376 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:05.200350138 +0000 UTC m=+162.361726034 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.714082 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.732996 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.752514 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.774666 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.797279 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.801629 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:04 crc kubenswrapper[4821]: E0309 18:27:04.801807 4821 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:05.30178297 +0000 UTC m=+162.463158836 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.802886 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:04 crc kubenswrapper[4821]: E0309 18:27:04.803794 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:05.303769758 +0000 UTC m=+162.465145644 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.811734 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7227208b-b4f1-473c-9149-2a1c4d1cab32-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-46tk5\" (UID: \"7227208b-b4f1-473c-9149-2a1c4d1cab32\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-46tk5" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.813707 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.823947 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7227208b-b4f1-473c-9149-2a1c4d1cab32-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-46tk5\" (UID: \"7227208b-b4f1-473c-9149-2a1c4d1cab32\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-46tk5" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.834743 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.841440 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7dfd5d64-f6dc-40bd-83d1-57e685cd4535-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ft9v6\" (UID: \"7dfd5d64-f6dc-40bd-83d1-57e685cd4535\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ft9v6" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.853951 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.861618 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/934a74a7-234e-44f1-bc6e-a13661836b6b-serving-cert\") pod \"console-operator-58897d9998-gg4ds\" (UID: \"934a74a7-234e-44f1-bc6e-a13661836b6b\") " pod="openshift-console-operator/console-operator-58897d9998-gg4ds" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.874145 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.904165 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.905495 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:04 crc kubenswrapper[4821]: E0309 18:27:04.906820 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-09 18:27:05.406804828 +0000 UTC m=+162.568180684 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.915135 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/934a74a7-234e-44f1-bc6e-a13661836b6b-trusted-ca\") pod \"console-operator-58897d9998-gg4ds\" (UID: \"934a74a7-234e-44f1-bc6e-a13661836b6b\") " pod="openshift-console-operator/console-operator-58897d9998-gg4ds" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.915917 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.933577 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.953388 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.961035 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/934a74a7-234e-44f1-bc6e-a13661836b6b-config\") pod \"console-operator-58897d9998-gg4ds\" (UID: \"934a74a7-234e-44f1-bc6e-a13661836b6b\") " pod="openshift-console-operator/console-operator-58897d9998-gg4ds" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.973151 4821 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.974432 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5275d8b9-8874-4c24-96b9-fdef4ef32d9b-images\") pod \"machine-config-operator-74547568cd-5ncxl\" (UID: \"5275d8b9-8874-4c24-96b9-fdef4ef32d9b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ncxl" Mar 09 18:27:04 crc kubenswrapper[4821]: I0309 18:27:04.993348 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.033703 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.034698 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:05.534675889 +0000 UTC m=+162.696051775 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.036308 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.036395 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.048762 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5275d8b9-8874-4c24-96b9-fdef4ef32d9b-proxy-tls\") pod \"machine-config-operator-74547568cd-5ncxl\" (UID: \"5275d8b9-8874-4c24-96b9-fdef4ef32d9b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ncxl" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.049457 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e006fb-bb29-4683-b3a9-a17698564fa6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dbk5m\" (UID: \"04e006fb-bb29-4683-b3a9-a17698564fa6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dbk5m" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.053315 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 
18:27:05.073927 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.081021 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1722e733-725b-4985-8365-0f8f3ad0d10d-apiservice-cert\") pod \"packageserver-d55dfcdfc-2lkq2\" (UID: \"1722e733-725b-4985-8365-0f8f3ad0d10d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lkq2" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.082456 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1722e733-725b-4985-8365-0f8f3ad0d10d-webhook-cert\") pod \"packageserver-d55dfcdfc-2lkq2\" (UID: \"1722e733-725b-4985-8365-0f8f3ad0d10d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lkq2" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.093679 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.100216 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/243465ec-ca31-4ec1-b5ca-1e1318f37c16-serving-cert\") pod \"service-ca-operator-777779d784-hh9sf\" (UID: \"243465ec-ca31-4ec1-b5ca-1e1318f37c16\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hh9sf" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.113358 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.133549 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 
18:27:05.134862 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.135189 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:05.635157424 +0000 UTC m=+162.796533310 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.135358 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.135816 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-09 18:27:05.635794353 +0000 UTC m=+162.797170249 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.152820 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.153888 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/243465ec-ca31-4ec1-b5ca-1e1318f37c16-config\") pod \"service-ca-operator-777779d784-hh9sf\" (UID: \"243465ec-ca31-4ec1-b5ca-1e1318f37c16\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hh9sf" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.173447 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.193064 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.205277 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ba8c991f-dcb9-4206-ad42-dedc0f6d04cb-signing-cabundle\") pod \"service-ca-9c57cc56f-4pwqq\" (UID: \"ba8c991f-dcb9-4206-ad42-dedc0f6d04cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-4pwqq" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.212761 4821 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.233526 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.237100 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.237280 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:05.737249506 +0000 UTC m=+162.898625392 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.237375 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.237895 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:05.737878254 +0000 UTC m=+162.899254150 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.253248 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.262692 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ba8c991f-dcb9-4206-ad42-dedc0f6d04cb-signing-key\") pod \"service-ca-9c57cc56f-4pwqq\" (UID: \"ba8c991f-dcb9-4206-ad42-dedc0f6d04cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-4pwqq" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.274229 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.276187 4821 generic.go:334] "Generic (PLEG): container finished" podID="ea3fa689-2665-423f-b717-f2e279be3831" containerID="bb7843b2fb60f97f750b76066fb9fed8f9f6f1710d4ca3a7c8c377d30fc1f558" exitCode=0 Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.276244 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-m6q6r" event={"ID":"ea3fa689-2665-423f-b717-f2e279be3831","Type":"ContainerDied","Data":"bb7843b2fb60f97f750b76066fb9fed8f9f6f1710d4ca3a7c8c377d30fc1f558"} Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.276711 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-m6q6r" 
event={"ID":"ea3fa689-2665-423f-b717-f2e279be3831","Type":"ContainerStarted","Data":"637b8b684f8783c2c7b322bd005772dea2c64c2653dbc28a33468c423c83bdcc"} Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.286685 4821 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.286701 4821 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.286776 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e5b560f-cc32-4a1a-8632-383befaabb5a-config-volume podName:8e5b560f-cc32-4a1a-8632-383befaabb5a nodeName:}" failed. No retries permitted until 2026-03-09 18:27:05.786751566 +0000 UTC m=+162.948127462 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8e5b560f-cc32-4a1a-8632-383befaabb5a-config-volume") pod "dns-default-c257s" (UID: "8e5b560f-cc32-4a1a-8632-383befaabb5a") : failed to sync configmap cache: timed out waiting for the condition Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.286803 4821 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.286890 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e5b560f-cc32-4a1a-8632-383befaabb5a-metrics-tls podName:8e5b560f-cc32-4a1a-8632-383befaabb5a nodeName:}" failed. No retries permitted until 2026-03-09 18:27:05.786822768 +0000 UTC m=+162.948198654 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8e5b560f-cc32-4a1a-8632-383befaabb5a-metrics-tls") pod "dns-default-c257s" (UID: "8e5b560f-cc32-4a1a-8632-383befaabb5a") : failed to sync secret cache: timed out waiting for the condition Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.286926 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9788fbc-230b-4324-ba04-c706c0278411-certs podName:d9788fbc-230b-4324-ba04-c706c0278411 nodeName:}" failed. No retries permitted until 2026-03-09 18:27:05.786909301 +0000 UTC m=+162.948285197 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/d9788fbc-230b-4324-ba04-c706c0278411-certs") pod "machine-config-server-sst54" (UID: "d9788fbc-230b-4324-ba04-c706c0278411") : failed to sync secret cache: timed out waiting for the condition Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.287963 4821 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.288047 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43d18118-9a44-4b09-add9-7df52470e1c7-srv-cert podName:43d18118-9a44-4b09-add9-7df52470e1c7 nodeName:}" failed. No retries permitted until 2026-03-09 18:27:05.788027894 +0000 UTC m=+162.949403780 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/43d18118-9a44-4b09-add9-7df52470e1c7-srv-cert") pod "olm-operator-6b444d44fb-x76nr" (UID: "43d18118-9a44-4b09-add9-7df52470e1c7") : failed to sync secret cache: timed out waiting for the condition Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.288281 4821 configmap.go:193] Couldn't get configMap openshift-multus/cni-sysctl-allowlist: failed to sync configmap cache: timed out waiting for the condition Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.288387 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/04e01207-4a95-4a32-84df-2d4c69d71fbf-cni-sysctl-allowlist podName:04e01207-4a95-4a32-84df-2d4c69d71fbf nodeName:}" failed. No retries permitted until 2026-03-09 18:27:05.788365143 +0000 UTC m=+162.949741039 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/04e01207-4a95-4a32-84df-2d4c69d71fbf-cni-sysctl-allowlist") pod "cni-sysctl-allowlist-ds-kzlwq" (UID: "04e01207-4a95-4a32-84df-2d4c69d71fbf") : failed to sync configmap cache: timed out waiting for the condition Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.288555 4821 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.288750 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c-config-volume podName:aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c nodeName:}" failed. No retries permitted until 2026-03-09 18:27:05.788728604 +0000 UTC m=+162.950104500 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c-config-volume") pod "collect-profiles-29551335-b9jvf" (UID: "aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c") : failed to sync configmap cache: timed out waiting for the condition Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.288884 4821 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.288955 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9788fbc-230b-4324-ba04-c706c0278411-node-bootstrap-token podName:d9788fbc-230b-4324-ba04-c706c0278411 nodeName:}" failed. No retries permitted until 2026-03-09 18:27:05.78893758 +0000 UTC m=+162.950313466 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/d9788fbc-230b-4324-ba04-c706c0278411-node-bootstrap-token") pod "machine-config-server-sst54" (UID: "d9788fbc-230b-4324-ba04-c706c0278411") : failed to sync secret cache: timed out waiting for the condition Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.290484 4821 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.290592 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e487bc2-9b7d-4845-a026-b27c82e6257a-cert podName:2e487bc2-9b7d-4845-a026-b27c82e6257a nodeName:}" failed. No retries permitted until 2026-03-09 18:27:05.790561568 +0000 UTC m=+162.951937524 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2e487bc2-9b7d-4845-a026-b27c82e6257a-cert") pod "ingress-canary-djphk" (UID: "2e487bc2-9b7d-4845-a026-b27c82e6257a") : failed to sync secret cache: timed out waiting for the condition Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.293180 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.313514 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.333075 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.339585 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.339738 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:05.839703427 +0000 UTC m=+163.001079323 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.340287 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.340718 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:05.840700257 +0000 UTC m=+163.002076153 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.353477 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.375252 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.393884 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.414052 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.434524 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.441446 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.441699 4821 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:05.941667866 +0000 UTC m=+163.103043762 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.442061 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.442692 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:05.942672335 +0000 UTC m=+163.104048231 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.452719 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.473138 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.493932 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.513691 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.532783 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.545109 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.545310 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:06.045274682 +0000 UTC m=+163.206650578 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.546213 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.546759 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:06.046731783 +0000 UTC m=+163.208107679 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.553611 4821 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.574273 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.594121 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.613916 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.635020 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.647556 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.647819 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-03-09 18:27:06.147795705 +0000 UTC m=+163.309171581 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.648148 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.648576 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:06.148559758 +0000 UTC m=+163.309935624 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.670818 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b6b5dbe9-77c4-4cd4-b639-ade5dff8134c-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7fdbf\" (UID: \"b6b5dbe9-77c4-4cd4-b639-ade5dff8134c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7fdbf" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.672004 4821 request.go:700] Waited for 1.909564282s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.688294 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n78fc\" (UniqueName: \"kubernetes.io/projected/b6b5dbe9-77c4-4cd4-b639-ade5dff8134c-kube-api-access-n78fc\") pod \"ingress-operator-5b745b69d9-7fdbf\" (UID: \"b6b5dbe9-77c4-4cd4-b639-ade5dff8134c\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7fdbf" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.715061 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6fks\" (UniqueName: \"kubernetes.io/projected/a663703c-95db-4871-b31c-00951488935d-kube-api-access-h6fks\") pod \"cluster-samples-operator-665b6dd947-gbjt5\" (UID: \"a663703c-95db-4871-b31c-00951488935d\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gbjt5" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.731542 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn2js\" (UniqueName: \"kubernetes.io/projected/f078c2bb-b4ba-42a0-a66c-705c19866fec-kube-api-access-gn2js\") pod \"downloads-7954f5f757-295wb\" (UID: \"f078c2bb-b4ba-42a0-a66c-705c19866fec\") " pod="openshift-console/downloads-7954f5f757-295wb" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.747809 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-295wb" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.749241 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.749592 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:06.249563448 +0000 UTC m=+163.410939334 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.749676 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.750824 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:06.250796863 +0000 UTC m=+163.412172789 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.755143 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf9bk\" (UniqueName: \"kubernetes.io/projected/6d35d28f-2377-46c5-95aa-ea3bf280a60e-kube-api-access-bf9bk\") pod \"oauth-openshift-558db77b4-tnl4x\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.774446 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5msr4\" (UniqueName: \"kubernetes.io/projected/d45565d5-bd55-4e94-8cac-0155e00f1368-kube-api-access-5msr4\") pod \"route-controller-manager-6576b87f9c-qqsgs\" (UID: \"d45565d5-bd55-4e94-8cac-0155e00f1368\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qqsgs" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.792134 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7wwp\" (UniqueName: \"kubernetes.io/projected/a6f5ce9b-e0f2-4bbb-9a18-7c62bdb830e4-kube-api-access-f7wwp\") pod \"machine-api-operator-5694c8668f-h8j2t\" (UID: \"a6f5ce9b-e0f2-4bbb-9a18-7c62bdb830e4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h8j2t" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.797548 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.822814 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56mgj\" (UniqueName: \"kubernetes.io/projected/a423d95a-7bd6-483e-ba23-28e8f1a3ec92-kube-api-access-56mgj\") pod \"openshift-controller-manager-operator-756b6f6bc6-9pcrm\" (UID: \"a423d95a-7bd6-483e-ba23-28e8f1a3ec92\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pcrm" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.840509 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2m2p\" (UniqueName: \"kubernetes.io/projected/87962440-47ce-4659-a2a7-f00110cc3bd5-kube-api-access-p2m2p\") pod \"dns-operator-744455d44c-znqzp\" (UID: \"87962440-47ce-4659-a2a7-f00110cc3bd5\") " pod="openshift-dns-operator/dns-operator-744455d44c-znqzp" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.846162 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gbjt5" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.851026 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qqsgs" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.851287 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.851541 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c-config-volume\") pod \"collect-profiles-29551335-b9jvf\" (UID: \"aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551335-b9jvf" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.851572 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/43d18118-9a44-4b09-add9-7df52470e1c7-srv-cert\") pod \"olm-operator-6b444d44fb-x76nr\" (UID: \"43d18118-9a44-4b09-add9-7df52470e1c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x76nr" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.851607 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e5b560f-cc32-4a1a-8632-383befaabb5a-config-volume\") pod \"dns-default-c257s\" (UID: \"8e5b560f-cc32-4a1a-8632-383befaabb5a\") " pod="openshift-dns/dns-default-c257s" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.851640 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8e5b560f-cc32-4a1a-8632-383befaabb5a-metrics-tls\") pod \"dns-default-c257s\" (UID: 
\"8e5b560f-cc32-4a1a-8632-383befaabb5a\") " pod="openshift-dns/dns-default-c257s" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.851745 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d9788fbc-230b-4324-ba04-c706c0278411-certs\") pod \"machine-config-server-sst54\" (UID: \"d9788fbc-230b-4324-ba04-c706c0278411\") " pod="openshift-machine-config-operator/machine-config-server-sst54" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.851826 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d9788fbc-230b-4324-ba04-c706c0278411-node-bootstrap-token\") pod \"machine-config-server-sst54\" (UID: \"d9788fbc-230b-4324-ba04-c706c0278411\") " pod="openshift-machine-config-operator/machine-config-server-sst54" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.851970 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2e487bc2-9b7d-4845-a026-b27c82e6257a-cert\") pod \"ingress-canary-djphk\" (UID: \"2e487bc2-9b7d-4845-a026-b27c82e6257a\") " pod="openshift-ingress-canary/ingress-canary-djphk" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.851992 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/04e01207-4a95-4a32-84df-2d4c69d71fbf-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-kzlwq\" (UID: \"04e01207-4a95-4a32-84df-2d4c69d71fbf\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kzlwq" Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.852101 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-09 18:27:06.352078041 +0000 UTC m=+163.513453957 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.852258 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.853974 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a3e149e2-c719-4025-888c-3134dd07b7c4-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-66c4m\" (UID: \"a3e149e2-c719-4025-888c-3134dd07b7c4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66c4m" Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.854119 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:06.35410865 +0000 UTC m=+163.515484506 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.860739 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7fdbf" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.860836 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c-config-volume\") pod \"collect-profiles-29551335-b9jvf\" (UID: \"aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551335-b9jvf" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.860868 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/43d18118-9a44-4b09-add9-7df52470e1c7-srv-cert\") pod \"olm-operator-6b444d44fb-x76nr\" (UID: \"43d18118-9a44-4b09-add9-7df52470e1c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x76nr" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.861460 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e5b560f-cc32-4a1a-8632-383befaabb5a-config-volume\") pod \"dns-default-c257s\" (UID: \"8e5b560f-cc32-4a1a-8632-383befaabb5a\") " pod="openshift-dns/dns-default-c257s" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.861838 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/d9788fbc-230b-4324-ba04-c706c0278411-certs\") pod \"machine-config-server-sst54\" (UID: \"d9788fbc-230b-4324-ba04-c706c0278411\") " pod="openshift-machine-config-operator/machine-config-server-sst54" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.871230 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2e487bc2-9b7d-4845-a026-b27c82e6257a-cert\") pod \"ingress-canary-djphk\" (UID: \"2e487bc2-9b7d-4845-a026-b27c82e6257a\") " pod="openshift-ingress-canary/ingress-canary-djphk" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.873162 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d9788fbc-230b-4324-ba04-c706c0278411-node-bootstrap-token\") pod \"machine-config-server-sst54\" (UID: \"d9788fbc-230b-4324-ba04-c706c0278411\") " pod="openshift-machine-config-operator/machine-config-server-sst54" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.873179 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8e5b560f-cc32-4a1a-8632-383befaabb5a-metrics-tls\") pod \"dns-default-c257s\" (UID: \"8e5b560f-cc32-4a1a-8632-383befaabb5a\") " pod="openshift-dns/dns-default-c257s" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.882514 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5g25\" (UniqueName: \"kubernetes.io/projected/a3e149e2-c719-4025-888c-3134dd07b7c4-kube-api-access-s5g25\") pod \"cluster-image-registry-operator-dc59b4c8b-66c4m\" (UID: \"a3e149e2-c719-4025-888c-3134dd07b7c4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66c4m" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.891651 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xrr5\" (UniqueName: 
\"kubernetes.io/projected/cebe05d8-86f1-4280-9ae0-8065f9c38759-kube-api-access-4xrr5\") pod \"authentication-operator-69f744f599-jrr9g\" (UID: \"cebe05d8-86f1-4280-9ae0-8065f9c38759\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jrr9g" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.903133 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pcrm" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.909259 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66c4m" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.909532 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds2hj\" (UniqueName: \"kubernetes.io/projected/24aa1fc6-da2a-400c-8bfe-022af0ee3707-kube-api-access-ds2hj\") pod \"openshift-apiserver-operator-796bbdcf4f-vlwv5\" (UID: \"24aa1fc6-da2a-400c-8bfe-022af0ee3707\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vlwv5" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.915140 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-znqzp" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.930990 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwzbc\" (UniqueName: \"kubernetes.io/projected/87c7fa5b-e1e9-43c4-9942-409c34ea5660-kube-api-access-dwzbc\") pod \"controller-manager-879f6c89f-vmzvr\" (UID: \"87c7fa5b-e1e9-43c4-9942-409c34ea5660\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vmzvr" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.933744 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.935391 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-295wb"] Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.944362 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/04e01207-4a95-4a32-84df-2d4c69d71fbf-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-kzlwq\" (UID: \"04e01207-4a95-4a32-84df-2d4c69d71fbf\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kzlwq" Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.955336 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:05 crc kubenswrapper[4821]: E0309 18:27:05.955845 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-09 18:27:06.455818661 +0000 UTC m=+163.617194567 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:05 crc kubenswrapper[4821]: I0309 18:27:05.970846 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkjzc\" (UniqueName: \"kubernetes.io/projected/aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6-kube-api-access-tkjzc\") pod \"apiserver-7bbb656c7d-bsmz7\" (UID: \"aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.005215 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-h8j2t" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.012909 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd5ls\" (UniqueName: \"kubernetes.io/projected/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-kube-api-access-gd5ls\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.017299 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vlwv5" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.034538 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-bound-sa-token\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.049982 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3f4a1f9-c144-4ccb-ba8a-1a7861a5b665-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xxsvf\" (UID: \"e3f4a1f9-c144-4ccb-ba8a-1a7861a5b665\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxsvf" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.057225 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:06 crc kubenswrapper[4821]: E0309 18:27:06.057554 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:06.557541262 +0000 UTC m=+163.718917118 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.058007 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-jrr9g" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.081023 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn2j8\" (UniqueName: \"kubernetes.io/projected/8e5b560f-cc32-4a1a-8632-383befaabb5a-kube-api-access-wn2j8\") pod \"dns-default-c257s\" (UID: \"8e5b560f-cc32-4a1a-8632-383befaabb5a\") " pod="openshift-dns/dns-default-c257s" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.088904 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg2m5\" (UniqueName: \"kubernetes.io/projected/9016680a-98b9-4503-a9d6-251355aaecc3-kube-api-access-vg2m5\") pod \"multus-admission-controller-857f4d67dd-d9kvs\" (UID: \"9016680a-98b9-4503-a9d6-251355aaecc3\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d9kvs" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.106761 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h6n5\" (UniqueName: \"kubernetes.io/projected/43d18118-9a44-4b09-add9-7df52470e1c7-kube-api-access-4h6n5\") pod \"olm-operator-6b444d44fb-x76nr\" (UID: \"43d18118-9a44-4b09-add9-7df52470e1c7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x76nr" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.111381 4821 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vmzvr" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.124851 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.132607 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ttwm\" (UniqueName: \"kubernetes.io/projected/d3699d56-8c7d-4ccf-9ebf-469d84dc6a56-kube-api-access-4ttwm\") pod \"marketplace-operator-79b997595-7nw2x\" (UID: \"d3699d56-8c7d-4ccf-9ebf-469d84dc6a56\") " pod="openshift-marketplace/marketplace-operator-79b997595-7nw2x" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.147818 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-c257s" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.150841 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdttn\" (UniqueName: \"kubernetes.io/projected/8886c330-bce2-4801-be16-59eeddddaf6f-kube-api-access-kdttn\") pod \"openshift-config-operator-7777fb866f-l97hl\" (UID: \"8886c330-bce2-4801-be16-59eeddddaf6f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l97hl" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.158255 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:06 crc kubenswrapper[4821]: E0309 18:27:06.158989 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:06.658972994 +0000 UTC m=+163.820348850 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.174393 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn25z\" (UniqueName: \"kubernetes.io/projected/243465ec-ca31-4ec1-b5ca-1e1318f37c16-kube-api-access-pn25z\") pod \"service-ca-operator-777779d784-hh9sf\" (UID: \"243465ec-ca31-4ec1-b5ca-1e1318f37c16\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hh9sf" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.196062 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5hc9\" (UniqueName: \"kubernetes.io/projected/1ec71021-0474-49ce-b545-4a973703b42b-kube-api-access-q5hc9\") pod \"machine-config-controller-84d6567774-ppt7p\" (UID: \"1ec71021-0474-49ce-b545-4a973703b42b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ppt7p" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.210006 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qqsgs"] Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.211334 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcdtr\" (UniqueName: \"kubernetes.io/projected/1b7ef6fd-c836-460d-bac0-ac2135ad77a2-kube-api-access-jcdtr\") pod 
\"machine-approver-56656f9798-nprxk\" (UID: \"1b7ef6fd-c836-460d-bac0-ac2135ad77a2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nprxk" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.229368 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vhqk\" (UniqueName: \"kubernetes.io/projected/75d58d1e-e673-4305-9d09-2cfd323769fd-kube-api-access-7vhqk\") pod \"csi-hostpathplugin-vdrsg\" (UID: \"75d58d1e-e673-4305-9d09-2cfd323769fd\") " pod="hostpath-provisioner/csi-hostpathplugin-vdrsg" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.252822 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fjz7\" (UniqueName: \"kubernetes.io/projected/934a74a7-234e-44f1-bc6e-a13661836b6b-kube-api-access-9fjz7\") pod \"console-operator-58897d9998-gg4ds\" (UID: \"934a74a7-234e-44f1-bc6e-a13661836b6b\") " pod="openshift-console-operator/console-operator-58897d9998-gg4ds" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.257955 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pcrm"] Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.260012 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:06 crc kubenswrapper[4821]: E0309 18:27:06.260491 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-09 18:27:06.760474788 +0000 UTC m=+163.921850644 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.269630 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7nw2x" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.270591 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx8xb\" (UniqueName: \"kubernetes.io/projected/7dfd5d64-f6dc-40bd-83d1-57e685cd4535-kube-api-access-mx8xb\") pod \"package-server-manager-789f6589d5-ft9v6\" (UID: \"7dfd5d64-f6dc-40bd-83d1-57e685cd4535\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ft9v6" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.284376 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxsvf" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.289521 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l97hl" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.297097 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45vwx\" (UniqueName: \"kubernetes.io/projected/a28f17d7-69dc-4014-a347-a26f55d55ace-kube-api-access-45vwx\") pod \"router-default-5444994796-4ntmx\" (UID: \"a28f17d7-69dc-4014-a347-a26f55d55ace\") " pod="openshift-ingress/router-default-5444994796-4ntmx" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.297388 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nprxk" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.300857 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-m6q6r" event={"ID":"ea3fa689-2665-423f-b717-f2e279be3831","Type":"ContainerStarted","Data":"31ef819031b56f74c54522d0e04125b14fe65bfb0fcc2d480f935e991622b377"} Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.300902 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-m6q6r" event={"ID":"ea3fa689-2665-423f-b717-f2e279be3831","Type":"ContainerStarted","Data":"ad3886f305bcf8be9f299f353e564e9e181328a4254d165bb9be2b954b0712b7"} Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.305131 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qqsgs" event={"ID":"d45565d5-bd55-4e94-8cac-0155e00f1368","Type":"ContainerStarted","Data":"ac8dc429fe1ac6ddc16ea17005d1a84acdff6444a78c53dea99b7b58ccc69236"} Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.305369 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ppt7p" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.311068 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shxs8\" (UniqueName: \"kubernetes.io/projected/60628f60-1633-4b77-a457-762d204bab20-kube-api-access-shxs8\") pod \"auto-csr-approver-29551346-phdwt\" (UID: \"60628f60-1633-4b77-a457-762d204bab20\") " pod="openshift-infra/auto-csr-approver-29551346-phdwt" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.311224 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-295wb" event={"ID":"f078c2bb-b4ba-42a0-a66c-705c19866fec","Type":"ContainerStarted","Data":"c0742552d0a60bba8b7e07362cccfb7de3535a992975201ea9dc61c60c355530"} Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.311269 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-295wb" event={"ID":"f078c2bb-b4ba-42a0-a66c-705c19866fec","Type":"ContainerStarted","Data":"1d140112865d7278bf628eaf0129980ccef87edc8b472a15fae0735ea16a6e1f"} Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.312411 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-d9kvs" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.312741 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-295wb" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.313658 4821 patch_prober.go:28] interesting pod/downloads-7954f5f757-295wb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.313724 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-295wb" podUID="f078c2bb-b4ba-42a0-a66c-705c19866fec" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.314194 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pcrm" event={"ID":"a423d95a-7bd6-483e-ba23-28e8f1a3ec92","Type":"ContainerStarted","Data":"fe20f4ef4e16f66fcd4f4557326eda614b37716977b8fd8a8889a3985f1248e0"} Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.324677 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tnl4x"] Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.326223 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gbjt5"] Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.327709 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ft9v6" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.332393 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2454\" (UniqueName: \"kubernetes.io/projected/f080ed0b-402b-4ed1-89cd-5ee1af2b9735-kube-api-access-z2454\") pod \"etcd-operator-b45778765-sh5rp\" (UID: \"f080ed0b-402b-4ed1-89cd-5ee1af2b9735\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sh5rp" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.335808 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gg4ds" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.349728 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlv5x\" (UniqueName: \"kubernetes.io/projected/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-kube-api-access-jlv5x\") pod \"console-f9d7485db-x9nnw\" (UID: \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\") " pod="openshift-console/console-f9d7485db-x9nnw" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.361049 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:06 crc kubenswrapper[4821]: E0309 18:27:06.361225 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:06.86119909 +0000 UTC m=+164.022574946 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.361280 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:06 crc kubenswrapper[4821]: E0309 18:27:06.361687 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:06.861675834 +0000 UTC m=+164.023051690 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.365566 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hh9sf" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.371393 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg6tm\" (UniqueName: \"kubernetes.io/projected/2528c75b-c6dc-4347-b2e5-8279c1861c53-kube-api-access-bg6tm\") pod \"migrator-59844c95c7-n5glb\" (UID: \"2528c75b-c6dc-4347-b2e5-8279c1861c53\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n5glb" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.387888 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x76nr" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.390721 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7fdbf"] Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.394230 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvbjb\" (UniqueName: \"kubernetes.io/projected/2e487bc2-9b7d-4845-a026-b27c82e6257a-kube-api-access-lvbjb\") pod \"ingress-canary-djphk\" (UID: \"2e487bc2-9b7d-4845-a026-b27c82e6257a\") " pod="openshift-ingress-canary/ingress-canary-djphk" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.420425 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29551346-phdwt" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.427591 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7227208b-b4f1-473c-9149-2a1c4d1cab32-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-46tk5\" (UID: \"7227208b-b4f1-473c-9149-2a1c4d1cab32\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-46tk5" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.427770 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54hv5\" (UniqueName: \"kubernetes.io/projected/5275d8b9-8874-4c24-96b9-fdef4ef32d9b-kube-api-access-54hv5\") pod \"machine-config-operator-74547568cd-5ncxl\" (UID: \"5275d8b9-8874-4c24-96b9-fdef4ef32d9b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ncxl" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.429514 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-djphk" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.438510 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-vdrsg" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.452967 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66c4m"] Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.458236 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjd22\" (UniqueName: \"kubernetes.io/projected/1722e733-725b-4985-8365-0f8f3ad0d10d-kube-api-access-cjd22\") pod \"packageserver-d55dfcdfc-2lkq2\" (UID: \"1722e733-725b-4985-8365-0f8f3ad0d10d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lkq2" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.462496 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:06 crc kubenswrapper[4821]: E0309 18:27:06.463692 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:06.963673833 +0000 UTC m=+164.125049689 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.477207 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rppn9\" (UniqueName: \"kubernetes.io/projected/aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c-kube-api-access-rppn9\") pod \"collect-profiles-29551335-b9jvf\" (UID: \"aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551335-b9jvf" Mar 09 18:27:06 crc kubenswrapper[4821]: W0309 18:27:06.487239 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3e149e2_c719_4025_888c_3134dd07b7c4.slice/crio-008bc91d838ad3c8f56f787c05673dcedbad659db4ba76378a4a4be86616ae46 WatchSource:0}: Error finding container 008bc91d838ad3c8f56f787c05673dcedbad659db4ba76378a4a4be86616ae46: Status 404 returned error can't find the container with id 008bc91d838ad3c8f56f787c05673dcedbad659db4ba76378a4a4be86616ae46 Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.487976 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lg48\" (UniqueName: \"kubernetes.io/projected/04e006fb-bb29-4683-b3a9-a17698564fa6-kube-api-access-9lg48\") pod \"control-plane-machine-set-operator-78cbb6b69f-dbk5m\" (UID: \"04e006fb-bb29-4683-b3a9-a17698564fa6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dbk5m" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.499311 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-api/machine-api-operator-5694c8668f-h8j2t"] Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.509413 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rssgv\" (UniqueName: \"kubernetes.io/projected/d9788fbc-230b-4324-ba04-c706c0278411-kube-api-access-rssgv\") pod \"machine-config-server-sst54\" (UID: \"d9788fbc-230b-4324-ba04-c706c0278411\") " pod="openshift-machine-config-operator/machine-config-server-sst54" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.516062 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-znqzp"] Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.536848 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-sh5rp" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.539935 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcndf\" (UniqueName: \"kubernetes.io/projected/d87105c8-2398-44ec-b127-a2e30e767c1d-kube-api-access-kcndf\") pod \"kube-storage-version-migrator-operator-b67b599dd-gk67k\" (UID: \"d87105c8-2398-44ec-b127-a2e30e767c1d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gk67k" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.542688 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-4ntmx" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.565375 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqbm4\" (UniqueName: \"kubernetes.io/projected/ba8c991f-dcb9-4206-ad42-dedc0f6d04cb-kube-api-access-jqbm4\") pod \"service-ca-9c57cc56f-4pwqq\" (UID: \"ba8c991f-dcb9-4206-ad42-dedc0f6d04cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-4pwqq" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.565981 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n5glb" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.566462 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-x9nnw" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.567542 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vlwv5"] Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.567571 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.567719 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gk67k" Mar 09 18:27:06 crc kubenswrapper[4821]: E0309 18:27:06.567853 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:07.067841055 +0000 UTC m=+164.229216911 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.585117 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpl82\" (UniqueName: \"kubernetes.io/projected/d2d34b0b-073c-47cb-9c2c-e2863dc06c23-kube-api-access-cpl82\") pod \"catalog-operator-68c6474976-jrc42\" (UID: \"d2d34b0b-073c-47cb-9c2c-e2863dc06c23\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jrc42" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.589127 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jrr9g"] Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.602729 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzcjb\" (UniqueName: \"kubernetes.io/projected/04e01207-4a95-4a32-84df-2d4c69d71fbf-kube-api-access-wzcjb\") pod \"cni-sysctl-allowlist-ds-kzlwq\" (UID: \"04e01207-4a95-4a32-84df-2d4c69d71fbf\") " pod="openshift-multus/cni-sysctl-allowlist-ds-kzlwq" Mar 09 
18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.609653 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7nw2x"] Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.610240 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4ce9992-733a-4ac6-ab14-610ac4ced250-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ctxcr\" (UID: \"e4ce9992-733a-4ac6-ab14-610ac4ced250\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctxcr" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.620487 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-46tk5" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.643022 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ncxl" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.643337 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vmzvr"] Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.651561 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dbk5m" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.651862 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7"] Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.658680 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lkq2" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.664417 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-c257s"] Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.668402 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:06 crc kubenswrapper[4821]: E0309 18:27:06.668584 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:07.168563667 +0000 UTC m=+164.329939523 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.668729 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:06 crc kubenswrapper[4821]: E0309 18:27:06.669043 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:07.16903376 +0000 UTC m=+164.330409616 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.671911 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29551335-b9jvf" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.680087 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4pwqq" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.696576 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-sst54" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.770000 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:06 crc kubenswrapper[4821]: E0309 18:27:06.770456 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:07.270306308 +0000 UTC m=+164.431682164 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.776456 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-kzlwq" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.797513 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ppt7p"] Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.829484 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jrc42" Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.840048 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-l97hl"] Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.861133 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxsvf"] Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.871500 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:06 crc kubenswrapper[4821]: E0309 18:27:06.872237 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:07.372226094 +0000 UTC m=+164.533601950 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.879052 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctxcr" Mar 09 18:27:06 crc kubenswrapper[4821]: W0309 18:27:06.916721 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ec71021_0474_49ce_b545_4a973703b42b.slice/crio-6498972589c9231715c75b96e3d0656e03e24a5d0f5cbc136b4ed9dad10e5c9a WatchSource:0}: Error finding container 6498972589c9231715c75b96e3d0656e03e24a5d0f5cbc136b4ed9dad10e5c9a: Status 404 returned error can't find the container with id 6498972589c9231715c75b96e3d0656e03e24a5d0f5cbc136b4ed9dad10e5c9a Mar 09 18:27:06 crc kubenswrapper[4821]: W0309 18:27:06.970898 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8886c330_bce2_4801_be16_59eeddddaf6f.slice/crio-aa54b368b1b01bc290a393f827c034bb87e4ed830ef5c5cd2179ef4dbd027aee WatchSource:0}: Error finding container aa54b368b1b01bc290a393f827c034bb87e4ed830ef5c5cd2179ef4dbd027aee: Status 404 returned error can't find the container with id aa54b368b1b01bc290a393f827c034bb87e4ed830ef5c5cd2179ef4dbd027aee Mar 09 18:27:06 crc kubenswrapper[4821]: W0309 18:27:06.972054 4821 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3f4a1f9_c144_4ccb_ba8a_1a7861a5b665.slice/crio-976007b8c4993388e2811e3c16db8c488b4e2682378c9a0d478b1f654ea5beb3 WatchSource:0}: Error finding container 976007b8c4993388e2811e3c16db8c488b4e2682378c9a0d478b1f654ea5beb3: Status 404 returned error can't find the container with id 976007b8c4993388e2811e3c16db8c488b4e2682378c9a0d478b1f654ea5beb3 Mar 09 18:27:06 crc kubenswrapper[4821]: E0309 18:27:06.972458 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:07.472437891 +0000 UTC m=+164.633813747 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.972297 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:06 crc kubenswrapper[4821]: I0309 18:27:06.973104 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: 
\"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:06 crc kubenswrapper[4821]: E0309 18:27:06.973462 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:07.47344656 +0000 UTC m=+164.634822416 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.034749 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x76nr"] Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.035706 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551346-phdwt"] Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.048829 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ft9v6"] Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.074674 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:07 crc kubenswrapper[4821]: E0309 18:27:07.075282 4821 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:07.575266044 +0000 UTC m=+164.736641900 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.177574 4821 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.179981 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:07 crc kubenswrapper[4821]: E0309 18:27:07.180298 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:07.68028227 +0000 UTC m=+164.841658116 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.182446 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gg4ds"] Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.218236 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-d9kvs"] Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.247137 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-n5glb"] Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.282353 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:07 crc kubenswrapper[4821]: E0309 18:27:07.282502 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:07.782474705 +0000 UTC m=+164.943850561 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.282796 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:07 crc kubenswrapper[4821]: E0309 18:27:07.283106 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:07.783092893 +0000 UTC m=+164.944468749 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.297639 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-295wb" podStartSLOduration=107.297619876 podStartE2EDuration="1m47.297619876s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:07.292198628 +0000 UTC m=+164.453574494" watchObservedRunningTime="2026-03-09 18:27:07.297619876 +0000 UTC m=+164.458995732" Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.321547 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hh9sf"] Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.333681 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-djphk"] Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.341901 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l97hl" event={"ID":"8886c330-bce2-4801-be16-59eeddddaf6f","Type":"ContainerStarted","Data":"aa54b368b1b01bc290a393f827c034bb87e4ed830ef5c5cd2179ef4dbd027aee"} Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.344911 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-4ntmx" 
event={"ID":"a28f17d7-69dc-4014-a347-a26f55d55ace","Type":"ContainerStarted","Data":"88d2782809b3e63ff358c12386419a8d7eca56b3d7433a2a057d4946b5c01da0"} Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.346167 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-vdrsg"] Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.351988 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-h8j2t" event={"ID":"a6f5ce9b-e0f2-4bbb-9a18-7c62bdb830e4","Type":"ContainerStarted","Data":"7fcd5d25f71fa6261477623572cbcadd197f014e22ecd6fcf84e6a5f1ff6b788"} Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.357542 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qqsgs" event={"ID":"d45565d5-bd55-4e94-8cac-0155e00f1368","Type":"ContainerStarted","Data":"d58ddc3101638c12eaffdca16457e8342a08db404f91e41c9679e3b9a1c89f79"} Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.358420 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qqsgs" Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.359882 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" event={"ID":"6d35d28f-2377-46c5-95aa-ea3bf280a60e","Type":"ContainerStarted","Data":"291e4f121113c06d808f6b70538aeee8dace0be9fcb3d3239d63d17c6de044bd"} Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.359905 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" event={"ID":"6d35d28f-2377-46c5-95aa-ea3bf280a60e","Type":"ContainerStarted","Data":"cc0b86a17d9eb4cbec1612a3f2d68cfbd5e69830624f5f2e5e367bd46d6a2722"} Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.360412 4821 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.361971 4821 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-tnl4x container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" start-of-body= Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.362004 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" podUID="6d35d28f-2377-46c5-95aa-ea3bf280a60e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.367013 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pcrm" event={"ID":"a423d95a-7bd6-483e-ba23-28e8f1a3ec92","Type":"ContainerStarted","Data":"08bd686bfe7069706a8fe0246a585c1ae9d68b195db6667bf139dd4dcad3c180"} Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.369153 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551346-phdwt" event={"ID":"60628f60-1633-4b77-a457-762d204bab20","Type":"ContainerStarted","Data":"2cd28a7e88894e1093a6eb940b7133a2585d977e63afe939651cd9ee639f90db"} Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.370351 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-sst54" event={"ID":"d9788fbc-230b-4324-ba04-c706c0278411","Type":"ContainerStarted","Data":"8dca113651020dfe1598fe28fe8a8b0c206999196446be1c8e0f162ef2fa7518"} Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.371450 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-jrr9g" event={"ID":"cebe05d8-86f1-4280-9ae0-8065f9c38759","Type":"ContainerStarted","Data":"3fefc37f03ddd4c49be90a24bd8c2b8814855eac0707b38c32949c4867904c1b"} Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.372629 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ft9v6" event={"ID":"7dfd5d64-f6dc-40bd-83d1-57e685cd4535","Type":"ContainerStarted","Data":"b914283aba494e7c80b8a7e8284a6baf9a361ef31e7f1c77f08c6e5e3bd9c47c"} Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.373725 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-c257s" event={"ID":"8e5b560f-cc32-4a1a-8632-383befaabb5a","Type":"ContainerStarted","Data":"c6affda0a0b52c37353a6f30d0e54e98d583f098c43e19fdbbb1a99d2ea51227"} Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.374663 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" event={"ID":"aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6","Type":"ContainerStarted","Data":"29f83fc71cb226ff6a2ecd85676f6f7cf04570c03163a767a51ca2f9525a6ba4"} Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.376514 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nprxk" event={"ID":"1b7ef6fd-c836-460d-bac0-ac2135ad77a2","Type":"ContainerStarted","Data":"ca6f65894bcb6c5cec9b77a91cd20bee4ee2812ed99a6ee92a97c97c0d9f329b"} Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.377724 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66c4m" event={"ID":"a3e149e2-c719-4025-888c-3134dd07b7c4","Type":"ContainerStarted","Data":"9eeffbdd0ce8b6197b6ee32e14f0de52939b65dc089f55488f36e4511da6133d"} Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.377747 4821 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66c4m" event={"ID":"a3e149e2-c719-4025-888c-3134dd07b7c4","Type":"ContainerStarted","Data":"008bc91d838ad3c8f56f787c05673dcedbad659db4ba76378a4a4be86616ae46"} Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.379123 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vmzvr" event={"ID":"87c7fa5b-e1e9-43c4-9942-409c34ea5660","Type":"ContainerStarted","Data":"9cd376b1377897eec35c4f901f9ad95aa898e74066c9af2578282764d10c28ec"} Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.381713 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ppt7p" event={"ID":"1ec71021-0474-49ce-b545-4a973703b42b","Type":"ContainerStarted","Data":"6498972589c9231715c75b96e3d0656e03e24a5d0f5cbc136b4ed9dad10e5c9a"} Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.383079 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxsvf" event={"ID":"e3f4a1f9-c144-4ccb-ba8a-1a7861a5b665","Type":"ContainerStarted","Data":"976007b8c4993388e2811e3c16db8c488b4e2682378c9a0d478b1f654ea5beb3"} Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.383896 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:07 crc kubenswrapper[4821]: E0309 18:27:07.383987 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-03-09 18:27:07.883967549 +0000 UTC m=+165.045343405 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.384194 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:07 crc kubenswrapper[4821]: E0309 18:27:07.384510 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:07.884499735 +0000 UTC m=+165.045875671 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.384546 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7fdbf" event={"ID":"b6b5dbe9-77c4-4cd4-b639-ade5dff8134c","Type":"ContainerStarted","Data":"c00e304ef0cec43cfa43d784987451212c5981c5e327de89a5fb44d0dfae2500"} Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.384583 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7fdbf" event={"ID":"b6b5dbe9-77c4-4cd4-b639-ade5dff8134c","Type":"ContainerStarted","Data":"0bd5de2070e65d3d19666e87d508fbdb4d6c4fd46c4b09c7c822a6715e49aa24"} Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.391341 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gbjt5" event={"ID":"a663703c-95db-4871-b31c-00951488935d","Type":"ContainerStarted","Data":"46cc8bb192a11de4fd49732f3391a4bf61ecce7fad90acc24c22150ceec3bbd1"} Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.391613 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gbjt5" event={"ID":"a663703c-95db-4871-b31c-00951488935d","Type":"ContainerStarted","Data":"574ad56816421dbef1b3c2e14da70c046c125fd229ffdd3d8466e6b34e732f21"} Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.461348 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-7nw2x" event={"ID":"d3699d56-8c7d-4ccf-9ebf-469d84dc6a56","Type":"ContainerStarted","Data":"9ed594c2523b0f25f0a4932798e2253c50820b99548e48d7c16a96c49b959fd0"} Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.466672 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-znqzp" event={"ID":"87962440-47ce-4659-a2a7-f00110cc3bd5","Type":"ContainerStarted","Data":"17d73b68001a97e64e83a155333db07596bba3ac661fc5ea08a167a57e1bc894"} Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.467872 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vlwv5" event={"ID":"24aa1fc6-da2a-400c-8bfe-022af0ee3707","Type":"ContainerStarted","Data":"49ba405844371aaf08ee706a33271971e45c722ba30f42fc059d014b5516029a"} Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.481917 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x76nr" event={"ID":"43d18118-9a44-4b09-add9-7df52470e1c7","Type":"ContainerStarted","Data":"bdc2e88f84d61a3ab0a3864be9ac3e8a1aff13d9bb2bde30f71494728c2e1ff5"} Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.484817 4821 patch_prober.go:28] interesting pod/downloads-7954f5f757-295wb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.484857 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-295wb" podUID="f078c2bb-b4ba-42a0-a66c-705c19866fec" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 
18:27:07.485294 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:07 crc kubenswrapper[4821]: E0309 18:27:07.486641 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:07.986619568 +0000 UTC m=+165.147995424 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.591457 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:07 crc kubenswrapper[4821]: E0309 18:27:07.601501 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-09 18:27:08.101485261 +0000 UTC m=+165.262861117 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.709619 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:07 crc kubenswrapper[4821]: E0309 18:27:07.710350 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:08.210298248 +0000 UTC m=+165.371674104 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.750856 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qqsgs" Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.765996 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-46tk5"] Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.768736 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sh5rp"] Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.789310 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gk67k"] Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.797037 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-x9nnw"] Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.826151 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:07 crc kubenswrapper[4821]: E0309 18:27:07.826544 4821 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:08.326531791 +0000 UTC m=+165.487907647 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.863095 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jrc42"] Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.911067 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lkq2"] Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.929624 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:07 crc kubenswrapper[4821]: E0309 18:27:07.929787 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:08.429763686 +0000 UTC m=+165.591139542 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.930072 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:07 crc kubenswrapper[4821]: E0309 18:27:07.930399 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:08.430391094 +0000 UTC m=+165.591766950 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.943328 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dbk5m"] Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.953219 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-m6q6r" podStartSLOduration=107.953203308 podStartE2EDuration="1m47.953203308s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:07.949511071 +0000 UTC m=+165.110886927" watchObservedRunningTime="2026-03-09 18:27:07.953203308 +0000 UTC m=+165.114579164" Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.953438 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-5ncxl"] Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.957136 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29551335-b9jvf"] Mar 09 18:27:07 crc kubenswrapper[4821]: I0309 18:27:07.988255 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctxcr"] Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.005785 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-service-ca/service-ca-9c57cc56f-4pwqq"] Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.031914 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:08 crc kubenswrapper[4821]: E0309 18:27:08.035791 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:08.535773772 +0000 UTC m=+165.697149628 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.135374 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:08 crc kubenswrapper[4821]: E0309 18:27:08.135811 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-09 18:27:08.635794942 +0000 UTC m=+165.797170798 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.247821 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:08 crc kubenswrapper[4821]: E0309 18:27:08.249693 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:08.749674077 +0000 UTC m=+165.911049923 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.251757 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:08 crc kubenswrapper[4821]: E0309 18:27:08.252101 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:08.752089418 +0000 UTC m=+165.913465274 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.352540 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:08 crc kubenswrapper[4821]: E0309 18:27:08.352766 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:08.852743498 +0000 UTC m=+166.014119354 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.429257 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qqsgs" podStartSLOduration=108.429239384 podStartE2EDuration="1m48.429239384s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:08.428195374 +0000 UTC m=+165.589571230" watchObservedRunningTime="2026-03-09 18:27:08.429239384 +0000 UTC m=+165.590615240" Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.429853 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9pcrm" podStartSLOduration=108.429845351 podStartE2EDuration="1m48.429845351s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:08.366858768 +0000 UTC m=+165.528234614" watchObservedRunningTime="2026-03-09 18:27:08.429845351 +0000 UTC m=+165.591221227" Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.454397 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:08 crc kubenswrapper[4821]: E0309 18:27:08.454766 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:08.954754717 +0000 UTC m=+166.116130573 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.491861 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" podStartSLOduration=108.491842767 podStartE2EDuration="1m48.491842767s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:08.491664001 +0000 UTC m=+165.653039857" watchObservedRunningTime="2026-03-09 18:27:08.491842767 +0000 UTC m=+165.653218623" Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.491965 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-66c4m" podStartSLOduration=108.49196132 podStartE2EDuration="1m48.49196132s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-09 18:27:08.459044362 +0000 UTC m=+165.620420218" watchObservedRunningTime="2026-03-09 18:27:08.49196132 +0000 UTC m=+165.653337166" Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.503453 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x9nnw" event={"ID":"8d862d47-cde7-4a39-aafe-3e2cf7ef451f","Type":"ContainerStarted","Data":"36e66c19ed4f1a6d1a5d85f4f287fb8660f82128f68476d8f305683360b678c1"} Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.509272 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jrc42" event={"ID":"d2d34b0b-073c-47cb-9c2c-e2863dc06c23","Type":"ContainerStarted","Data":"4d59edf8d41eea67e5ed8efd2117c2dad215cc1f581a8ce956616773503897dd"} Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.511381 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ncxl" event={"ID":"5275d8b9-8874-4c24-96b9-fdef4ef32d9b","Type":"ContainerStarted","Data":"47c8e834512f4a5f7f9db09bbc5ff5a24d78212ac89038899429e03d3345e7df"} Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.520541 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-znqzp" event={"ID":"87962440-47ce-4659-a2a7-f00110cc3bd5","Type":"ContainerStarted","Data":"9e621f587191b4856133d1d5023804c1421dcfef39dae0f6cc47cacf3c237a5c"} Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.522838 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4pwqq" event={"ID":"ba8c991f-dcb9-4206-ad42-dedc0f6d04cb","Type":"ContainerStarted","Data":"32e2a6a1a56274fdd145c75a176834f9a15e14eb78d5b41e9918d8da76347c72"} Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.525545 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dbk5m" event={"ID":"04e006fb-bb29-4683-b3a9-a17698564fa6","Type":"ContainerStarted","Data":"4b0833d00a493f4d73c22f908e5e56203000e036861be426a3b897f08eac3a35"} Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.536788 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-jrr9g" event={"ID":"cebe05d8-86f1-4280-9ae0-8065f9c38759","Type":"ContainerStarted","Data":"d6742aa92cdc45de37b9f5184a1c11e28dfa420ea19744c4d2aec47eef4fcc3d"} Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.542772 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-46tk5" event={"ID":"7227208b-b4f1-473c-9149-2a1c4d1cab32","Type":"ContainerStarted","Data":"97ec55115931729f96cd0176bfcd4ecdd5e31b21498047129e97cccbb9b5abad"} Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.544860 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hh9sf" event={"ID":"243465ec-ca31-4ec1-b5ca-1e1318f37c16","Type":"ContainerStarted","Data":"9e57897c93c00a2c98b560351e3335277bb63b0ec7111de3a7cb7cb44e62ba38"} Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.547620 4821 generic.go:334] "Generic (PLEG): container finished" podID="8886c330-bce2-4801-be16-59eeddddaf6f" containerID="7f145a58c2060b3157f68d2470097a476a9797f2213b93908cef8bb3b92d7aa7" exitCode=0 Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.547683 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l97hl" event={"ID":"8886c330-bce2-4801-be16-59eeddddaf6f","Type":"ContainerDied","Data":"7f145a58c2060b3157f68d2470097a476a9797f2213b93908cef8bb3b92d7aa7"} Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.557749 4821 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:08 crc kubenswrapper[4821]: E0309 18:27:08.558049 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:09.058025123 +0000 UTC m=+166.219400979 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.558937 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:08 crc kubenswrapper[4821]: E0309 18:27:08.561484 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:09.061472713 +0000 UTC m=+166.222848569 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.565360 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gbjt5" event={"ID":"a663703c-95db-4871-b31c-00951488935d","Type":"ContainerStarted","Data":"38e1ec9c6137bc1eac7e8c8d85725b56c6d9777b37bc73b819d2dbd830935c2c"} Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.567417 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-jrr9g" podStartSLOduration=108.567403286 podStartE2EDuration="1m48.567403286s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:08.566199981 +0000 UTC m=+165.727575837" watchObservedRunningTime="2026-03-09 18:27:08.567403286 +0000 UTC m=+165.728779132" Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.622329 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n5glb" event={"ID":"2528c75b-c6dc-4347-b2e5-8279c1861c53","Type":"ContainerStarted","Data":"cdd5ceb60550cea076be07bcde38abf8effdffb27006802deed01d40b6ca0a3c"} Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.623270 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n5glb" 
event={"ID":"2528c75b-c6dc-4347-b2e5-8279c1861c53","Type":"ContainerStarted","Data":"c43b93bbf6416a75fbfaa7fdc4b661be27c3bc405da1f83e506e0898e0d53018"} Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.631219 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vlwv5" event={"ID":"24aa1fc6-da2a-400c-8bfe-022af0ee3707","Type":"ContainerStarted","Data":"2fa47ab3753d6df83236442b7ffa18a14e6a14c6a23cafcd925aec1efbb0ab7f"} Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.637539 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gbjt5" podStartSLOduration=108.637519298 podStartE2EDuration="1m48.637519298s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:08.608688368 +0000 UTC m=+165.770064234" watchObservedRunningTime="2026-03-09 18:27:08.637519298 +0000 UTC m=+165.798895144" Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.643189 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29551335-b9jvf" event={"ID":"aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c","Type":"ContainerStarted","Data":"3f42431bb22dbff8bb1149cc07e5488c14ced97c78d612c2dd7f3a42ba180464"} Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.663283 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:08 crc kubenswrapper[4821]: E0309 18:27:08.664612 4821 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:09.164570094 +0000 UTC m=+166.325946020 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.664948 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ppt7p" event={"ID":"1ec71021-0474-49ce-b545-4a973703b42b","Type":"ContainerStarted","Data":"277a7c70780f512e2f1a20063e2ccde879d92ddb0d3a62472c380c4b915aef70"} Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.666388 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vlwv5" podStartSLOduration=108.666368457 podStartE2EDuration="1m48.666368457s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:08.663917086 +0000 UTC m=+165.825292942" watchObservedRunningTime="2026-03-09 18:27:08.666368457 +0000 UTC m=+165.827744303" Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.726301 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7fdbf" 
event={"ID":"b6b5dbe9-77c4-4cd4-b639-ade5dff8134c","Type":"ContainerStarted","Data":"6312aab47732102750a1be363e5b8c61603b6e51dbd8404ba26be6083aeb0608"} Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.731902 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-djphk" event={"ID":"2e487bc2-9b7d-4845-a026-b27c82e6257a","Type":"ContainerStarted","Data":"bf02c4a71d835de21ae9bacbe48af91beb0d3536b9a55ad25899c84e605155a2"} Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.731968 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-djphk" event={"ID":"2e487bc2-9b7d-4845-a026-b27c82e6257a","Type":"ContainerStarted","Data":"a2ecab5303b9c821eba1c26b9b31adc45bcc07e1dc6cd5e2d90a6f1afe3045b6"} Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.741098 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gg4ds" event={"ID":"934a74a7-234e-44f1-bc6e-a13661836b6b","Type":"ContainerStarted","Data":"e140344f6c789979876aa118d3a402dd29deee96e3545d9b2abb66f3c304f94a"} Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.741147 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gg4ds" event={"ID":"934a74a7-234e-44f1-bc6e-a13661836b6b","Type":"ContainerStarted","Data":"3dc4e1369253a96fd1813fcd704bdc9a80832f90533e455fd26f556c03cdbc54"} Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.742368 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-gg4ds" Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.743906 4821 patch_prober.go:28] interesting pod/console-operator-58897d9998-gg4ds container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/readyz\": dial tcp 10.217.0.33:8443: connect: connection 
refused" start-of-body= Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.743958 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-gg4ds" podUID="934a74a7-234e-44f1-bc6e-a13661836b6b" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/readyz\": dial tcp 10.217.0.33:8443: connect: connection refused" Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.748031 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lkq2" event={"ID":"1722e733-725b-4985-8365-0f8f3ad0d10d","Type":"ContainerStarted","Data":"cf66adac4ceca01e46883a253b65be19b19259c3fe8139ee6def815d9cadfe1a"} Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.748969 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7fdbf" podStartSLOduration=108.74895248 podStartE2EDuration="1m48.74895248s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:08.748468346 +0000 UTC m=+165.909844202" watchObservedRunningTime="2026-03-09 18:27:08.74895248 +0000 UTC m=+165.910328336" Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.755373 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qqsgs"] Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.759575 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ft9v6" event={"ID":"7dfd5d64-f6dc-40bd-83d1-57e685cd4535","Type":"ContainerStarted","Data":"6ce2be06df22d2e5c77de4dfb020332b379717980885404f34aa5493c683b447"} Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.765119 4821 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:08 crc kubenswrapper[4821]: E0309 18:27:08.765514 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:09.265498482 +0000 UTC m=+166.426874338 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.769828 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gk67k" event={"ID":"d87105c8-2398-44ec-b127-a2e30e767c1d","Type":"ContainerStarted","Data":"41a438dc438823c4d9a13dfc730e06cc9a322c6fd222361e7e9a3f642bf34b32"} Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.773959 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-4ntmx" event={"ID":"a28f17d7-69dc-4014-a347-a26f55d55ace","Type":"ContainerStarted","Data":"15fb8cfceb397144138e66b61e052bccbd9d0b815239d2031312c8f09e1fed95"} Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.778138 4821 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-gg4ds" podStartSLOduration=108.77811922 podStartE2EDuration="1m48.77811922s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:08.776820612 +0000 UTC m=+165.938196468" watchObservedRunningTime="2026-03-09 18:27:08.77811922 +0000 UTC m=+165.939495076" Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.797670 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctxcr" event={"ID":"e4ce9992-733a-4ac6-ab14-610ac4ced250","Type":"ContainerStarted","Data":"2bd5d389f341a8b6c661aea2dad977292399c79fff00fabc6f2b621ff638b808"} Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.827247 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-sh5rp" event={"ID":"f080ed0b-402b-4ed1-89cd-5ee1af2b9735","Type":"ContainerStarted","Data":"9aab150eeb486f1550f48c6f239ca3edaf56560ea1452d3b731f91f4567d5e92"} Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.828889 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-djphk" podStartSLOduration=5.828859917 podStartE2EDuration="5.828859917s" podCreationTimestamp="2026-03-09 18:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:08.826881919 +0000 UTC m=+165.988257775" watchObservedRunningTime="2026-03-09 18:27:08.828859917 +0000 UTC m=+165.990235813" Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.831592 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vmzvr"] Mar 09 
18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.866236 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:08 crc kubenswrapper[4821]: E0309 18:27:08.867310 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:09.367294806 +0000 UTC m=+166.528670662 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.870376 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-vdrsg" event={"ID":"75d58d1e-e673-4305-9d09-2cfd323769fd","Type":"ContainerStarted","Data":"63fb95ff18207ba157d64e2ddd4de495ea1de82ca5ed52335d6b2153b91fb429"} Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.879849 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-d9kvs" event={"ID":"9016680a-98b9-4503-a9d6-251355aaecc3","Type":"ContainerStarted","Data":"c01df90ac3425e443217e86f479e894f31d95c44aad642749af8f62c8643cd4d"} Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.894482 4821 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gk67k" podStartSLOduration=108.894461746 podStartE2EDuration="1m48.894461746s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:08.88533846 +0000 UTC m=+166.046714336" watchObservedRunningTime="2026-03-09 18:27:08.894461746 +0000 UTC m=+166.055837602" Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.895292 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-4ntmx" podStartSLOduration=108.89528277 podStartE2EDuration="1m48.89528277s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:08.864594807 +0000 UTC m=+166.025970663" watchObservedRunningTime="2026-03-09 18:27:08.89528277 +0000 UTC m=+166.056658626" Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.915691 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-h8j2t" event={"ID":"a6f5ce9b-e0f2-4bbb-9a18-7c62bdb830e4","Type":"ContainerStarted","Data":"b6ce98dd2db018f7b8a9c0f628fb9a5ebbd40c02b5b07ba78f6b05f7682e96ff"} Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.915730 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-h8j2t" event={"ID":"a6f5ce9b-e0f2-4bbb-9a18-7c62bdb830e4","Type":"ContainerStarted","Data":"4c89919ca89632210173ccb1bd6ee7230afc585abe8ade27d92b910fdcddd098"} Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.954711 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-h8j2t" 
podStartSLOduration=108.954691889 podStartE2EDuration="1m48.954691889s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:08.954012569 +0000 UTC m=+166.115388445" watchObservedRunningTime="2026-03-09 18:27:08.954691889 +0000 UTC m=+166.116067765" Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.969726 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:08 crc kubenswrapper[4821]: E0309 18:27:08.970061 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:09.470046536 +0000 UTC m=+166.631422392 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.991163 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-sst54" event={"ID":"d9788fbc-230b-4324-ba04-c706c0278411","Type":"ContainerStarted","Data":"3fa5ae9a462f541b92b0be0bdbc4dc1ae8f363ed67135d65e2fb5f7c980a403f"} Mar 09 18:27:08 crc kubenswrapper[4821]: I0309 18:27:08.997601 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxsvf" event={"ID":"e3f4a1f9-c144-4ccb-ba8a-1a7861a5b665","Type":"ContainerStarted","Data":"9e127768f10f50a7a888046ae4cf9d28003842a87936c7332988dc20b27bc559"} Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.029936 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-sst54" podStartSLOduration=6.029917659 podStartE2EDuration="6.029917659s" podCreationTimestamp="2026-03-09 18:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:09.027509829 +0000 UTC m=+166.188885705" watchObservedRunningTime="2026-03-09 18:27:09.029917659 +0000 UTC m=+166.191293525" Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.041006 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x76nr" 
event={"ID":"43d18118-9a44-4b09-add9-7df52470e1c7","Type":"ContainerStarted","Data":"3a0efddfd4e08514006a5cebb7010270197bfc8c94ed683b7f3cddbe00896f70"} Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.041822 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x76nr" Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.043266 4821 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-x76nr container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.043333 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x76nr" podUID="43d18118-9a44-4b09-add9-7df52470e1c7" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.064867 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7nw2x" event={"ID":"d3699d56-8c7d-4ccf-9ebf-469d84dc6a56","Type":"ContainerStarted","Data":"5267639d1b40b8d0a47829649ed4cc773eed9710e4dca98c1041946c1f8334ae"} Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.065626 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7nw2x" Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.068126 4821 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7nw2x container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= 
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.068167 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7nw2x" podUID="d3699d56-8c7d-4ccf-9ebf-469d84dc6a56" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.070366 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:09 crc kubenswrapper[4821]: E0309 18:27:09.070697 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:09.570675535 +0000 UTC m=+166.732051391 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.071708 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5"
Mar 09 18:27:09 crc kubenswrapper[4821]: E0309 18:27:09.075367 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:09.575352041 +0000 UTC m=+166.736727897 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.089045 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nprxk" event={"ID":"1b7ef6fd-c836-460d-bac0-ac2135ad77a2","Type":"ContainerStarted","Data":"dd9bf2bbeb1d22945624c7329be7478bb737907ace13caf54ced4587b47f024c"}
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.089260 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nprxk" event={"ID":"1b7ef6fd-c836-460d-bac0-ac2135ad77a2","Type":"ContainerStarted","Data":"86f4c8785b3afeeb3d4782276a4b7c15b28252b9105a8a42ada1864af37c8337"}
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.111203 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x76nr" podStartSLOduration=109.111179384 podStartE2EDuration="1m49.111179384s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:09.090137901 +0000 UTC m=+166.251513757" watchObservedRunningTime="2026-03-09 18:27:09.111179384 +0000 UTC m=+166.272555240"
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.111648 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxsvf" podStartSLOduration=109.111642068 podStartE2EDuration="1m49.111642068s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:09.065851284 +0000 UTC m=+166.227227140" watchObservedRunningTime="2026-03-09 18:27:09.111642068 +0000 UTC m=+166.273017924"
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.121012 4821 generic.go:334] "Generic (PLEG): container finished" podID="aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6" containerID="0a811d6844e203972542c47ab361461bd78916c823fd061a3d2dd02480740f6d" exitCode=0
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.121094 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" event={"ID":"aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6","Type":"ContainerDied","Data":"0a811d6844e203972542c47ab361461bd78916c823fd061a3d2dd02480740f6d"}
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.135756 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-kzlwq" event={"ID":"04e01207-4a95-4a32-84df-2d4c69d71fbf","Type":"ContainerStarted","Data":"5e99358e0e63011b9fb172059cbac3d70650db9dbe54ecfe4f22dab3a7dafb07"}
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.136645 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-kzlwq"
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.151745 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-7nw2x" podStartSLOduration=109.151727144 podStartE2EDuration="1m49.151727144s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:09.122640108 +0000 UTC m=+166.284015964" watchObservedRunningTime="2026-03-09 18:27:09.151727144 +0000 UTC m=+166.313103000"
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.172444 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 09 18:27:09 crc kubenswrapper[4821]: E0309 18:27:09.173234 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:09.6732192 +0000 UTC m=+166.834595056 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.176129 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nprxk" podStartSLOduration=109.176108214 podStartE2EDuration="1m49.176108214s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:09.152894847 +0000 UTC m=+166.314270693" watchObservedRunningTime="2026-03-09 18:27:09.176108214 +0000 UTC m=+166.337484070"
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.179263 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-kzlwq"
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.181549 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vmzvr" event={"ID":"87c7fa5b-e1e9-43c4-9942-409c34ea5660","Type":"ContainerStarted","Data":"327f0aae23ddc4958db4ec0eac1c1794174ce36a46ce1f77e6453b2f4ee884b2"}
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.182253 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-vmzvr"
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.183145 4821 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-vmzvr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.183180 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-vmzvr" podUID="87c7fa5b-e1e9-43c4-9942-409c34ea5660" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.199441 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-c257s" event={"ID":"8e5b560f-cc32-4a1a-8632-383befaabb5a","Type":"ContainerStarted","Data":"7177951b1820807e67bc727e18a0b590f518021e2af6ba44b610530c31759338"}
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.204279 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-kzlwq" podStartSLOduration=6.204255623 podStartE2EDuration="6.204255623s" podCreationTimestamp="2026-03-09 18:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:09.196209889 +0000 UTC m=+166.357585745" watchObservedRunningTime="2026-03-09 18:27:09.204255623 +0000 UTC m=+166.365631479"
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.228754 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x"
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.258880 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-vmzvr" podStartSLOduration=109.258865272 podStartE2EDuration="1m49.258865272s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:09.25639506 +0000 UTC m=+166.417770926" watchObservedRunningTime="2026-03-09 18:27:09.258865272 +0000 UTC m=+166.420241128"
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.274463 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5"
Mar 09 18:27:09 crc kubenswrapper[4821]: E0309 18:27:09.275370 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:09.775355322 +0000 UTC m=+166.936731178 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.382066 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 09 18:27:09 crc kubenswrapper[4821]: E0309 18:27:09.382233 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:09.882206472 +0000 UTC m=+167.043582328 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.382571 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5"
Mar 09 18:27:09 crc kubenswrapper[4821]: E0309 18:27:09.382903 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:09.882891363 +0000 UTC m=+167.044267219 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.432566 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-m6q6r"
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.433179 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-m6q6r"
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.457916 4821 patch_prober.go:28] interesting pod/apiserver-76f77b778f-m6q6r container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Mar 09 18:27:09 crc kubenswrapper[4821]: [+]log ok
Mar 09 18:27:09 crc kubenswrapper[4821]: [+]etcd ok
Mar 09 18:27:09 crc kubenswrapper[4821]: [+]poststarthook/start-apiserver-admission-initializer ok
Mar 09 18:27:09 crc kubenswrapper[4821]: [+]poststarthook/generic-apiserver-start-informers ok
Mar 09 18:27:09 crc kubenswrapper[4821]: [+]poststarthook/max-in-flight-filter ok
Mar 09 18:27:09 crc kubenswrapper[4821]: [+]poststarthook/storage-object-count-tracker-hook ok
Mar 09 18:27:09 crc kubenswrapper[4821]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Mar 09 18:27:09 crc kubenswrapper[4821]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Mar 09 18:27:09 crc kubenswrapper[4821]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Mar 09 18:27:09 crc kubenswrapper[4821]: [+]poststarthook/project.openshift.io-projectcache ok
Mar 09 18:27:09 crc kubenswrapper[4821]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Mar 09 18:27:09 crc kubenswrapper[4821]: [+]poststarthook/openshift.io-startinformers ok
Mar 09 18:27:09 crc kubenswrapper[4821]: [+]poststarthook/openshift.io-restmapperupdater ok
Mar 09 18:27:09 crc kubenswrapper[4821]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Mar 09 18:27:09 crc kubenswrapper[4821]: livez check failed
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.457971 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-m6q6r" podUID="ea3fa689-2665-423f-b717-f2e279be3831" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.484871 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 09 18:27:09 crc kubenswrapper[4821]: E0309 18:27:09.486667 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:09.986646423 +0000 UTC m=+167.148022279 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.544605 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-4ntmx"
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.553092 4821 patch_prober.go:28] interesting pod/router-default-5444994796-4ntmx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 18:27:09 crc kubenswrapper[4821]: [-]has-synced failed: reason withheld
Mar 09 18:27:09 crc kubenswrapper[4821]: [+]process-running ok
Mar 09 18:27:09 crc kubenswrapper[4821]: healthz check failed
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.553132 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4ntmx" podUID="a28f17d7-69dc-4014-a347-a26f55d55ace" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.606994 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5"
Mar 09 18:27:09 crc kubenswrapper[4821]: E0309 18:27:09.608896 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:10.10888223 +0000 UTC m=+167.270258086 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.709113 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 09 18:27:09 crc kubenswrapper[4821]: E0309 18:27:09.709304 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:10.209277622 +0000 UTC m=+167.370653478 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.709800 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5"
Mar 09 18:27:09 crc kubenswrapper[4821]: E0309 18:27:09.710186 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:10.210178628 +0000 UTC m=+167.371554484 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.811250 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 09 18:27:09 crc kubenswrapper[4821]: E0309 18:27:09.811400 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:10.311383724 +0000 UTC m=+167.472759580 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.811635 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5"
Mar 09 18:27:09 crc kubenswrapper[4821]: E0309 18:27:09.811917 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:10.31190996 +0000 UTC m=+167.473285816 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.855447 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-kzlwq"]
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.914173 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 09 18:27:09 crc kubenswrapper[4821]: E0309 18:27:09.914448 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:10.414421223 +0000 UTC m=+167.575797079 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 09 18:27:09 crc kubenswrapper[4821]: I0309 18:27:09.914692 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5"
Mar 09 18:27:09 crc kubenswrapper[4821]: E0309 18:27:09.914981 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:10.41497437 +0000 UTC m=+167.576350226 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.015562 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 09 18:27:10 crc kubenswrapper[4821]: E0309 18:27:10.015752 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:10.515726441 +0000 UTC m=+167.677102297 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.016001 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5"
Mar 09 18:27:10 crc kubenswrapper[4821]: E0309 18:27:10.016299 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:10.516291999 +0000 UTC m=+167.677667855 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.116258 4821 ???:1] "http: TLS handshake error from 192.168.126.11:48128: no serving certificate available for the kubelet"
Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.116679 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 09 18:27:10 crc kubenswrapper[4821]: E0309 18:27:10.116880 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:10.616855066 +0000 UTC m=+167.778230922 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.116940 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5"
Mar 09 18:27:10 crc kubenswrapper[4821]: E0309 18:27:10.117276 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:10.617264268 +0000 UTC m=+167.778640124 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.160486 4821 ???:1] "http: TLS handshake error from 192.168.126.11:48134: no serving certificate available for the kubelet"
Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.210717 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-d9kvs" event={"ID":"9016680a-98b9-4503-a9d6-251355aaecc3","Type":"ContainerStarted","Data":"3acf452693d12920963bc3a1cfb797ee644288987a81a7722a22d7d5f48d032b"}
Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.210771 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-d9kvs" event={"ID":"9016680a-98b9-4503-a9d6-251355aaecc3","Type":"ContainerStarted","Data":"3fc8bd607c09eb448e528c87dbe51cedf112d635581f23ec3433b680d743f13f"}
Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.217629 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 09 18:27:10 crc kubenswrapper[4821]: E0309 18:27:10.217754 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:10.717735912 +0000 UTC m=+167.879111768 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.217877 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5"
Mar 09 18:27:10 crc kubenswrapper[4821]: E0309 18:27:10.218308 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:10.718290558 +0000 UTC m=+167.879666414 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.219236 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctxcr" event={"ID":"e4ce9992-733a-4ac6-ab14-610ac4ced250","Type":"ContainerStarted","Data":"e400268e3a08c41deccc55dde70014ff5cf58069f14422fa3f06495241d0bcf1"}
Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.225499 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dbk5m" event={"ID":"04e006fb-bb29-4683-b3a9-a17698564fa6","Type":"ContainerStarted","Data":"fb7868e82771793518b9f67dec1ec5bcdb565c4c2249e1e4c1002aa3fe7d88af"}
Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.231721 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ft9v6" event={"ID":"7dfd5d64-f6dc-40bd-83d1-57e685cd4535","Type":"ContainerStarted","Data":"8ca19f5e038d232482343471246d5fb4f2858ae54af7b4bef07f692ae90c88b5"}
Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.231770 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ft9v6"
Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.237562 4821 ???:1] "http: TLS handshake error from 192.168.126.11:48140: no serving certificate available for the kubelet"
Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.238547 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x9nnw" event={"ID":"8d862d47-cde7-4a39-aafe-3e2cf7ef451f","Type":"ContainerStarted","Data":"26d3ea6ed586f3215fe359dfaf397672700f2ada8e234de8e832c932113d693c"}
Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.239567 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-d9kvs" podStartSLOduration=110.239550536 podStartE2EDuration="1m50.239550536s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:10.237856468 +0000 UTC m=+167.399232324" watchObservedRunningTime="2026-03-09 18:27:10.239550536 +0000 UTC m=+167.400926392"
Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.246444 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-46tk5" event={"ID":"7227208b-b4f1-473c-9149-2a1c4d1cab32","Type":"ContainerStarted","Data":"d48ed1086af69e9c73cd5baab04b119b7fd6b73d178dddde5cb2d05010d2eb82"}
Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.252656 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jrc42" event={"ID":"d2d34b0b-073c-47cb-9c2c-e2863dc06c23","Type":"ContainerStarted","Data":"7013d923963141411254d8f619830949378ad8cfe783d52bfc245c75ac7d9244"}
Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.253427 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jrc42"
Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.254758 4821 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-jrc42 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe
status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.254804 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jrc42" podUID="d2d34b0b-073c-47cb-9c2c-e2863dc06c23" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.266346 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-sh5rp" event={"ID":"f080ed0b-402b-4ed1-89cd-5ee1af2b9735","Type":"ContainerStarted","Data":"253ff51246138c5b2d0b1aab42bede7dc88f95484dbe5257e629fc92c73759e5"} Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.273003 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lkq2" event={"ID":"1722e733-725b-4985-8365-0f8f3ad0d10d","Type":"ContainerStarted","Data":"bd6eb1e77362159aa15f07aac74b1a12f93bde6c186d9b7b48c12c968855ebd7"} Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.274069 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lkq2" Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.303238 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hh9sf" event={"ID":"243465ec-ca31-4ec1-b5ca-1e1318f37c16","Type":"ContainerStarted","Data":"717df32558bd3f9bc19dd60bfaf12ed905307fb87a60c102775be7415b0531b8"} Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.304461 4821 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-2lkq2 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure 
output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.304494 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lkq2" podUID="1722e733-725b-4985-8365-0f8f3ad0d10d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.309299 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l97hl" event={"ID":"8886c330-bce2-4801-be16-59eeddddaf6f","Type":"ContainerStarted","Data":"1d6c9220e55787417f626f6bfe35cf4eb301c64286be66a4adc8e8928934b82f"} Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.309797 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l97hl" Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.319313 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:10 crc kubenswrapper[4821]: E0309 18:27:10.320744 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:10.820726789 +0000 UTC m=+167.982102645 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.338466 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-c257s" event={"ID":"8e5b560f-cc32-4a1a-8632-383befaabb5a","Type":"ContainerStarted","Data":"46216302f55aaebe2d584f6df93f7892e9642cac5bcd1785bf1b8d3c03b7d1e4"} Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.338690 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-c257s" Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.340460 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ppt7p" event={"ID":"1ec71021-0474-49ce-b545-4a973703b42b","Type":"ContainerStarted","Data":"18f496803685cfb8cc7917de8f97e4b418c46aec34eef5ec72f63682aab9fc7e"} Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.346457 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ncxl" event={"ID":"5275d8b9-8874-4c24-96b9-fdef4ef32d9b","Type":"ContainerStarted","Data":"42e54caa04077669dc111a8ce61fb2c8d8123a671715ac4d3d36c38d541805f0"} Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.346496 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ncxl" event={"ID":"5275d8b9-8874-4c24-96b9-fdef4ef32d9b","Type":"ContainerStarted","Data":"757c83dfdb5fac3003aa5d1f871d139c113c1b7cad7a6cf7d2e528e1c4484e60"} Mar 09 18:27:10 
crc kubenswrapper[4821]: I0309 18:27:10.349872 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4pwqq" event={"ID":"ba8c991f-dcb9-4206-ad42-dedc0f6d04cb","Type":"ContainerStarted","Data":"c8e83aa99713795fc5c8769f02ba0ef0c9ddcf6597ca928d45034adf24785b8f"} Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.355887 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29551335-b9jvf" event={"ID":"aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c","Type":"ContainerStarted","Data":"6c1f3b41ca628899a4c32729eaf86e0fec7c29a59623147234462ad6531945f7"} Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.362988 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctxcr" podStartSLOduration=110.362971339 podStartE2EDuration="1m50.362971339s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:10.362722721 +0000 UTC m=+167.524098567" watchObservedRunningTime="2026-03-09 18:27:10.362971339 +0000 UTC m=+167.524347185" Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.364939 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ft9v6" podStartSLOduration=110.364886095 podStartE2EDuration="1m50.364886095s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:10.316767184 +0000 UTC m=+167.478143040" watchObservedRunningTime="2026-03-09 18:27:10.364886095 +0000 UTC m=+167.526261951" Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.375033 4821 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gk67k" event={"ID":"d87105c8-2398-44ec-b127-a2e30e767c1d","Type":"ContainerStarted","Data":"11f4e7aa392e70414f92b9793b85396500b75ed06ade43766145c1991390a30d"} Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.387114 4821 ???:1] "http: TLS handshake error from 192.168.126.11:48146: no serving certificate available for the kubelet" Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.389288 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dbk5m" podStartSLOduration=110.389272645 podStartE2EDuration="1m50.389272645s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:10.387103931 +0000 UTC m=+167.548479787" watchObservedRunningTime="2026-03-09 18:27:10.389272645 +0000 UTC m=+167.550648501" Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.398491 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" event={"ID":"aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6","Type":"ContainerStarted","Data":"234b3be78a153fae35db6211a3278ec08e63d4f99b8ab2fe87ef4b6353b1d75f"} Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.414594 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-kzlwq" event={"ID":"04e01207-4a95-4a32-84df-2d4c69d71fbf","Type":"ContainerStarted","Data":"0297c9480d2443eb8d56cbb4d92d376dab02b87b244db7afebe59a889188aa16"} Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.424980 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29551335-b9jvf" podStartSLOduration=110.424966294 podStartE2EDuration="1m50.424966294s" 
podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:10.422558914 +0000 UTC m=+167.583934770" watchObservedRunningTime="2026-03-09 18:27:10.424966294 +0000 UTC m=+167.586342150" Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.425174 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:10 crc kubenswrapper[4821]: E0309 18:27:10.427023 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:10.927010963 +0000 UTC m=+168.088386819 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.446018 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-46tk5" podStartSLOduration=110.446001746 podStartE2EDuration="1m50.446001746s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:10.44474443 +0000 UTC m=+167.606120286" watchObservedRunningTime="2026-03-09 18:27:10.446001746 +0000 UTC m=+167.607377602" Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.446806 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-vdrsg" event={"ID":"75d58d1e-e673-4305-9d09-2cfd323769fd","Type":"ContainerStarted","Data":"f89e15b2dabcbf69be6be172811c1d2f694a6c98684b4d3025d36f5bb072e254"} Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.478122 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n5glb" event={"ID":"2528c75b-c6dc-4347-b2e5-8279c1861c53","Type":"ContainerStarted","Data":"1570fd4a0ff7cbe584ef22476dc4d0be3eda1ea7a3884a57e070c6fd7dcd7ba1"} Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.492603 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-znqzp" 
event={"ID":"87962440-47ce-4659-a2a7-f00110cc3bd5","Type":"ContainerStarted","Data":"76f8292aefed779ff55794ec8ae9393364060b9426d8c7cdea950e2ec1ae6643"} Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.494363 4821 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7nw2x container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.494776 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7nw2x" podUID="d3699d56-8c7d-4ccf-9ebf-469d84dc6a56" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused" Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.495099 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-vmzvr" podUID="87c7fa5b-e1e9-43c4-9942-409c34ea5660" containerName="controller-manager" containerID="cri-o://327f0aae23ddc4958db4ec0eac1c1794174ce36a46ce1f77e6453b2f4ee884b2" gracePeriod=30 Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.495156 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qqsgs" podUID="d45565d5-bd55-4e94-8cac-0155e00f1368" containerName="route-controller-manager" containerID="cri-o://d58ddc3101638c12eaffdca16457e8342a08db404f91e41c9679e3b9a1c89f79" gracePeriod=30 Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.525993 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x76nr" Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.526150 4821 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:10 crc kubenswrapper[4821]: E0309 18:27:10.527254 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:11.02723335 +0000 UTC m=+168.188609206 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.529753 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-vmzvr" Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.545812 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hh9sf" podStartSLOduration=110.545792791 podStartE2EDuration="1m50.545792791s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:10.486694311 +0000 UTC m=+167.648070167" watchObservedRunningTime="2026-03-09 18:27:10.545792791 +0000 UTC m=+167.707168647" Mar 09 18:27:10 
crc kubenswrapper[4821]: I0309 18:27:10.550866 4821 patch_prober.go:28] interesting pod/router-default-5444994796-4ntmx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 18:27:10 crc kubenswrapper[4821]: [-]has-synced failed: reason withheld Mar 09 18:27:10 crc kubenswrapper[4821]: [+]process-running ok Mar 09 18:27:10 crc kubenswrapper[4821]: healthz check failed Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.550930 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4ntmx" podUID="a28f17d7-69dc-4014-a347-a26f55d55ace" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.559387 4821 ???:1] "http: TLS handshake error from 192.168.126.11:48154: no serving certificate available for the kubelet" Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.583687 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lkq2" podStartSLOduration=110.583670243 podStartE2EDuration="1m50.583670243s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:10.546629185 +0000 UTC m=+167.708005041" watchObservedRunningTime="2026-03-09 18:27:10.583670243 +0000 UTC m=+167.745046099" Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.628543 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:10 crc kubenswrapper[4821]: E0309 18:27:10.628903 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:11.128891449 +0000 UTC m=+168.290267305 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.644539 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ppt7p" podStartSLOduration=110.644520294 podStartE2EDuration="1m50.644520294s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:10.639138847 +0000 UTC m=+167.800514703" watchObservedRunningTime="2026-03-09 18:27:10.644520294 +0000 UTC m=+167.805896150" Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.644772 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-4pwqq" podStartSLOduration=110.644768011 podStartE2EDuration="1m50.644768011s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:10.584553319 +0000 UTC m=+167.745929175" 
watchObservedRunningTime="2026-03-09 18:27:10.644768011 +0000 UTC m=+167.806143867" Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.716198 4821 ???:1] "http: TLS handshake error from 192.168.126.11:48170: no serving certificate available for the kubelet" Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.721361 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-sh5rp" podStartSLOduration=110.72134458 podStartE2EDuration="1m50.72134458s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:10.719670712 +0000 UTC m=+167.881046568" watchObservedRunningTime="2026-03-09 18:27:10.72134458 +0000 UTC m=+167.882720436" Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.722422 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l97hl" podStartSLOduration=110.722416452 podStartE2EDuration="1m50.722416452s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:10.685311502 +0000 UTC m=+167.846687358" watchObservedRunningTime="2026-03-09 18:27:10.722416452 +0000 UTC m=+167.883792308" Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.731473 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:10 crc kubenswrapper[4821]: E0309 18:27:10.731847 4821 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:11.231831915 +0000 UTC m=+168.393207771 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.740808 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5ncxl" podStartSLOduration=110.740792736 podStartE2EDuration="1m50.740792736s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:10.739715465 +0000 UTC m=+167.901091311" watchObservedRunningTime="2026-03-09 18:27:10.740792736 +0000 UTC m=+167.902168592" Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.766765 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-x9nnw" podStartSLOduration=110.766746992 podStartE2EDuration="1m50.766746992s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:10.765718602 +0000 UTC m=+167.927094458" watchObservedRunningTime="2026-03-09 18:27:10.766746992 +0000 UTC m=+167.928122848" Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.825029 4821 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jrc42" podStartSLOduration=110.825011937 podStartE2EDuration="1m50.825011937s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:10.797690783 +0000 UTC m=+167.959066639" watchObservedRunningTime="2026-03-09 18:27:10.825011937 +0000 UTC m=+167.986387803" Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.826149 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-c257s" podStartSLOduration=7.826142411 podStartE2EDuration="7.826142411s" podCreationTimestamp="2026-03-09 18:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:10.823862524 +0000 UTC m=+167.985238390" watchObservedRunningTime="2026-03-09 18:27:10.826142411 +0000 UTC m=+167.987518267" Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.833228 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:10 crc kubenswrapper[4821]: E0309 18:27:10.833550 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:11.333539155 +0000 UTC m=+168.494915011 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.934392 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:10 crc kubenswrapper[4821]: E0309 18:27:10.934641 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:11.434610597 +0000 UTC m=+168.595986463 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:10 crc kubenswrapper[4821]: I0309 18:27:10.934923 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:10 crc kubenswrapper[4821]: E0309 18:27:10.935237 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:11.435229916 +0000 UTC m=+168.596605772 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.043843 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:11 crc kubenswrapper[4821]: E0309 18:27:11.044150 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:11.544134266 +0000 UTC m=+168.705510122 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.051134 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-znqzp" podStartSLOduration=111.051118849 podStartE2EDuration="1m51.051118849s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:10.987746004 +0000 UTC m=+168.149121880" watchObservedRunningTime="2026-03-09 18:27:11.051118849 +0000 UTC m=+168.212494705" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.065648 4821 ???:1] "http: TLS handshake error from 192.168.126.11:48184: no serving certificate available for the kubelet" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.120679 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qqsgs" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.131000 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.131363 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.135399 4821 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-bsmz7 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.31:8443/livez\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.135440 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" podUID="aca75c59-dc3a-4bd4-aeed-4ee5c715e5f6" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.31:8443/livez\": dial tcp 10.217.0.31:8443: connect: connection refused" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.145745 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45565d5-bd55-4e94-8cac-0155e00f1368-serving-cert\") pod \"d45565d5-bd55-4e94-8cac-0155e00f1368\" (UID: \"d45565d5-bd55-4e94-8cac-0155e00f1368\") " Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.145833 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d45565d5-bd55-4e94-8cac-0155e00f1368-client-ca\") pod \"d45565d5-bd55-4e94-8cac-0155e00f1368\" (UID: \"d45565d5-bd55-4e94-8cac-0155e00f1368\") " Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.145861 4821 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45565d5-bd55-4e94-8cac-0155e00f1368-config\") pod \"d45565d5-bd55-4e94-8cac-0155e00f1368\" (UID: \"d45565d5-bd55-4e94-8cac-0155e00f1368\") " Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.145897 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5msr4\" (UniqueName: \"kubernetes.io/projected/d45565d5-bd55-4e94-8cac-0155e00f1368-kube-api-access-5msr4\") pod \"d45565d5-bd55-4e94-8cac-0155e00f1368\" (UID: \"d45565d5-bd55-4e94-8cac-0155e00f1368\") " Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.146061 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:11 crc kubenswrapper[4821]: E0309 18:27:11.146439 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:11.646427243 +0000 UTC m=+168.807803099 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.148303 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45565d5-bd55-4e94-8cac-0155e00f1368-config" (OuterVolumeSpecName: "config") pod "d45565d5-bd55-4e94-8cac-0155e00f1368" (UID: "d45565d5-bd55-4e94-8cac-0155e00f1368"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.148829 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45565d5-bd55-4e94-8cac-0155e00f1368-client-ca" (OuterVolumeSpecName: "client-ca") pod "d45565d5-bd55-4e94-8cac-0155e00f1368" (UID: "d45565d5-bd55-4e94-8cac-0155e00f1368"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.160545 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45565d5-bd55-4e94-8cac-0155e00f1368-kube-api-access-5msr4" (OuterVolumeSpecName: "kube-api-access-5msr4") pod "d45565d5-bd55-4e94-8cac-0155e00f1368" (UID: "d45565d5-bd55-4e94-8cac-0155e00f1368"). InnerVolumeSpecName "kube-api-access-5msr4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.161808 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45565d5-bd55-4e94-8cac-0155e00f1368-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45565d5-bd55-4e94-8cac-0155e00f1368" (UID: "d45565d5-bd55-4e94-8cac-0155e00f1368"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.179890 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n5glb" podStartSLOduration=111.179875237 podStartE2EDuration="1m51.179875237s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:11.178667822 +0000 UTC m=+168.340043678" watchObservedRunningTime="2026-03-09 18:27:11.179875237 +0000 UTC m=+168.341251093" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.229915 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" podStartSLOduration=111.229898753 podStartE2EDuration="1m51.229898753s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:11.212555508 +0000 UTC m=+168.373931364" watchObservedRunningTime="2026-03-09 18:27:11.229898753 +0000 UTC m=+168.391274609" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.251383 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.251655 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45565d5-bd55-4e94-8cac-0155e00f1368-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.251693 4821 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d45565d5-bd55-4e94-8cac-0155e00f1368-client-ca\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.251705 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45565d5-bd55-4e94-8cac-0155e00f1368-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.251713 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5msr4\" (UniqueName: \"kubernetes.io/projected/d45565d5-bd55-4e94-8cac-0155e00f1368-kube-api-access-5msr4\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:11 crc kubenswrapper[4821]: E0309 18:27:11.251776 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:11.751762039 +0000 UTC m=+168.913137895 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.271013 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84964ccc5c-8jqgl"] Mar 09 18:27:11 crc kubenswrapper[4821]: E0309 18:27:11.271281 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d45565d5-bd55-4e94-8cac-0155e00f1368" containerName="route-controller-manager" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.271304 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="d45565d5-bd55-4e94-8cac-0155e00f1368" containerName="route-controller-manager" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.293791 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="d45565d5-bd55-4e94-8cac-0155e00f1368" containerName="route-controller-manager" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.294217 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84964ccc5c-8jqgl" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.296350 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84964ccc5c-8jqgl"] Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.347739 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vmzvr" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.353225 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b226d5cf-72e1-42b2-85ce-fcb78889ae4c-serving-cert\") pod \"route-controller-manager-84964ccc5c-8jqgl\" (UID: \"b226d5cf-72e1-42b2-85ce-fcb78889ae4c\") " pod="openshift-route-controller-manager/route-controller-manager-84964ccc5c-8jqgl" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.353307 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.353344 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b226d5cf-72e1-42b2-85ce-fcb78889ae4c-client-ca\") pod \"route-controller-manager-84964ccc5c-8jqgl\" (UID: \"b226d5cf-72e1-42b2-85ce-fcb78889ae4c\") " pod="openshift-route-controller-manager/route-controller-manager-84964ccc5c-8jqgl" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.353361 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b226d5cf-72e1-42b2-85ce-fcb78889ae4c-config\") pod \"route-controller-manager-84964ccc5c-8jqgl\" (UID: \"b226d5cf-72e1-42b2-85ce-fcb78889ae4c\") " pod="openshift-route-controller-manager/route-controller-manager-84964ccc5c-8jqgl" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.353396 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f69sj\" (UniqueName: \"kubernetes.io/projected/b226d5cf-72e1-42b2-85ce-fcb78889ae4c-kube-api-access-f69sj\") pod \"route-controller-manager-84964ccc5c-8jqgl\" (UID: \"b226d5cf-72e1-42b2-85ce-fcb78889ae4c\") " pod="openshift-route-controller-manager/route-controller-manager-84964ccc5c-8jqgl" Mar 09 18:27:11 crc kubenswrapper[4821]: E0309 18:27:11.353692 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:11.853680955 +0000 UTC m=+169.015056811 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.453969 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87c7fa5b-e1e9-43c4-9942-409c34ea5660-config\") pod \"87c7fa5b-e1e9-43c4-9942-409c34ea5660\" (UID: \"87c7fa5b-e1e9-43c4-9942-409c34ea5660\") " Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.454070 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87c7fa5b-e1e9-43c4-9942-409c34ea5660-serving-cert\") pod \"87c7fa5b-e1e9-43c4-9942-409c34ea5660\" (UID: \"87c7fa5b-e1e9-43c4-9942-409c34ea5660\") " Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.454096 4821 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-dwzbc\" (UniqueName: \"kubernetes.io/projected/87c7fa5b-e1e9-43c4-9942-409c34ea5660-kube-api-access-dwzbc\") pod \"87c7fa5b-e1e9-43c4-9942-409c34ea5660\" (UID: \"87c7fa5b-e1e9-43c4-9942-409c34ea5660\") " Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.454127 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87c7fa5b-e1e9-43c4-9942-409c34ea5660-proxy-ca-bundles\") pod \"87c7fa5b-e1e9-43c4-9942-409c34ea5660\" (UID: \"87c7fa5b-e1e9-43c4-9942-409c34ea5660\") " Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.454268 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.454309 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87c7fa5b-e1e9-43c4-9942-409c34ea5660-client-ca\") pod \"87c7fa5b-e1e9-43c4-9942-409c34ea5660\" (UID: \"87c7fa5b-e1e9-43c4-9942-409c34ea5660\") " Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.454527 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b226d5cf-72e1-42b2-85ce-fcb78889ae4c-client-ca\") pod \"route-controller-manager-84964ccc5c-8jqgl\" (UID: \"b226d5cf-72e1-42b2-85ce-fcb78889ae4c\") " pod="openshift-route-controller-manager/route-controller-manager-84964ccc5c-8jqgl" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.454571 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b226d5cf-72e1-42b2-85ce-fcb78889ae4c-config\") pod \"route-controller-manager-84964ccc5c-8jqgl\" (UID: \"b226d5cf-72e1-42b2-85ce-fcb78889ae4c\") " pod="openshift-route-controller-manager/route-controller-manager-84964ccc5c-8jqgl" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.454607 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f69sj\" (UniqueName: \"kubernetes.io/projected/b226d5cf-72e1-42b2-85ce-fcb78889ae4c-kube-api-access-f69sj\") pod \"route-controller-manager-84964ccc5c-8jqgl\" (UID: \"b226d5cf-72e1-42b2-85ce-fcb78889ae4c\") " pod="openshift-route-controller-manager/route-controller-manager-84964ccc5c-8jqgl" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.454683 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b226d5cf-72e1-42b2-85ce-fcb78889ae4c-serving-cert\") pod \"route-controller-manager-84964ccc5c-8jqgl\" (UID: \"b226d5cf-72e1-42b2-85ce-fcb78889ae4c\") " pod="openshift-route-controller-manager/route-controller-manager-84964ccc5c-8jqgl" Mar 09 18:27:11 crc kubenswrapper[4821]: E0309 18:27:11.456475 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:11.956448427 +0000 UTC m=+169.117824283 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.457201 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87c7fa5b-e1e9-43c4-9942-409c34ea5660-config" (OuterVolumeSpecName: "config") pod "87c7fa5b-e1e9-43c4-9942-409c34ea5660" (UID: "87c7fa5b-e1e9-43c4-9942-409c34ea5660"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.457940 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87c7fa5b-e1e9-43c4-9942-409c34ea5660-client-ca" (OuterVolumeSpecName: "client-ca") pod "87c7fa5b-e1e9-43c4-9942-409c34ea5660" (UID: "87c7fa5b-e1e9-43c4-9942-409c34ea5660"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.458862 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b226d5cf-72e1-42b2-85ce-fcb78889ae4c-client-ca\") pod \"route-controller-manager-84964ccc5c-8jqgl\" (UID: \"b226d5cf-72e1-42b2-85ce-fcb78889ae4c\") " pod="openshift-route-controller-manager/route-controller-manager-84964ccc5c-8jqgl" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.459850 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b226d5cf-72e1-42b2-85ce-fcb78889ae4c-config\") pod \"route-controller-manager-84964ccc5c-8jqgl\" (UID: \"b226d5cf-72e1-42b2-85ce-fcb78889ae4c\") " pod="openshift-route-controller-manager/route-controller-manager-84964ccc5c-8jqgl" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.464548 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b226d5cf-72e1-42b2-85ce-fcb78889ae4c-serving-cert\") pod \"route-controller-manager-84964ccc5c-8jqgl\" (UID: \"b226d5cf-72e1-42b2-85ce-fcb78889ae4c\") " pod="openshift-route-controller-manager/route-controller-manager-84964ccc5c-8jqgl" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.464761 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87c7fa5b-e1e9-43c4-9942-409c34ea5660-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "87c7fa5b-e1e9-43c4-9942-409c34ea5660" (UID: "87c7fa5b-e1e9-43c4-9942-409c34ea5660"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.465460 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87c7fa5b-e1e9-43c4-9942-409c34ea5660-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "87c7fa5b-e1e9-43c4-9942-409c34ea5660" (UID: "87c7fa5b-e1e9-43c4-9942-409c34ea5660"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.481708 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87c7fa5b-e1e9-43c4-9942-409c34ea5660-kube-api-access-dwzbc" (OuterVolumeSpecName: "kube-api-access-dwzbc") pod "87c7fa5b-e1e9-43c4-9942-409c34ea5660" (UID: "87c7fa5b-e1e9-43c4-9942-409c34ea5660"). InnerVolumeSpecName "kube-api-access-dwzbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.490243 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f69sj\" (UniqueName: \"kubernetes.io/projected/b226d5cf-72e1-42b2-85ce-fcb78889ae4c-kube-api-access-f69sj\") pod \"route-controller-manager-84964ccc5c-8jqgl\" (UID: \"b226d5cf-72e1-42b2-85ce-fcb78889ae4c\") " pod="openshift-route-controller-manager/route-controller-manager-84964ccc5c-8jqgl" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.495395 4821 patch_prober.go:28] interesting pod/console-operator-58897d9998-gg4ds container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.495462 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-gg4ds" 
podUID="934a74a7-234e-44f1-bc6e-a13661836b6b" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.505225 4821 ???:1] "http: TLS handshake error from 192.168.126.11:48196: no serving certificate available for the kubelet" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.556541 4821 patch_prober.go:28] interesting pod/router-default-5444994796-4ntmx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 18:27:11 crc kubenswrapper[4821]: [-]has-synced failed: reason withheld Mar 09 18:27:11 crc kubenswrapper[4821]: [+]process-running ok Mar 09 18:27:11 crc kubenswrapper[4821]: healthz check failed Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.556596 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4ntmx" podUID="a28f17d7-69dc-4014-a347-a26f55d55ace" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.557136 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.557254 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87c7fa5b-e1e9-43c4-9942-409c34ea5660-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:11 crc 
kubenswrapper[4821]: I0309 18:27:11.557264 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87c7fa5b-e1e9-43c4-9942-409c34ea5660-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.557274 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwzbc\" (UniqueName: \"kubernetes.io/projected/87c7fa5b-e1e9-43c4-9942-409c34ea5660-kube-api-access-dwzbc\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.557282 4821 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87c7fa5b-e1e9-43c4-9942-409c34ea5660-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.557290 4821 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/87c7fa5b-e1e9-43c4-9942-409c34ea5660-client-ca\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:11 crc kubenswrapper[4821]: E0309 18:27:11.557559 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:12.05754662 +0000 UTC m=+169.218922476 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.558805 4821 generic.go:334] "Generic (PLEG): container finished" podID="87c7fa5b-e1e9-43c4-9942-409c34ea5660" containerID="327f0aae23ddc4958db4ec0eac1c1794174ce36a46ce1f77e6453b2f4ee884b2" exitCode=0 Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.558911 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vmzvr" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.586374 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vmzvr" event={"ID":"87c7fa5b-e1e9-43c4-9942-409c34ea5660","Type":"ContainerDied","Data":"327f0aae23ddc4958db4ec0eac1c1794174ce36a46ce1f77e6453b2f4ee884b2"} Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.586415 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vmzvr" event={"ID":"87c7fa5b-e1e9-43c4-9942-409c34ea5660","Type":"ContainerDied","Data":"9cd376b1377897eec35c4f901f9ad95aa898e74066c9af2578282764d10c28ec"} Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.586434 4821 scope.go:117] "RemoveContainer" containerID="327f0aae23ddc4958db4ec0eac1c1794174ce36a46ce1f77e6453b2f4ee884b2" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.608588 4821 generic.go:334] "Generic (PLEG): container finished" podID="d45565d5-bd55-4e94-8cac-0155e00f1368" 
containerID="d58ddc3101638c12eaffdca16457e8342a08db404f91e41c9679e3b9a1c89f79" exitCode=0 Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.611232 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qqsgs" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.617084 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qqsgs" event={"ID":"d45565d5-bd55-4e94-8cac-0155e00f1368","Type":"ContainerDied","Data":"d58ddc3101638c12eaffdca16457e8342a08db404f91e41c9679e3b9a1c89f79"} Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.617136 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qqsgs" event={"ID":"d45565d5-bd55-4e94-8cac-0155e00f1368","Type":"ContainerDied","Data":"ac8dc429fe1ac6ddc16ea17005d1a84acdff6444a78c53dea99b7b58ccc69236"} Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.617920 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-kzlwq" podUID="04e01207-4a95-4a32-84df-2d4c69d71fbf" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://0297c9480d2443eb8d56cbb4d92d376dab02b87b244db7afebe59a889188aa16" gracePeriod=30 Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.626649 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7nw2x" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.630892 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84964ccc5c-8jqgl" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.633701 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jrc42" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.660092 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:11 crc kubenswrapper[4821]: E0309 18:27:11.666143 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:12.16611458 +0000 UTC m=+169.327490436 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.669745 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:11 crc kubenswrapper[4821]: E0309 18:27:11.671988 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:12.17197022 +0000 UTC m=+169.333346076 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.775862 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:11 crc kubenswrapper[4821]: E0309 18:27:11.777610 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:12.277595184 +0000 UTC m=+169.438971040 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.831421 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vmzvr"] Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.835416 4821 scope.go:117] "RemoveContainer" containerID="327f0aae23ddc4958db4ec0eac1c1794174ce36a46ce1f77e6453b2f4ee884b2" Mar 09 18:27:11 crc kubenswrapper[4821]: E0309 18:27:11.836523 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"327f0aae23ddc4958db4ec0eac1c1794174ce36a46ce1f77e6453b2f4ee884b2\": container with ID starting with 327f0aae23ddc4958db4ec0eac1c1794174ce36a46ce1f77e6453b2f4ee884b2 not found: ID does not exist" containerID="327f0aae23ddc4958db4ec0eac1c1794174ce36a46ce1f77e6453b2f4ee884b2" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.836537 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vmzvr"] Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.836564 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"327f0aae23ddc4958db4ec0eac1c1794174ce36a46ce1f77e6453b2f4ee884b2"} err="failed to get container status \"327f0aae23ddc4958db4ec0eac1c1794174ce36a46ce1f77e6453b2f4ee884b2\": rpc error: code = NotFound desc = could not find container \"327f0aae23ddc4958db4ec0eac1c1794174ce36a46ce1f77e6453b2f4ee884b2\": container with ID starting with 
327f0aae23ddc4958db4ec0eac1c1794174ce36a46ce1f77e6453b2f4ee884b2 not found: ID does not exist" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.843624 4821 scope.go:117] "RemoveContainer" containerID="d58ddc3101638c12eaffdca16457e8342a08db404f91e41c9679e3b9a1c89f79" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.859712 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qqsgs"] Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.868281 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qqsgs"] Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.883802 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:11 crc kubenswrapper[4821]: E0309 18:27:11.884132 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:12.384117614 +0000 UTC m=+169.545493470 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.895588 4821 scope.go:117] "RemoveContainer" containerID="d58ddc3101638c12eaffdca16457e8342a08db404f91e41c9679e3b9a1c89f79" Mar 09 18:27:11 crc kubenswrapper[4821]: E0309 18:27:11.896033 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d58ddc3101638c12eaffdca16457e8342a08db404f91e41c9679e3b9a1c89f79\": container with ID starting with d58ddc3101638c12eaffdca16457e8342a08db404f91e41c9679e3b9a1c89f79 not found: ID does not exist" containerID="d58ddc3101638c12eaffdca16457e8342a08db404f91e41c9679e3b9a1c89f79" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.896062 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d58ddc3101638c12eaffdca16457e8342a08db404f91e41c9679e3b9a1c89f79"} err="failed to get container status \"d58ddc3101638c12eaffdca16457e8342a08db404f91e41c9679e3b9a1c89f79\": rpc error: code = NotFound desc = could not find container \"d58ddc3101638c12eaffdca16457e8342a08db404f91e41c9679e3b9a1c89f79\": container with ID starting with d58ddc3101638c12eaffdca16457e8342a08db404f91e41c9679e3b9a1c89f79 not found: ID does not exist" Mar 09 18:27:11 crc kubenswrapper[4821]: I0309 18:27:11.985774 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:11 crc kubenswrapper[4821]: E0309 18:27:11.986129 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:12.486114634 +0000 UTC m=+169.647490490 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.011947 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-gg4ds" Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.032089 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2lkq2" Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.090471 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:12 crc kubenswrapper[4821]: E0309 18:27:12.090831 4821 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:12.590819691 +0000 UTC m=+169.752195547 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.192925 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:12 crc kubenswrapper[4821]: E0309 18:27:12.193276 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:12.693261883 +0000 UTC m=+169.854637739 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.203715 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84964ccc5c-8jqgl"] Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.206312 4821 ???:1] "http: TLS handshake error from 192.168.126.11:48206: no serving certificate available for the kubelet" Mar 09 18:27:12 crc kubenswrapper[4821]: W0309 18:27:12.244095 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb226d5cf_72e1_42b2_85ce_fcb78889ae4c.slice/crio-c962d2227deb3bc0695d5919def4481997039bed1f6dc0dfbf48738553129a77 WatchSource:0}: Error finding container c962d2227deb3bc0695d5919def4481997039bed1f6dc0dfbf48738553129a77: Status 404 returned error can't find the container with id c962d2227deb3bc0695d5919def4481997039bed1f6dc0dfbf48738553129a77 Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.259793 4821 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.294896 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:12 crc kubenswrapper[4821]: E0309 18:27:12.295189 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:12.795172039 +0000 UTC m=+169.956547895 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.380349 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l97hl" Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.395694 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:12 crc kubenswrapper[4821]: E0309 18:27:12.396018 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-09 18:27:12.896003114 +0000 UTC m=+170.057378970 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.498285 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:12 crc kubenswrapper[4821]: E0309 18:27:12.498651 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:12.998636422 +0000 UTC m=+170.160012278 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.546845 4821 patch_prober.go:28] interesting pod/router-default-5444994796-4ntmx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 18:27:12 crc kubenswrapper[4821]: [-]has-synced failed: reason withheld Mar 09 18:27:12 crc kubenswrapper[4821]: [+]process-running ok Mar 09 18:27:12 crc kubenswrapper[4821]: healthz check failed Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.546909 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4ntmx" podUID="a28f17d7-69dc-4014-a347-a26f55d55ace" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.601654 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:12 crc kubenswrapper[4821]: E0309 18:27:12.602072 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-09 18:27:13.102058032 +0000 UTC m=+170.263433888 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.617421 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-vdrsg" event={"ID":"75d58d1e-e673-4305-9d09-2cfd323769fd","Type":"ContainerStarted","Data":"4f6232766c1d950faa27e9c7aa7f0de7460365608a62ad3ef1676f49a5390ece"} Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.617468 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-vdrsg" event={"ID":"75d58d1e-e673-4305-9d09-2cfd323769fd","Type":"ContainerStarted","Data":"950884752bec78c844eeb407b73404f82de888b833840b306b588821c771ca84"} Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.621264 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84964ccc5c-8jqgl" event={"ID":"b226d5cf-72e1-42b2-85ce-fcb78889ae4c","Type":"ContainerStarted","Data":"8a42f38f95da6665e285dedd039af5293f1031baddf5781825a5c29952c213c8"} Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.621300 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84964ccc5c-8jqgl" event={"ID":"b226d5cf-72e1-42b2-85ce-fcb78889ae4c","Type":"ContainerStarted","Data":"c962d2227deb3bc0695d5919def4481997039bed1f6dc0dfbf48738553129a77"} Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.622290 4821 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-84964ccc5c-8jqgl" Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.623197 4821 patch_prober.go:28] interesting pod/route-controller-manager-84964ccc5c-8jqgl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.45:8443/healthz\": dial tcp 10.217.0.45:8443: connect: connection refused" start-of-body= Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.623232 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-84964ccc5c-8jqgl" podUID="b226d5cf-72e1-42b2-85ce-fcb78889ae4c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.45:8443/healthz\": dial tcp 10.217.0.45:8443: connect: connection refused" Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.657175 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-84964ccc5c-8jqgl" podStartSLOduration=4.657156245 podStartE2EDuration="4.657156245s" podCreationTimestamp="2026-03-09 18:27:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:12.6411663 +0000 UTC m=+169.802542156" watchObservedRunningTime="2026-03-09 18:27:12.657156245 +0000 UTC m=+169.818532101" Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.702884 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:12 
crc kubenswrapper[4821]: E0309 18:27:12.704160 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-09 18:27:13.204144313 +0000 UTC m=+170.365520169 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xbxp5" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.716575 4821 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-03-09T18:27:12.259814741Z","Handler":null,"Name":""} Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.718655 4821 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.718684 4821 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.783702 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nk4bg"] Mar 09 18:27:12 crc kubenswrapper[4821]: E0309 18:27:12.783929 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87c7fa5b-e1e9-43c4-9942-409c34ea5660" containerName="controller-manager" Mar 09 18:27:12 crc 
kubenswrapper[4821]: I0309 18:27:12.783943 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="87c7fa5b-e1e9-43c4-9942-409c34ea5660" containerName="controller-manager" Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.784075 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="87c7fa5b-e1e9-43c4-9942-409c34ea5660" containerName="controller-manager" Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.785373 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nk4bg" Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.787659 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.794436 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nk4bg"] Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.803895 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.808784 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.905427 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.905507 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jw8d\" (UniqueName: \"kubernetes.io/projected/07a1db8f-6912-4ff8-9943-24c334031dfb-kube-api-access-5jw8d\") pod \"community-operators-nk4bg\" (UID: \"07a1db8f-6912-4ff8-9943-24c334031dfb\") " pod="openshift-marketplace/community-operators-nk4bg" Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.905609 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07a1db8f-6912-4ff8-9943-24c334031dfb-utilities\") pod \"community-operators-nk4bg\" (UID: \"07a1db8f-6912-4ff8-9943-24c334031dfb\") " pod="openshift-marketplace/community-operators-nk4bg" Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.905626 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07a1db8f-6912-4ff8-9943-24c334031dfb-catalog-content\") pod \"community-operators-nk4bg\" (UID: \"07a1db8f-6912-4ff8-9943-24c334031dfb\") " pod="openshift-marketplace/community-operators-nk4bg" Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.908520 4821 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.908549 4821 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.928591 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xbxp5\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.987715 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sn8zk"] Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.989074 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sn8zk" Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.991032 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 09 18:27:12 crc kubenswrapper[4821]: I0309 18:27:12.994143 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sn8zk"] Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.007663 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07a1db8f-6912-4ff8-9943-24c334031dfb-utilities\") pod \"community-operators-nk4bg\" (UID: \"07a1db8f-6912-4ff8-9943-24c334031dfb\") " pod="openshift-marketplace/community-operators-nk4bg" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.007696 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07a1db8f-6912-4ff8-9943-24c334031dfb-catalog-content\") pod \"community-operators-nk4bg\" (UID: \"07a1db8f-6912-4ff8-9943-24c334031dfb\") " pod="openshift-marketplace/community-operators-nk4bg" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.007841 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jw8d\" (UniqueName: \"kubernetes.io/projected/07a1db8f-6912-4ff8-9943-24c334031dfb-kube-api-access-5jw8d\") pod \"community-operators-nk4bg\" (UID: \"07a1db8f-6912-4ff8-9943-24c334031dfb\") " pod="openshift-marketplace/community-operators-nk4bg" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.008544 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07a1db8f-6912-4ff8-9943-24c334031dfb-catalog-content\") pod \"community-operators-nk4bg\" (UID: \"07a1db8f-6912-4ff8-9943-24c334031dfb\") " 
pod="openshift-marketplace/community-operators-nk4bg" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.008601 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07a1db8f-6912-4ff8-9943-24c334031dfb-utilities\") pod \"community-operators-nk4bg\" (UID: \"07a1db8f-6912-4ff8-9943-24c334031dfb\") " pod="openshift-marketplace/community-operators-nk4bg" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.034584 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jw8d\" (UniqueName: \"kubernetes.io/projected/07a1db8f-6912-4ff8-9943-24c334031dfb-kube-api-access-5jw8d\") pod \"community-operators-nk4bg\" (UID: \"07a1db8f-6912-4ff8-9943-24c334031dfb\") " pod="openshift-marketplace/community-operators-nk4bg" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.108073 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nk4bg" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.108700 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7w5g\" (UniqueName: \"kubernetes.io/projected/70ed8562-ec3e-49a0-8ccd-885eea90e9c1-kube-api-access-h7w5g\") pod \"certified-operators-sn8zk\" (UID: \"70ed8562-ec3e-49a0-8ccd-885eea90e9c1\") " pod="openshift-marketplace/certified-operators-sn8zk" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.108750 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ed8562-ec3e-49a0-8ccd-885eea90e9c1-catalog-content\") pod \"certified-operators-sn8zk\" (UID: \"70ed8562-ec3e-49a0-8ccd-885eea90e9c1\") " pod="openshift-marketplace/certified-operators-sn8zk" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.108774 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ed8562-ec3e-49a0-8ccd-885eea90e9c1-utilities\") pod \"certified-operators-sn8zk\" (UID: \"70ed8562-ec3e-49a0-8ccd-885eea90e9c1\") " pod="openshift-marketplace/certified-operators-sn8zk" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.127387 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.182955 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kpr7q"] Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.183837 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kpr7q" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.190564 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kpr7q"] Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.209851 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ed8562-ec3e-49a0-8ccd-885eea90e9c1-utilities\") pod \"certified-operators-sn8zk\" (UID: \"70ed8562-ec3e-49a0-8ccd-885eea90e9c1\") " pod="openshift-marketplace/certified-operators-sn8zk" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.209978 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7w5g\" (UniqueName: \"kubernetes.io/projected/70ed8562-ec3e-49a0-8ccd-885eea90e9c1-kube-api-access-h7w5g\") pod \"certified-operators-sn8zk\" (UID: \"70ed8562-ec3e-49a0-8ccd-885eea90e9c1\") " pod="openshift-marketplace/certified-operators-sn8zk" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.210004 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ed8562-ec3e-49a0-8ccd-885eea90e9c1-catalog-content\") pod \"certified-operators-sn8zk\" (UID: \"70ed8562-ec3e-49a0-8ccd-885eea90e9c1\") " pod="openshift-marketplace/certified-operators-sn8zk" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.210439 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ed8562-ec3e-49a0-8ccd-885eea90e9c1-utilities\") pod \"certified-operators-sn8zk\" (UID: \"70ed8562-ec3e-49a0-8ccd-885eea90e9c1\") " pod="openshift-marketplace/certified-operators-sn8zk" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.210553 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ed8562-ec3e-49a0-8ccd-885eea90e9c1-catalog-content\") pod \"certified-operators-sn8zk\" (UID: \"70ed8562-ec3e-49a0-8ccd-885eea90e9c1\") " pod="openshift-marketplace/certified-operators-sn8zk" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.237392 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7w5g\" (UniqueName: \"kubernetes.io/projected/70ed8562-ec3e-49a0-8ccd-885eea90e9c1-kube-api-access-h7w5g\") pod \"certified-operators-sn8zk\" (UID: \"70ed8562-ec3e-49a0-8ccd-885eea90e9c1\") " pod="openshift-marketplace/certified-operators-sn8zk" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.315887 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/902f7680-4f21-43d9-9ca1-16e5746556a9-utilities\") pod \"community-operators-kpr7q\" (UID: \"902f7680-4f21-43d9-9ca1-16e5746556a9\") " pod="openshift-marketplace/community-operators-kpr7q" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.315995 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-xtsrd\" (UniqueName: \"kubernetes.io/projected/902f7680-4f21-43d9-9ca1-16e5746556a9-kube-api-access-xtsrd\") pod \"community-operators-kpr7q\" (UID: \"902f7680-4f21-43d9-9ca1-16e5746556a9\") " pod="openshift-marketplace/community-operators-kpr7q" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.316023 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/902f7680-4f21-43d9-9ca1-16e5746556a9-catalog-content\") pod \"community-operators-kpr7q\" (UID: \"902f7680-4f21-43d9-9ca1-16e5746556a9\") " pod="openshift-marketplace/community-operators-kpr7q" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.330801 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sn8zk" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.349075 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nk4bg"] Mar 09 18:27:13 crc kubenswrapper[4821]: W0309 18:27:13.357601 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07a1db8f_6912_4ff8_9943_24c334031dfb.slice/crio-85a8bca09db8732d24af0b309e1b26277bb38db2b96719905012c11a1f9088e9 WatchSource:0}: Error finding container 85a8bca09db8732d24af0b309e1b26277bb38db2b96719905012c11a1f9088e9: Status 404 returned error can't find the container with id 85a8bca09db8732d24af0b309e1b26277bb38db2b96719905012c11a1f9088e9 Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.380972 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hzkk5"] Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.383223 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hzkk5" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.392129 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hzkk5"] Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.417136 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtsrd\" (UniqueName: \"kubernetes.io/projected/902f7680-4f21-43d9-9ca1-16e5746556a9-kube-api-access-xtsrd\") pod \"community-operators-kpr7q\" (UID: \"902f7680-4f21-43d9-9ca1-16e5746556a9\") " pod="openshift-marketplace/community-operators-kpr7q" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.417192 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/902f7680-4f21-43d9-9ca1-16e5746556a9-catalog-content\") pod \"community-operators-kpr7q\" (UID: \"902f7680-4f21-43d9-9ca1-16e5746556a9\") " pod="openshift-marketplace/community-operators-kpr7q" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.417232 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/902f7680-4f21-43d9-9ca1-16e5746556a9-utilities\") pod \"community-operators-kpr7q\" (UID: \"902f7680-4f21-43d9-9ca1-16e5746556a9\") " pod="openshift-marketplace/community-operators-kpr7q" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.417652 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/902f7680-4f21-43d9-9ca1-16e5746556a9-utilities\") pod \"community-operators-kpr7q\" (UID: \"902f7680-4f21-43d9-9ca1-16e5746556a9\") " pod="openshift-marketplace/community-operators-kpr7q" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.418069 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/902f7680-4f21-43d9-9ca1-16e5746556a9-catalog-content\") pod \"community-operators-kpr7q\" (UID: \"902f7680-4f21-43d9-9ca1-16e5746556a9\") " pod="openshift-marketplace/community-operators-kpr7q" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.424559 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xbxp5"] Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.436875 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtsrd\" (UniqueName: \"kubernetes.io/projected/902f7680-4f21-43d9-9ca1-16e5746556a9-kube-api-access-xtsrd\") pod \"community-operators-kpr7q\" (UID: \"902f7680-4f21-43d9-9ca1-16e5746556a9\") " pod="openshift-marketplace/community-operators-kpr7q" Mar 09 18:27:13 crc kubenswrapper[4821]: W0309 18:27:13.438584 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0a42c85_7fab_45fc_b0b0_df2ae5082cd8.slice/crio-fb3579d4693ea2f74a975e02218e71df16cb1642b5f7b227f44c6549cb013536 WatchSource:0}: Error finding container fb3579d4693ea2f74a975e02218e71df16cb1642b5f7b227f44c6549cb013536: Status 404 returned error can't find the container with id fb3579d4693ea2f74a975e02218e71df16cb1642b5f7b227f44c6549cb013536 Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.510955 4821 ???:1] "http: TLS handshake error from 192.168.126.11:48210: no serving certificate available for the kubelet" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.518741 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/423c9815-b133-45cf-bc0c-3f6291e1106b-catalog-content\") pod \"certified-operators-hzkk5\" (UID: \"423c9815-b133-45cf-bc0c-3f6291e1106b\") " pod="openshift-marketplace/certified-operators-hzkk5" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.518833 
4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/423c9815-b133-45cf-bc0c-3f6291e1106b-utilities\") pod \"certified-operators-hzkk5\" (UID: \"423c9815-b133-45cf-bc0c-3f6291e1106b\") " pod="openshift-marketplace/certified-operators-hzkk5" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.518854 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq94p\" (UniqueName: \"kubernetes.io/projected/423c9815-b133-45cf-bc0c-3f6291e1106b-kube-api-access-lq94p\") pod \"certified-operators-hzkk5\" (UID: \"423c9815-b133-45cf-bc0c-3f6291e1106b\") " pod="openshift-marketplace/certified-operators-hzkk5" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.521238 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-79567c6bd7-nzp8k"] Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.522294 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-79567c6bd7-nzp8k" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.525889 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.526361 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.526551 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.526662 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.527550 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.528054 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.531803 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.539949 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kpr7q" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.548945 4821 patch_prober.go:28] interesting pod/router-default-5444994796-4ntmx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 18:27:13 crc kubenswrapper[4821]: [-]has-synced failed: reason withheld Mar 09 18:27:13 crc kubenswrapper[4821]: [+]process-running ok Mar 09 18:27:13 crc kubenswrapper[4821]: healthz check failed Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.549064 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4ntmx" podUID="a28f17d7-69dc-4014-a347-a26f55d55ace" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.551814 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79567c6bd7-nzp8k"] Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.573370 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87c7fa5b-e1e9-43c4-9942-409c34ea5660" path="/var/lib/kubelet/pods/87c7fa5b-e1e9-43c4-9942-409c34ea5660/volumes" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.574338 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.575011 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45565d5-bd55-4e94-8cac-0155e00f1368" path="/var/lib/kubelet/pods/d45565d5-bd55-4e94-8cac-0155e00f1368/volumes" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.619953 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/423c9815-b133-45cf-bc0c-3f6291e1106b-catalog-content\") pod \"certified-operators-hzkk5\" (UID: \"423c9815-b133-45cf-bc0c-3f6291e1106b\") " pod="openshift-marketplace/certified-operators-hzkk5" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.619997 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00edbc1a-39e7-43a3-8534-6b645d2859f5-config\") pod \"controller-manager-79567c6bd7-nzp8k\" (UID: \"00edbc1a-39e7-43a3-8534-6b645d2859f5\") " pod="openshift-controller-manager/controller-manager-79567c6bd7-nzp8k" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.620063 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00edbc1a-39e7-43a3-8534-6b645d2859f5-serving-cert\") pod \"controller-manager-79567c6bd7-nzp8k\" (UID: \"00edbc1a-39e7-43a3-8534-6b645d2859f5\") " pod="openshift-controller-manager/controller-manager-79567c6bd7-nzp8k" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.620084 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hckk8\" (UniqueName: \"kubernetes.io/projected/00edbc1a-39e7-43a3-8534-6b645d2859f5-kube-api-access-hckk8\") pod \"controller-manager-79567c6bd7-nzp8k\" (UID: \"00edbc1a-39e7-43a3-8534-6b645d2859f5\") " pod="openshift-controller-manager/controller-manager-79567c6bd7-nzp8k" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.620106 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00edbc1a-39e7-43a3-8534-6b645d2859f5-proxy-ca-bundles\") pod \"controller-manager-79567c6bd7-nzp8k\" (UID: \"00edbc1a-39e7-43a3-8534-6b645d2859f5\") " 
pod="openshift-controller-manager/controller-manager-79567c6bd7-nzp8k" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.620127 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/423c9815-b133-45cf-bc0c-3f6291e1106b-utilities\") pod \"certified-operators-hzkk5\" (UID: \"423c9815-b133-45cf-bc0c-3f6291e1106b\") " pod="openshift-marketplace/certified-operators-hzkk5" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.620142 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq94p\" (UniqueName: \"kubernetes.io/projected/423c9815-b133-45cf-bc0c-3f6291e1106b-kube-api-access-lq94p\") pod \"certified-operators-hzkk5\" (UID: \"423c9815-b133-45cf-bc0c-3f6291e1106b\") " pod="openshift-marketplace/certified-operators-hzkk5" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.620177 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00edbc1a-39e7-43a3-8534-6b645d2859f5-client-ca\") pod \"controller-manager-79567c6bd7-nzp8k\" (UID: \"00edbc1a-39e7-43a3-8534-6b645d2859f5\") " pod="openshift-controller-manager/controller-manager-79567c6bd7-nzp8k" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.620852 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/423c9815-b133-45cf-bc0c-3f6291e1106b-utilities\") pod \"certified-operators-hzkk5\" (UID: \"423c9815-b133-45cf-bc0c-3f6291e1106b\") " pod="openshift-marketplace/certified-operators-hzkk5" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.620905 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/423c9815-b133-45cf-bc0c-3f6291e1106b-catalog-content\") pod \"certified-operators-hzkk5\" (UID: 
\"423c9815-b133-45cf-bc0c-3f6291e1106b\") " pod="openshift-marketplace/certified-operators-hzkk5" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.637441 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" event={"ID":"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8","Type":"ContainerStarted","Data":"c46f3f486c116c0b4c8b13755c275a70b7a2dc5214375a6103fea84fa1ac5d04"} Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.637479 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" event={"ID":"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8","Type":"ContainerStarted","Data":"fb3579d4693ea2f74a975e02218e71df16cb1642b5f7b227f44c6549cb013536"} Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.638246 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.641095 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq94p\" (UniqueName: \"kubernetes.io/projected/423c9815-b133-45cf-bc0c-3f6291e1106b-kube-api-access-lq94p\") pod \"certified-operators-hzkk5\" (UID: \"423c9815-b133-45cf-bc0c-3f6291e1106b\") " pod="openshift-marketplace/certified-operators-hzkk5" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.642984 4821 generic.go:334] "Generic (PLEG): container finished" podID="aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c" containerID="6c1f3b41ca628899a4c32729eaf86e0fec7c29a59623147234462ad6531945f7" exitCode=0 Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.643024 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29551335-b9jvf" event={"ID":"aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c","Type":"ContainerDied","Data":"6c1f3b41ca628899a4c32729eaf86e0fec7c29a59623147234462ad6531945f7"} Mar 09 18:27:13 crc kubenswrapper[4821]: 
I0309 18:27:13.645867 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-vdrsg" event={"ID":"75d58d1e-e673-4305-9d09-2cfd323769fd","Type":"ContainerStarted","Data":"b594aa5cbf66677cc05f22e93e2cd796a360631acddb84772499d4502da0b6e1"} Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.648661 4821 generic.go:334] "Generic (PLEG): container finished" podID="07a1db8f-6912-4ff8-9943-24c334031dfb" containerID="13fef06871fa0a4e0871aaad1057236ae03da0e7577c7ae35df29a4adf7b9028" exitCode=0 Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.649034 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nk4bg" event={"ID":"07a1db8f-6912-4ff8-9943-24c334031dfb","Type":"ContainerDied","Data":"13fef06871fa0a4e0871aaad1057236ae03da0e7577c7ae35df29a4adf7b9028"} Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.649076 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nk4bg" event={"ID":"07a1db8f-6912-4ff8-9943-24c334031dfb","Type":"ContainerStarted","Data":"85a8bca09db8732d24af0b309e1b26277bb38db2b96719905012c11a1f9088e9"} Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.655154 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-84964ccc5c-8jqgl" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.663734 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" podStartSLOduration=113.663713773 podStartE2EDuration="1m53.663713773s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:13.659381947 +0000 UTC m=+170.820757813" watchObservedRunningTime="2026-03-09 18:27:13.663713773 +0000 UTC m=+170.825089629" Mar 09 
18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.704813 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hzkk5" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.712910 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-vdrsg" podStartSLOduration=10.712886864 podStartE2EDuration="10.712886864s" podCreationTimestamp="2026-03-09 18:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:13.709688711 +0000 UTC m=+170.871064577" watchObservedRunningTime="2026-03-09 18:27:13.712886864 +0000 UTC m=+170.874262720" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.727555 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sn8zk"] Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.728468 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00edbc1a-39e7-43a3-8534-6b645d2859f5-serving-cert\") pod \"controller-manager-79567c6bd7-nzp8k\" (UID: \"00edbc1a-39e7-43a3-8534-6b645d2859f5\") " pod="openshift-controller-manager/controller-manager-79567c6bd7-nzp8k" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.728530 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hckk8\" (UniqueName: \"kubernetes.io/projected/00edbc1a-39e7-43a3-8534-6b645d2859f5-kube-api-access-hckk8\") pod \"controller-manager-79567c6bd7-nzp8k\" (UID: \"00edbc1a-39e7-43a3-8534-6b645d2859f5\") " pod="openshift-controller-manager/controller-manager-79567c6bd7-nzp8k" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.728595 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/00edbc1a-39e7-43a3-8534-6b645d2859f5-proxy-ca-bundles\") pod \"controller-manager-79567c6bd7-nzp8k\" (UID: \"00edbc1a-39e7-43a3-8534-6b645d2859f5\") " pod="openshift-controller-manager/controller-manager-79567c6bd7-nzp8k" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.728801 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00edbc1a-39e7-43a3-8534-6b645d2859f5-client-ca\") pod \"controller-manager-79567c6bd7-nzp8k\" (UID: \"00edbc1a-39e7-43a3-8534-6b645d2859f5\") " pod="openshift-controller-manager/controller-manager-79567c6bd7-nzp8k" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.728954 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00edbc1a-39e7-43a3-8534-6b645d2859f5-config\") pod \"controller-manager-79567c6bd7-nzp8k\" (UID: \"00edbc1a-39e7-43a3-8534-6b645d2859f5\") " pod="openshift-controller-manager/controller-manager-79567c6bd7-nzp8k" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.731773 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00edbc1a-39e7-43a3-8534-6b645d2859f5-config\") pod \"controller-manager-79567c6bd7-nzp8k\" (UID: \"00edbc1a-39e7-43a3-8534-6b645d2859f5\") " pod="openshift-controller-manager/controller-manager-79567c6bd7-nzp8k" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.732041 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00edbc1a-39e7-43a3-8534-6b645d2859f5-client-ca\") pod \"controller-manager-79567c6bd7-nzp8k\" (UID: \"00edbc1a-39e7-43a3-8534-6b645d2859f5\") " pod="openshift-controller-manager/controller-manager-79567c6bd7-nzp8k" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.733735 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00edbc1a-39e7-43a3-8534-6b645d2859f5-proxy-ca-bundles\") pod \"controller-manager-79567c6bd7-nzp8k\" (UID: \"00edbc1a-39e7-43a3-8534-6b645d2859f5\") " pod="openshift-controller-manager/controller-manager-79567c6bd7-nzp8k" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.736147 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00edbc1a-39e7-43a3-8534-6b645d2859f5-serving-cert\") pod \"controller-manager-79567c6bd7-nzp8k\" (UID: \"00edbc1a-39e7-43a3-8534-6b645d2859f5\") " pod="openshift-controller-manager/controller-manager-79567c6bd7-nzp8k" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.750994 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hckk8\" (UniqueName: \"kubernetes.io/projected/00edbc1a-39e7-43a3-8534-6b645d2859f5-kube-api-access-hckk8\") pod \"controller-manager-79567c6bd7-nzp8k\" (UID: \"00edbc1a-39e7-43a3-8534-6b645d2859f5\") " pod="openshift-controller-manager/controller-manager-79567c6bd7-nzp8k" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.915921 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-79567c6bd7-nzp8k" Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.921098 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hzkk5"] Mar 09 18:27:13 crc kubenswrapper[4821]: I0309 18:27:13.970041 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kpr7q"] Mar 09 18:27:14 crc kubenswrapper[4821]: W0309 18:27:14.003438 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod902f7680_4f21_43d9_9ca1_16e5746556a9.slice/crio-d3fe2292d0664f73b1e880a97e12b57b84ed5d6a3204167411877b8cb86bbd1c WatchSource:0}: Error finding container d3fe2292d0664f73b1e880a97e12b57b84ed5d6a3204167411877b8cb86bbd1c: Status 404 returned error can't find the container with id d3fe2292d0664f73b1e880a97e12b57b84ed5d6a3204167411877b8cb86bbd1c Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.166109 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.167541 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.170896 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.171045 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.172673 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.195028 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79567c6bd7-nzp8k"] Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.280989 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e28616e-2c71-45e4-b93c-e76014c89d0d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7e28616e-2c71-45e4-b93c-e76014c89d0d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.281465 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e28616e-2c71-45e4-b93c-e76014c89d0d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7e28616e-2c71-45e4-b93c-e76014c89d0d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.383146 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e28616e-2c71-45e4-b93c-e76014c89d0d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7e28616e-2c71-45e4-b93c-e76014c89d0d\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.383272 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e28616e-2c71-45e4-b93c-e76014c89d0d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7e28616e-2c71-45e4-b93c-e76014c89d0d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.383368 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e28616e-2c71-45e4-b93c-e76014c89d0d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7e28616e-2c71-45e4-b93c-e76014c89d0d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.406364 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e28616e-2c71-45e4-b93c-e76014c89d0d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7e28616e-2c71-45e4-b93c-e76014c89d0d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.443766 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-m6q6r" Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.451789 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-m6q6r" Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.512049 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.553709 4821 patch_prober.go:28] interesting pod/router-default-5444994796-4ntmx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 18:27:14 crc kubenswrapper[4821]: [-]has-synced failed: reason withheld Mar 09 18:27:14 crc kubenswrapper[4821]: [+]process-running ok Mar 09 18:27:14 crc kubenswrapper[4821]: healthz check failed Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.554034 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4ntmx" podUID="a28f17d7-69dc-4014-a347-a26f55d55ace" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.682772 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79567c6bd7-nzp8k" event={"ID":"00edbc1a-39e7-43a3-8534-6b645d2859f5","Type":"ContainerStarted","Data":"2c0a1e0f79d2b9f63ec5e4717445820868bc3bfbe4dca1eaf72fe69e380c94e3"} Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.682812 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79567c6bd7-nzp8k" event={"ID":"00edbc1a-39e7-43a3-8534-6b645d2859f5","Type":"ContainerStarted","Data":"a1c1213e156f97dc8302326aae0eeaf89bedcef2c1b99bf8c6466084a920494c"} Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.683511 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-79567c6bd7-nzp8k" Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.684563 4821 patch_prober.go:28] interesting pod/controller-manager-79567c6bd7-nzp8k container/controller-manager 
namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused" start-of-body= Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.684594 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-79567c6bd7-nzp8k" podUID="00edbc1a-39e7-43a3-8534-6b645d2859f5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused" Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.692048 4821 generic.go:334] "Generic (PLEG): container finished" podID="70ed8562-ec3e-49a0-8ccd-885eea90e9c1" containerID="240788ce8a383b28c4bc5e8a7d15974180644722157d8bc64efe50a8238166af" exitCode=0 Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.692175 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sn8zk" event={"ID":"70ed8562-ec3e-49a0-8ccd-885eea90e9c1","Type":"ContainerDied","Data":"240788ce8a383b28c4bc5e8a7d15974180644722157d8bc64efe50a8238166af"} Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.692203 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sn8zk" event={"ID":"70ed8562-ec3e-49a0-8ccd-885eea90e9c1","Type":"ContainerStarted","Data":"2753c51f03bef59bd9f722e0306c0931942762d524f95d8b94ad6cfa70ba0ef1"} Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.697946 4821 generic.go:334] "Generic (PLEG): container finished" podID="902f7680-4f21-43d9-9ca1-16e5746556a9" containerID="bf39a453006e0d4fd0172c6acf8b72428598d25b18f4fe39a12991a74266e90b" exitCode=0 Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.698006 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kpr7q" 
event={"ID":"902f7680-4f21-43d9-9ca1-16e5746556a9","Type":"ContainerDied","Data":"bf39a453006e0d4fd0172c6acf8b72428598d25b18f4fe39a12991a74266e90b"} Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.698034 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kpr7q" event={"ID":"902f7680-4f21-43d9-9ca1-16e5746556a9","Type":"ContainerStarted","Data":"d3fe2292d0664f73b1e880a97e12b57b84ed5d6a3204167411877b8cb86bbd1c"} Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.702425 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-79567c6bd7-nzp8k" podStartSLOduration=6.702409785 podStartE2EDuration="6.702409785s" podCreationTimestamp="2026-03-09 18:27:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:14.701082457 +0000 UTC m=+171.862458323" watchObservedRunningTime="2026-03-09 18:27:14.702409785 +0000 UTC m=+171.863785641" Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.705936 4821 generic.go:334] "Generic (PLEG): container finished" podID="423c9815-b133-45cf-bc0c-3f6291e1106b" containerID="a82fd39106e2a9fab8d6a4d81d8a0ef8abdce6ad0d17bf0e233d18efb0f05cd4" exitCode=0 Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.707369 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzkk5" event={"ID":"423c9815-b133-45cf-bc0c-3f6291e1106b","Type":"ContainerDied","Data":"a82fd39106e2a9fab8d6a4d81d8a0ef8abdce6ad0d17bf0e233d18efb0f05cd4"} Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.707404 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzkk5" event={"ID":"423c9815-b133-45cf-bc0c-3f6291e1106b","Type":"ContainerStarted","Data":"3dc54da9f889046a75ab954289b193c3cdc600bccb9b3e7e9ac119baef89a18f"} Mar 09 18:27:14 crc 
kubenswrapper[4821]: I0309 18:27:14.986411 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nsq8f"] Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.987335 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nsq8f" Mar 09 18:27:14 crc kubenswrapper[4821]: I0309 18:27:14.989623 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.005223 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsq8f"] Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.098039 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rqw7\" (UniqueName: \"kubernetes.io/projected/132d5224-2c4a-4b22-9e2f-b50b98e3b693-kube-api-access-6rqw7\") pod \"redhat-marketplace-nsq8f\" (UID: \"132d5224-2c4a-4b22-9e2f-b50b98e3b693\") " pod="openshift-marketplace/redhat-marketplace-nsq8f" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.098106 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/132d5224-2c4a-4b22-9e2f-b50b98e3b693-utilities\") pod \"redhat-marketplace-nsq8f\" (UID: \"132d5224-2c4a-4b22-9e2f-b50b98e3b693\") " pod="openshift-marketplace/redhat-marketplace-nsq8f" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.098136 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/132d5224-2c4a-4b22-9e2f-b50b98e3b693-catalog-content\") pod \"redhat-marketplace-nsq8f\" (UID: \"132d5224-2c4a-4b22-9e2f-b50b98e3b693\") " pod="openshift-marketplace/redhat-marketplace-nsq8f" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 
18:27:15.199065 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rqw7\" (UniqueName: \"kubernetes.io/projected/132d5224-2c4a-4b22-9e2f-b50b98e3b693-kube-api-access-6rqw7\") pod \"redhat-marketplace-nsq8f\" (UID: \"132d5224-2c4a-4b22-9e2f-b50b98e3b693\") " pod="openshift-marketplace/redhat-marketplace-nsq8f" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.199170 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/132d5224-2c4a-4b22-9e2f-b50b98e3b693-utilities\") pod \"redhat-marketplace-nsq8f\" (UID: \"132d5224-2c4a-4b22-9e2f-b50b98e3b693\") " pod="openshift-marketplace/redhat-marketplace-nsq8f" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.199194 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/132d5224-2c4a-4b22-9e2f-b50b98e3b693-catalog-content\") pod \"redhat-marketplace-nsq8f\" (UID: \"132d5224-2c4a-4b22-9e2f-b50b98e3b693\") " pod="openshift-marketplace/redhat-marketplace-nsq8f" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.199911 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/132d5224-2c4a-4b22-9e2f-b50b98e3b693-utilities\") pod \"redhat-marketplace-nsq8f\" (UID: \"132d5224-2c4a-4b22-9e2f-b50b98e3b693\") " pod="openshift-marketplace/redhat-marketplace-nsq8f" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.200044 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/132d5224-2c4a-4b22-9e2f-b50b98e3b693-catalog-content\") pod \"redhat-marketplace-nsq8f\" (UID: \"132d5224-2c4a-4b22-9e2f-b50b98e3b693\") " pod="openshift-marketplace/redhat-marketplace-nsq8f" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.219620 4821 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6rqw7\" (UniqueName: \"kubernetes.io/projected/132d5224-2c4a-4b22-9e2f-b50b98e3b693-kube-api-access-6rqw7\") pod \"redhat-marketplace-nsq8f\" (UID: \"132d5224-2c4a-4b22-9e2f-b50b98e3b693\") " pod="openshift-marketplace/redhat-marketplace-nsq8f" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.301095 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nsq8f" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.383831 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5npmr"] Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.384869 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5npmr" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.428761 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5npmr"] Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.504027 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff9182f-57eb-4efa-b7c3-ae63d66457df-catalog-content\") pod \"redhat-marketplace-5npmr\" (UID: \"1ff9182f-57eb-4efa-b7c3-ae63d66457df\") " pod="openshift-marketplace/redhat-marketplace-5npmr" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.504132 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff9182f-57eb-4efa-b7c3-ae63d66457df-utilities\") pod \"redhat-marketplace-5npmr\" (UID: \"1ff9182f-57eb-4efa-b7c3-ae63d66457df\") " pod="openshift-marketplace/redhat-marketplace-5npmr" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.504191 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-lxdh6\" (UniqueName: \"kubernetes.io/projected/1ff9182f-57eb-4efa-b7c3-ae63d66457df-kube-api-access-lxdh6\") pod \"redhat-marketplace-5npmr\" (UID: \"1ff9182f-57eb-4efa-b7c3-ae63d66457df\") " pod="openshift-marketplace/redhat-marketplace-5npmr" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.546676 4821 patch_prober.go:28] interesting pod/router-default-5444994796-4ntmx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 18:27:15 crc kubenswrapper[4821]: [-]has-synced failed: reason withheld Mar 09 18:27:15 crc kubenswrapper[4821]: [+]process-running ok Mar 09 18:27:15 crc kubenswrapper[4821]: healthz check failed Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.546791 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4ntmx" podUID="a28f17d7-69dc-4014-a347-a26f55d55ace" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.605680 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff9182f-57eb-4efa-b7c3-ae63d66457df-catalog-content\") pod \"redhat-marketplace-5npmr\" (UID: \"1ff9182f-57eb-4efa-b7c3-ae63d66457df\") " pod="openshift-marketplace/redhat-marketplace-5npmr" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.605782 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff9182f-57eb-4efa-b7c3-ae63d66457df-utilities\") pod \"redhat-marketplace-5npmr\" (UID: \"1ff9182f-57eb-4efa-b7c3-ae63d66457df\") " pod="openshift-marketplace/redhat-marketplace-5npmr" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.605826 4821 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-lxdh6\" (UniqueName: \"kubernetes.io/projected/1ff9182f-57eb-4efa-b7c3-ae63d66457df-kube-api-access-lxdh6\") pod \"redhat-marketplace-5npmr\" (UID: \"1ff9182f-57eb-4efa-b7c3-ae63d66457df\") " pod="openshift-marketplace/redhat-marketplace-5npmr" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.606274 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff9182f-57eb-4efa-b7c3-ae63d66457df-catalog-content\") pod \"redhat-marketplace-5npmr\" (UID: \"1ff9182f-57eb-4efa-b7c3-ae63d66457df\") " pod="openshift-marketplace/redhat-marketplace-5npmr" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.606376 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff9182f-57eb-4efa-b7c3-ae63d66457df-utilities\") pod \"redhat-marketplace-5npmr\" (UID: \"1ff9182f-57eb-4efa-b7c3-ae63d66457df\") " pod="openshift-marketplace/redhat-marketplace-5npmr" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.625111 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxdh6\" (UniqueName: \"kubernetes.io/projected/1ff9182f-57eb-4efa-b7c3-ae63d66457df-kube-api-access-lxdh6\") pod \"redhat-marketplace-5npmr\" (UID: \"1ff9182f-57eb-4efa-b7c3-ae63d66457df\") " pod="openshift-marketplace/redhat-marketplace-5npmr" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.663395 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.664059 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.675381 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.675732 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.678598 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.719986 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5npmr" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.722188 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-79567c6bd7-nzp8k" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.749133 4821 patch_prober.go:28] interesting pod/downloads-7954f5f757-295wb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.749176 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-295wb" podUID="f078c2bb-b4ba-42a0-a66c-705c19866fec" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.749487 4821 patch_prober.go:28] interesting pod/downloads-7954f5f757-295wb container/download-server namespace/openshift-console: Liveness probe status=failure output="Get 
\"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.749506 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-295wb" podUID="f078c2bb-b4ba-42a0-a66c-705c19866fec" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.809134 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e3d6bf4f-253c-4694-ba14-62abd7d74285-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e3d6bf4f-253c-4694-ba14-62abd7d74285\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.809345 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3d6bf4f-253c-4694-ba14-62abd7d74285-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e3d6bf4f-253c-4694-ba14-62abd7d74285\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.910202 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3d6bf4f-253c-4694-ba14-62abd7d74285-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e3d6bf4f-253c-4694-ba14-62abd7d74285\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.910295 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e3d6bf4f-253c-4694-ba14-62abd7d74285-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"e3d6bf4f-253c-4694-ba14-62abd7d74285\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.910407 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e3d6bf4f-253c-4694-ba14-62abd7d74285-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e3d6bf4f-253c-4694-ba14-62abd7d74285\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.938033 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3d6bf4f-253c-4694-ba14-62abd7d74285-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e3d6bf4f-253c-4694-ba14-62abd7d74285\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.987050 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 09 18:27:15 crc kubenswrapper[4821]: I0309 18:27:15.995643 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2h8qw"] Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:15.996634 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2h8qw" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.006686 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.028685 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2h8qw"] Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.117015 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abf94109-7b6a-4e4f-a178-42e7d6fc45e0-utilities\") pod \"redhat-operators-2h8qw\" (UID: \"abf94109-7b6a-4e4f-a178-42e7d6fc45e0\") " pod="openshift-marketplace/redhat-operators-2h8qw" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.117070 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abf94109-7b6a-4e4f-a178-42e7d6fc45e0-catalog-content\") pod \"redhat-operators-2h8qw\" (UID: \"abf94109-7b6a-4e4f-a178-42e7d6fc45e0\") " pod="openshift-marketplace/redhat-operators-2h8qw" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.117091 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz6f6\" (UniqueName: \"kubernetes.io/projected/abf94109-7b6a-4e4f-a178-42e7d6fc45e0-kube-api-access-qz6f6\") pod \"redhat-operators-2h8qw\" (UID: \"abf94109-7b6a-4e4f-a178-42e7d6fc45e0\") " pod="openshift-marketplace/redhat-operators-2h8qw" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.117467 4821 ???:1] "http: TLS handshake error from 192.168.126.11:46106: no serving certificate available for the kubelet" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.133430 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.148528 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bsmz7" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.194154 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pbkl7"] Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.195118 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pbkl7" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.217282 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pbkl7"] Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.217909 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abf94109-7b6a-4e4f-a178-42e7d6fc45e0-utilities\") pod \"redhat-operators-2h8qw\" (UID: \"abf94109-7b6a-4e4f-a178-42e7d6fc45e0\") " pod="openshift-marketplace/redhat-operators-2h8qw" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.218264 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abf94109-7b6a-4e4f-a178-42e7d6fc45e0-utilities\") pod \"redhat-operators-2h8qw\" (UID: \"abf94109-7b6a-4e4f-a178-42e7d6fc45e0\") " pod="openshift-marketplace/redhat-operators-2h8qw" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.221040 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abf94109-7b6a-4e4f-a178-42e7d6fc45e0-catalog-content\") pod \"redhat-operators-2h8qw\" (UID: \"abf94109-7b6a-4e4f-a178-42e7d6fc45e0\") " pod="openshift-marketplace/redhat-operators-2h8qw" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.221086 4821 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz6f6\" (UniqueName: \"kubernetes.io/projected/abf94109-7b6a-4e4f-a178-42e7d6fc45e0-kube-api-access-qz6f6\") pod \"redhat-operators-2h8qw\" (UID: \"abf94109-7b6a-4e4f-a178-42e7d6fc45e0\") " pod="openshift-marketplace/redhat-operators-2h8qw" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.221255 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ac2c88b-a0bc-482c-90fa-165d30f045e8-metrics-certs\") pod \"network-metrics-daemon-lf7bd\" (UID: \"9ac2c88b-a0bc-482c-90fa-165d30f045e8\") " pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.222771 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abf94109-7b6a-4e4f-a178-42e7d6fc45e0-catalog-content\") pod \"redhat-operators-2h8qw\" (UID: \"abf94109-7b6a-4e4f-a178-42e7d6fc45e0\") " pod="openshift-marketplace/redhat-operators-2h8qw" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.229731 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ac2c88b-a0bc-482c-90fa-165d30f045e8-metrics-certs\") pod \"network-metrics-daemon-lf7bd\" (UID: \"9ac2c88b-a0bc-482c-90fa-165d30f045e8\") " pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.251557 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz6f6\" (UniqueName: \"kubernetes.io/projected/abf94109-7b6a-4e4f-a178-42e7d6fc45e0-kube-api-access-qz6f6\") pod \"redhat-operators-2h8qw\" (UID: \"abf94109-7b6a-4e4f-a178-42e7d6fc45e0\") " pod="openshift-marketplace/redhat-operators-2h8qw" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.313623 4821 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2h8qw" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.324889 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faa3533b-267b-44a9-b949-af82368bf7e3-utilities\") pod \"redhat-operators-pbkl7\" (UID: \"faa3533b-267b-44a9-b949-af82368bf7e3\") " pod="openshift-marketplace/redhat-operators-pbkl7" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.324968 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faa3533b-267b-44a9-b949-af82368bf7e3-catalog-content\") pod \"redhat-operators-pbkl7\" (UID: \"faa3533b-267b-44a9-b949-af82368bf7e3\") " pod="openshift-marketplace/redhat-operators-pbkl7" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.325069 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nmzw\" (UniqueName: \"kubernetes.io/projected/faa3533b-267b-44a9-b949-af82368bf7e3-kube-api-access-5nmzw\") pod \"redhat-operators-pbkl7\" (UID: \"faa3533b-267b-44a9-b949-af82368bf7e3\") " pod="openshift-marketplace/redhat-operators-pbkl7" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.425854 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faa3533b-267b-44a9-b949-af82368bf7e3-catalog-content\") pod \"redhat-operators-pbkl7\" (UID: \"faa3533b-267b-44a9-b949-af82368bf7e3\") " pod="openshift-marketplace/redhat-operators-pbkl7" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.425957 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nmzw\" (UniqueName: \"kubernetes.io/projected/faa3533b-267b-44a9-b949-af82368bf7e3-kube-api-access-5nmzw\") pod 
\"redhat-operators-pbkl7\" (UID: \"faa3533b-267b-44a9-b949-af82368bf7e3\") " pod="openshift-marketplace/redhat-operators-pbkl7" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.426000 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faa3533b-267b-44a9-b949-af82368bf7e3-utilities\") pod \"redhat-operators-pbkl7\" (UID: \"faa3533b-267b-44a9-b949-af82368bf7e3\") " pod="openshift-marketplace/redhat-operators-pbkl7" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.426476 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faa3533b-267b-44a9-b949-af82368bf7e3-utilities\") pod \"redhat-operators-pbkl7\" (UID: \"faa3533b-267b-44a9-b949-af82368bf7e3\") " pod="openshift-marketplace/redhat-operators-pbkl7" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.426555 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faa3533b-267b-44a9-b949-af82368bf7e3-catalog-content\") pod \"redhat-operators-pbkl7\" (UID: \"faa3533b-267b-44a9-b949-af82368bf7e3\") " pod="openshift-marketplace/redhat-operators-pbkl7" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.450475 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nmzw\" (UniqueName: \"kubernetes.io/projected/faa3533b-267b-44a9-b949-af82368bf7e3-kube-api-access-5nmzw\") pod \"redhat-operators-pbkl7\" (UID: \"faa3533b-267b-44a9-b949-af82368bf7e3\") " pod="openshift-marketplace/redhat-operators-pbkl7" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.464225 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lf7bd" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.509417 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pbkl7" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.543974 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-4ntmx" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.546281 4821 patch_prober.go:28] interesting pod/router-default-5444994796-4ntmx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 18:27:16 crc kubenswrapper[4821]: [-]has-synced failed: reason withheld Mar 09 18:27:16 crc kubenswrapper[4821]: [+]process-running ok Mar 09 18:27:16 crc kubenswrapper[4821]: healthz check failed Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.546333 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4ntmx" podUID="a28f17d7-69dc-4014-a347-a26f55d55ace" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.567524 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-x9nnw" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.567589 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-x9nnw" Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.574575 4821 patch_prober.go:28] interesting pod/console-f9d7485db-x9nnw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Mar 09 18:27:16 crc kubenswrapper[4821]: I0309 18:27:16.574658 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-x9nnw" podUID="8d862d47-cde7-4a39-aafe-3e2cf7ef451f" 
containerName="console" probeResult="failure" output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" Mar 09 18:27:16 crc kubenswrapper[4821]: E0309 18:27:16.781678 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0297c9480d2443eb8d56cbb4d92d376dab02b87b244db7afebe59a889188aa16" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 09 18:27:16 crc kubenswrapper[4821]: E0309 18:27:16.785719 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0297c9480d2443eb8d56cbb4d92d376dab02b87b244db7afebe59a889188aa16" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 09 18:27:16 crc kubenswrapper[4821]: E0309 18:27:16.787097 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0297c9480d2443eb8d56cbb4d92d376dab02b87b244db7afebe59a889188aa16" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 09 18:27:16 crc kubenswrapper[4821]: E0309 18:27:16.787160 4821 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-kzlwq" podUID="04e01207-4a95-4a32-84df-2d4c69d71fbf" containerName="kube-multus-additional-cni-plugins" Mar 09 18:27:17 crc kubenswrapper[4821]: I0309 18:27:17.586954 4821 patch_prober.go:28] interesting pod/router-default-5444994796-4ntmx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 09 18:27:17 crc kubenswrapper[4821]: [-]has-synced failed: reason withheld Mar 09 18:27:17 crc kubenswrapper[4821]: [+]process-running ok Mar 09 18:27:17 crc kubenswrapper[4821]: healthz check failed Mar 09 18:27:17 crc kubenswrapper[4821]: I0309 18:27:17.587299 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4ntmx" podUID="a28f17d7-69dc-4014-a347-a26f55d55ace" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 18:27:18 crc kubenswrapper[4821]: I0309 18:27:18.157720 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-c257s" Mar 09 18:27:18 crc kubenswrapper[4821]: I0309 18:27:18.546571 4821 patch_prober.go:28] interesting pod/router-default-5444994796-4ntmx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 18:27:18 crc kubenswrapper[4821]: [-]has-synced failed: reason withheld Mar 09 18:27:18 crc kubenswrapper[4821]: [+]process-running ok Mar 09 18:27:18 crc kubenswrapper[4821]: healthz check failed Mar 09 18:27:18 crc kubenswrapper[4821]: I0309 18:27:18.546635 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4ntmx" podUID="a28f17d7-69dc-4014-a347-a26f55d55ace" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 18:27:18 crc kubenswrapper[4821]: I0309 18:27:18.737520 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-gbjt5_a663703c-95db-4871-b31c-00951488935d/cluster-samples-operator/0.log" Mar 09 18:27:18 crc kubenswrapper[4821]: I0309 18:27:18.737593 4821 generic.go:334] "Generic (PLEG): container finished" 
podID="a663703c-95db-4871-b31c-00951488935d" containerID="46cc8bb192a11de4fd49732f3391a4bf61ecce7fad90acc24c22150ceec3bbd1" exitCode=2 Mar 09 18:27:18 crc kubenswrapper[4821]: I0309 18:27:18.737646 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gbjt5" event={"ID":"a663703c-95db-4871-b31c-00951488935d","Type":"ContainerDied","Data":"46cc8bb192a11de4fd49732f3391a4bf61ecce7fad90acc24c22150ceec3bbd1"} Mar 09 18:27:18 crc kubenswrapper[4821]: I0309 18:27:18.738346 4821 scope.go:117] "RemoveContainer" containerID="46cc8bb192a11de4fd49732f3391a4bf61ecce7fad90acc24c22150ceec3bbd1" Mar 09 18:27:19 crc kubenswrapper[4821]: I0309 18:27:19.546819 4821 patch_prober.go:28] interesting pod/router-default-5444994796-4ntmx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 18:27:19 crc kubenswrapper[4821]: [-]has-synced failed: reason withheld Mar 09 18:27:19 crc kubenswrapper[4821]: [+]process-running ok Mar 09 18:27:19 crc kubenswrapper[4821]: healthz check failed Mar 09 18:27:19 crc kubenswrapper[4821]: I0309 18:27:19.547081 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4ntmx" podUID="a28f17d7-69dc-4014-a347-a26f55d55ace" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 18:27:20 crc kubenswrapper[4821]: I0309 18:27:20.546411 4821 patch_prober.go:28] interesting pod/router-default-5444994796-4ntmx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 18:27:20 crc kubenswrapper[4821]: [+]has-synced ok Mar 09 18:27:20 crc kubenswrapper[4821]: [+]process-running ok Mar 09 18:27:20 crc kubenswrapper[4821]: healthz check failed Mar 09 
18:27:20 crc kubenswrapper[4821]: I0309 18:27:20.546522 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4ntmx" podUID="a28f17d7-69dc-4014-a347-a26f55d55ace" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 18:27:20 crc kubenswrapper[4821]: I0309 18:27:20.888146 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29551335-b9jvf" Mar 09 18:27:20 crc kubenswrapper[4821]: I0309 18:27:20.943088 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c-secret-volume\") pod \"aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c\" (UID: \"aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c\") " Mar 09 18:27:20 crc kubenswrapper[4821]: I0309 18:27:20.943159 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rppn9\" (UniqueName: \"kubernetes.io/projected/aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c-kube-api-access-rppn9\") pod \"aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c\" (UID: \"aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c\") " Mar 09 18:27:20 crc kubenswrapper[4821]: I0309 18:27:20.943213 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c-config-volume\") pod \"aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c\" (UID: \"aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c\") " Mar 09 18:27:20 crc kubenswrapper[4821]: I0309 18:27:20.944508 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c-config-volume" (OuterVolumeSpecName: "config-volume") pod "aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c" (UID: "aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:27:20 crc kubenswrapper[4821]: I0309 18:27:20.950476 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c" (UID: "aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:27:20 crc kubenswrapper[4821]: I0309 18:27:20.950780 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c-kube-api-access-rppn9" (OuterVolumeSpecName: "kube-api-access-rppn9") pod "aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c" (UID: "aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c"). InnerVolumeSpecName "kube-api-access-rppn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:27:21 crc kubenswrapper[4821]: I0309 18:27:21.044950 4821 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c-config-volume\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:21 crc kubenswrapper[4821]: I0309 18:27:21.044994 4821 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:21 crc kubenswrapper[4821]: I0309 18:27:21.045005 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rppn9\" (UniqueName: \"kubernetes.io/projected/aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c-kube-api-access-rppn9\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:21 crc kubenswrapper[4821]: I0309 18:27:21.284213 4821 ???:1] "http: TLS handshake error from 192.168.126.11:46120: no serving certificate available for the kubelet" Mar 09 18:27:21 crc kubenswrapper[4821]: I0309 
18:27:21.481650 4821 ???:1] "http: TLS handshake error from 192.168.126.11:46126: no serving certificate available for the kubelet" Mar 09 18:27:21 crc kubenswrapper[4821]: I0309 18:27:21.546741 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-4ntmx" Mar 09 18:27:21 crc kubenswrapper[4821]: I0309 18:27:21.549422 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-4ntmx" Mar 09 18:27:21 crc kubenswrapper[4821]: I0309 18:27:21.760010 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29551335-b9jvf" Mar 09 18:27:21 crc kubenswrapper[4821]: I0309 18:27:21.760403 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29551335-b9jvf" event={"ID":"aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c","Type":"ContainerDied","Data":"3f42431bb22dbff8bb1149cc07e5488c14ced97c78d612c2dd7f3a42ba180464"} Mar 09 18:27:21 crc kubenswrapper[4821]: I0309 18:27:21.760427 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f42431bb22dbff8bb1149cc07e5488c14ced97c78d612c2dd7f3a42ba180464" Mar 09 18:27:24 crc kubenswrapper[4821]: I0309 18:27:24.955653 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 09 18:27:25 crc kubenswrapper[4821]: I0309 18:27:25.767444 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-295wb" Mar 09 18:27:26 crc kubenswrapper[4821]: I0309 18:27:26.570499 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-x9nnw" Mar 09 18:27:26 crc kubenswrapper[4821]: I0309 18:27:26.575266 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console/console-f9d7485db-x9nnw" Mar 09 18:27:26 crc kubenswrapper[4821]: E0309 18:27:26.781266 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0297c9480d2443eb8d56cbb4d92d376dab02b87b244db7afebe59a889188aa16" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 09 18:27:26 crc kubenswrapper[4821]: E0309 18:27:26.783424 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0297c9480d2443eb8d56cbb4d92d376dab02b87b244db7afebe59a889188aa16" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 09 18:27:26 crc kubenswrapper[4821]: E0309 18:27:26.785413 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0297c9480d2443eb8d56cbb4d92d376dab02b87b244db7afebe59a889188aa16" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 09 18:27:26 crc kubenswrapper[4821]: E0309 18:27:26.785440 4821 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-kzlwq" podUID="04e01207-4a95-4a32-84df-2d4c69d71fbf" containerName="kube-multus-additional-cni-plugins" Mar 09 18:27:28 crc kubenswrapper[4821]: I0309 18:27:28.447177 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-79567c6bd7-nzp8k"] Mar 09 18:27:28 crc kubenswrapper[4821]: I0309 18:27:28.447400 4821 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-controller-manager/controller-manager-79567c6bd7-nzp8k" podUID="00edbc1a-39e7-43a3-8534-6b645d2859f5" containerName="controller-manager" containerID="cri-o://2c0a1e0f79d2b9f63ec5e4717445820868bc3bfbe4dca1eaf72fe69e380c94e3" gracePeriod=30 Mar 09 18:27:28 crc kubenswrapper[4821]: I0309 18:27:28.476153 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84964ccc5c-8jqgl"] Mar 09 18:27:28 crc kubenswrapper[4821]: I0309 18:27:28.477224 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-84964ccc5c-8jqgl" podUID="b226d5cf-72e1-42b2-85ce-fcb78889ae4c" containerName="route-controller-manager" containerID="cri-o://8a42f38f95da6665e285dedd039af5293f1031baddf5781825a5c29952c213c8" gracePeriod=30 Mar 09 18:27:31 crc kubenswrapper[4821]: I0309 18:27:31.557971 4821 ???:1] "http: TLS handshake error from 192.168.126.11:37454: no serving certificate available for the kubelet" Mar 09 18:27:31 crc kubenswrapper[4821]: I0309 18:27:31.632517 4821 patch_prober.go:28] interesting pod/route-controller-manager-84964ccc5c-8jqgl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.45:8443/healthz\": dial tcp 10.217.0.45:8443: connect: connection refused" start-of-body= Mar 09 18:27:31 crc kubenswrapper[4821]: I0309 18:27:31.632964 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-84964ccc5c-8jqgl" podUID="b226d5cf-72e1-42b2-85ce-fcb78889ae4c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.45:8443/healthz\": dial tcp 10.217.0.45:8443: connect: connection refused" Mar 09 18:27:32 crc kubenswrapper[4821]: I0309 18:27:32.862004 4821 generic.go:334] "Generic (PLEG): container finished" 
podID="00edbc1a-39e7-43a3-8534-6b645d2859f5" containerID="2c0a1e0f79d2b9f63ec5e4717445820868bc3bfbe4dca1eaf72fe69e380c94e3" exitCode=0 Mar 09 18:27:32 crc kubenswrapper[4821]: I0309 18:27:32.862136 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79567c6bd7-nzp8k" event={"ID":"00edbc1a-39e7-43a3-8534-6b645d2859f5","Type":"ContainerDied","Data":"2c0a1e0f79d2b9f63ec5e4717445820868bc3bfbe4dca1eaf72fe69e380c94e3"} Mar 09 18:27:33 crc kubenswrapper[4821]: I0309 18:27:33.136706 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:27:33 crc kubenswrapper[4821]: I0309 18:27:33.917030 4821 patch_prober.go:28] interesting pod/controller-manager-79567c6bd7-nzp8k container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused" start-of-body= Mar 09 18:27:33 crc kubenswrapper[4821]: I0309 18:27:33.917287 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-79567c6bd7-nzp8k" podUID="00edbc1a-39e7-43a3-8534-6b645d2859f5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused" Mar 09 18:27:33 crc kubenswrapper[4821]: I0309 18:27:33.927713 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5npmr"] Mar 09 18:27:33 crc kubenswrapper[4821]: I0309 18:27:33.964226 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2h8qw"] Mar 09 18:27:34 crc kubenswrapper[4821]: I0309 18:27:34.015502 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pbkl7"] Mar 09 18:27:34 crc kubenswrapper[4821]: I0309 
18:27:34.018180 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 09 18:27:34 crc kubenswrapper[4821]: I0309 18:27:34.111544 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsq8f"] Mar 09 18:27:36 crc kubenswrapper[4821]: E0309 18:27:36.780487 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0297c9480d2443eb8d56cbb4d92d376dab02b87b244db7afebe59a889188aa16" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 09 18:27:36 crc kubenswrapper[4821]: E0309 18:27:36.782679 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0297c9480d2443eb8d56cbb4d92d376dab02b87b244db7afebe59a889188aa16" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 09 18:27:36 crc kubenswrapper[4821]: E0309 18:27:36.784178 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0297c9480d2443eb8d56cbb4d92d376dab02b87b244db7afebe59a889188aa16" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 09 18:27:36 crc kubenswrapper[4821]: E0309 18:27:36.784264 4821 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-kzlwq" podUID="04e01207-4a95-4a32-84df-2d4c69d71fbf" containerName="kube-multus-additional-cni-plugins" Mar 09 18:27:37 crc kubenswrapper[4821]: I0309 18:27:37.897917 4821 generic.go:334] "Generic (PLEG): container finished" 
podID="b226d5cf-72e1-42b2-85ce-fcb78889ae4c" containerID="8a42f38f95da6665e285dedd039af5293f1031baddf5781825a5c29952c213c8" exitCode=0 Mar 09 18:27:37 crc kubenswrapper[4821]: I0309 18:27:37.897966 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84964ccc5c-8jqgl" event={"ID":"b226d5cf-72e1-42b2-85ce-fcb78889ae4c","Type":"ContainerDied","Data":"8a42f38f95da6665e285dedd039af5293f1031baddf5781825a5c29952c213c8"} Mar 09 18:27:37 crc kubenswrapper[4821]: I0309 18:27:37.899274 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e3d6bf4f-253c-4694-ba14-62abd7d74285","Type":"ContainerStarted","Data":"2ef7997827c57a36c6bf1f782cdd5e0b951001cde25c0d0a8a88d0e192360ffb"} Mar 09 18:27:39 crc kubenswrapper[4821]: W0309 18:27:39.494193 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ff9182f_57eb_4efa_b7c3_ae63d66457df.slice/crio-edb3a54553a03a37ca0248c59a99e7b74754fef2d58951418fa734884937c51b WatchSource:0}: Error finding container edb3a54553a03a37ca0248c59a99e7b74754fef2d58951418fa734884937c51b: Status 404 returned error can't find the container with id edb3a54553a03a37ca0248c59a99e7b74754fef2d58951418fa734884937c51b Mar 09 18:27:39 crc kubenswrapper[4821]: E0309 18:27:39.538203 4821 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 09 18:27:39 crc kubenswrapper[4821]: E0309 18:27:39.538553 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5jw8d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-nk4bg_openshift-marketplace(07a1db8f-6912-4ff8-9943-24c334031dfb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 09 18:27:39 crc kubenswrapper[4821]: E0309 18:27:39.539717 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-nk4bg" podUID="07a1db8f-6912-4ff8-9943-24c334031dfb" 
Mar 09 18:27:39 crc kubenswrapper[4821]: I0309 18:27:39.908157 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsq8f" event={"ID":"132d5224-2c4a-4b22-9e2f-b50b98e3b693","Type":"ContainerStarted","Data":"44707abfc8043739daab71658c10299246b9981b6f781130e441157fead4c083"} Mar 09 18:27:39 crc kubenswrapper[4821]: I0309 18:27:39.908952 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7e28616e-2c71-45e4-b93c-e76014c89d0d","Type":"ContainerStarted","Data":"0878244dabde9e52ccc1fafc7be221710c222d9ce71b70df1e4d6fdbf1a656a1"} Mar 09 18:27:39 crc kubenswrapper[4821]: I0309 18:27:39.910029 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5npmr" event={"ID":"1ff9182f-57eb-4efa-b7c3-ae63d66457df","Type":"ContainerStarted","Data":"edb3a54553a03a37ca0248c59a99e7b74754fef2d58951418fa734884937c51b"} Mar 09 18:27:39 crc kubenswrapper[4821]: I0309 18:27:39.912693 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2h8qw" event={"ID":"abf94109-7b6a-4e4f-a178-42e7d6fc45e0","Type":"ContainerStarted","Data":"43be8128b41711a1b74e065d7322472332257449ab4a307ea698e7039e2243ab"} Mar 09 18:27:39 crc kubenswrapper[4821]: I0309 18:27:39.915084 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-gbjt5_a663703c-95db-4871-b31c-00951488935d/cluster-samples-operator/0.log" Mar 09 18:27:39 crc kubenswrapper[4821]: I0309 18:27:39.915220 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gbjt5" event={"ID":"a663703c-95db-4871-b31c-00951488935d","Type":"ContainerStarted","Data":"e8612b29054579d4f906cf863e6caf414a60e81ed11d2e847f871d5f50da3771"} Mar 09 18:27:39 crc kubenswrapper[4821]: I0309 18:27:39.915970 4821 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbkl7" event={"ID":"faa3533b-267b-44a9-b949-af82368bf7e3","Type":"ContainerStarted","Data":"92d04f7e4b7433e9c23bc6aeb71ddb5cd6eb943b840fec0723fce2c0283e0c2b"} Mar 09 18:27:41 crc kubenswrapper[4821]: E0309 18:27:41.370355 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-nk4bg" podUID="07a1db8f-6912-4ff8-9943-24c334031dfb" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.436572 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84964ccc5c-8jqgl" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.440993 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79567c6bd7-nzp8k" Mar 09 18:27:41 crc kubenswrapper[4821]: E0309 18:27:41.458905 4821 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 09 18:27:41 crc kubenswrapper[4821]: E0309 18:27:41.459087 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h7w5g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-sn8zk_openshift-marketplace(70ed8562-ec3e-49a0-8ccd-885eea90e9c1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 09 18:27:41 crc kubenswrapper[4821]: E0309 18:27:41.461810 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-sn8zk" podUID="70ed8562-ec3e-49a0-8ccd-885eea90e9c1" Mar 09 18:27:41 crc 
kubenswrapper[4821]: I0309 18:27:41.470763 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9b4545b7f-skgzb"] Mar 09 18:27:41 crc kubenswrapper[4821]: E0309 18:27:41.471016 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c" containerName="collect-profiles" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.471029 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c" containerName="collect-profiles" Mar 09 18:27:41 crc kubenswrapper[4821]: E0309 18:27:41.471044 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b226d5cf-72e1-42b2-85ce-fcb78889ae4c" containerName="route-controller-manager" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.471052 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="b226d5cf-72e1-42b2-85ce-fcb78889ae4c" containerName="route-controller-manager" Mar 09 18:27:41 crc kubenswrapper[4821]: E0309 18:27:41.471064 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00edbc1a-39e7-43a3-8534-6b645d2859f5" containerName="controller-manager" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.471071 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="00edbc1a-39e7-43a3-8534-6b645d2859f5" containerName="controller-manager" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.471187 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c" containerName="collect-profiles" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.471213 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="b226d5cf-72e1-42b2-85ce-fcb78889ae4c" containerName="route-controller-manager" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.471226 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="00edbc1a-39e7-43a3-8534-6b645d2859f5" 
containerName="controller-manager" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.471659 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9b4545b7f-skgzb" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.479336 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9b4545b7f-skgzb"] Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.522965 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00edbc1a-39e7-43a3-8534-6b645d2859f5-serving-cert\") pod \"00edbc1a-39e7-43a3-8534-6b645d2859f5\" (UID: \"00edbc1a-39e7-43a3-8534-6b645d2859f5\") " Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.523259 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b226d5cf-72e1-42b2-85ce-fcb78889ae4c-serving-cert\") pod \"b226d5cf-72e1-42b2-85ce-fcb78889ae4c\" (UID: \"b226d5cf-72e1-42b2-85ce-fcb78889ae4c\") " Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.523289 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b226d5cf-72e1-42b2-85ce-fcb78889ae4c-client-ca\") pod \"b226d5cf-72e1-42b2-85ce-fcb78889ae4c\" (UID: \"b226d5cf-72e1-42b2-85ce-fcb78889ae4c\") " Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.523390 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b226d5cf-72e1-42b2-85ce-fcb78889ae4c-config\") pod \"b226d5cf-72e1-42b2-85ce-fcb78889ae4c\" (UID: \"b226d5cf-72e1-42b2-85ce-fcb78889ae4c\") " Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.523412 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/00edbc1a-39e7-43a3-8534-6b645d2859f5-config\") pod \"00edbc1a-39e7-43a3-8534-6b645d2859f5\" (UID: \"00edbc1a-39e7-43a3-8534-6b645d2859f5\") " Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.523433 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f69sj\" (UniqueName: \"kubernetes.io/projected/b226d5cf-72e1-42b2-85ce-fcb78889ae4c-kube-api-access-f69sj\") pod \"b226d5cf-72e1-42b2-85ce-fcb78889ae4c\" (UID: \"b226d5cf-72e1-42b2-85ce-fcb78889ae4c\") " Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.523469 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00edbc1a-39e7-43a3-8534-6b645d2859f5-proxy-ca-bundles\") pod \"00edbc1a-39e7-43a3-8534-6b645d2859f5\" (UID: \"00edbc1a-39e7-43a3-8534-6b645d2859f5\") " Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.523500 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00edbc1a-39e7-43a3-8534-6b645d2859f5-client-ca\") pod \"00edbc1a-39e7-43a3-8534-6b645d2859f5\" (UID: \"00edbc1a-39e7-43a3-8534-6b645d2859f5\") " Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.523530 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckk8\" (UniqueName: \"kubernetes.io/projected/00edbc1a-39e7-43a3-8534-6b645d2859f5-kube-api-access-hckk8\") pod \"00edbc1a-39e7-43a3-8534-6b645d2859f5\" (UID: \"00edbc1a-39e7-43a3-8534-6b645d2859f5\") " Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.524073 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b226d5cf-72e1-42b2-85ce-fcb78889ae4c-client-ca" (OuterVolumeSpecName: "client-ca") pod "b226d5cf-72e1-42b2-85ce-fcb78889ae4c" (UID: "b226d5cf-72e1-42b2-85ce-fcb78889ae4c"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.524720 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s2kg\" (UniqueName: \"kubernetes.io/projected/e765b9fe-57ee-4233-8390-cdac364e3996-kube-api-access-2s2kg\") pod \"route-controller-manager-9b4545b7f-skgzb\" (UID: \"e765b9fe-57ee-4233-8390-cdac364e3996\") " pod="openshift-route-controller-manager/route-controller-manager-9b4545b7f-skgzb" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.524795 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e765b9fe-57ee-4233-8390-cdac364e3996-client-ca\") pod \"route-controller-manager-9b4545b7f-skgzb\" (UID: \"e765b9fe-57ee-4233-8390-cdac364e3996\") " pod="openshift-route-controller-manager/route-controller-manager-9b4545b7f-skgzb" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.524820 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e765b9fe-57ee-4233-8390-cdac364e3996-config\") pod \"route-controller-manager-9b4545b7f-skgzb\" (UID: \"e765b9fe-57ee-4233-8390-cdac364e3996\") " pod="openshift-route-controller-manager/route-controller-manager-9b4545b7f-skgzb" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.524854 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e765b9fe-57ee-4233-8390-cdac364e3996-serving-cert\") pod \"route-controller-manager-9b4545b7f-skgzb\" (UID: \"e765b9fe-57ee-4233-8390-cdac364e3996\") " pod="openshift-route-controller-manager/route-controller-manager-9b4545b7f-skgzb" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.524927 4821 reconciler_common.go:293] "Volume detached for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/b226d5cf-72e1-42b2-85ce-fcb78889ae4c-client-ca\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.525491 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b226d5cf-72e1-42b2-85ce-fcb78889ae4c-config" (OuterVolumeSpecName: "config") pod "b226d5cf-72e1-42b2-85ce-fcb78889ae4c" (UID: "b226d5cf-72e1-42b2-85ce-fcb78889ae4c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.525780 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00edbc1a-39e7-43a3-8534-6b645d2859f5-config" (OuterVolumeSpecName: "config") pod "00edbc1a-39e7-43a3-8534-6b645d2859f5" (UID: "00edbc1a-39e7-43a3-8534-6b645d2859f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.526406 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00edbc1a-39e7-43a3-8534-6b645d2859f5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "00edbc1a-39e7-43a3-8534-6b645d2859f5" (UID: "00edbc1a-39e7-43a3-8534-6b645d2859f5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.526450 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00edbc1a-39e7-43a3-8534-6b645d2859f5-client-ca" (OuterVolumeSpecName: "client-ca") pod "00edbc1a-39e7-43a3-8534-6b645d2859f5" (UID: "00edbc1a-39e7-43a3-8534-6b645d2859f5"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.529304 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00edbc1a-39e7-43a3-8534-6b645d2859f5-kube-api-access-hckk8" (OuterVolumeSpecName: "kube-api-access-hckk8") pod "00edbc1a-39e7-43a3-8534-6b645d2859f5" (UID: "00edbc1a-39e7-43a3-8534-6b645d2859f5"). InnerVolumeSpecName "kube-api-access-hckk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.529405 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00edbc1a-39e7-43a3-8534-6b645d2859f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "00edbc1a-39e7-43a3-8534-6b645d2859f5" (UID: "00edbc1a-39e7-43a3-8534-6b645d2859f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.533732 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b226d5cf-72e1-42b2-85ce-fcb78889ae4c-kube-api-access-f69sj" (OuterVolumeSpecName: "kube-api-access-f69sj") pod "b226d5cf-72e1-42b2-85ce-fcb78889ae4c" (UID: "b226d5cf-72e1-42b2-85ce-fcb78889ae4c"). InnerVolumeSpecName "kube-api-access-f69sj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.537702 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b226d5cf-72e1-42b2-85ce-fcb78889ae4c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b226d5cf-72e1-42b2-85ce-fcb78889ae4c" (UID: "b226d5cf-72e1-42b2-85ce-fcb78889ae4c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.625627 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s2kg\" (UniqueName: \"kubernetes.io/projected/e765b9fe-57ee-4233-8390-cdac364e3996-kube-api-access-2s2kg\") pod \"route-controller-manager-9b4545b7f-skgzb\" (UID: \"e765b9fe-57ee-4233-8390-cdac364e3996\") " pod="openshift-route-controller-manager/route-controller-manager-9b4545b7f-skgzb" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.625744 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e765b9fe-57ee-4233-8390-cdac364e3996-client-ca\") pod \"route-controller-manager-9b4545b7f-skgzb\" (UID: \"e765b9fe-57ee-4233-8390-cdac364e3996\") " pod="openshift-route-controller-manager/route-controller-manager-9b4545b7f-skgzb" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.625781 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e765b9fe-57ee-4233-8390-cdac364e3996-config\") pod \"route-controller-manager-9b4545b7f-skgzb\" (UID: \"e765b9fe-57ee-4233-8390-cdac364e3996\") " pod="openshift-route-controller-manager/route-controller-manager-9b4545b7f-skgzb" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.625825 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e765b9fe-57ee-4233-8390-cdac364e3996-serving-cert\") pod \"route-controller-manager-9b4545b7f-skgzb\" (UID: \"e765b9fe-57ee-4233-8390-cdac364e3996\") " pod="openshift-route-controller-manager/route-controller-manager-9b4545b7f-skgzb" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.625938 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b226d5cf-72e1-42b2-85ce-fcb78889ae4c-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.625983 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00edbc1a-39e7-43a3-8534-6b645d2859f5-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.625996 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f69sj\" (UniqueName: \"kubernetes.io/projected/b226d5cf-72e1-42b2-85ce-fcb78889ae4c-kube-api-access-f69sj\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.626009 4821 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00edbc1a-39e7-43a3-8534-6b645d2859f5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.626147 4821 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00edbc1a-39e7-43a3-8534-6b645d2859f5-client-ca\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.626184 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hckk8\" (UniqueName: \"kubernetes.io/projected/00edbc1a-39e7-43a3-8534-6b645d2859f5-kube-api-access-hckk8\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.626222 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00edbc1a-39e7-43a3-8534-6b645d2859f5-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.626238 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b226d5cf-72e1-42b2-85ce-fcb78889ae4c-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:41 crc 
kubenswrapper[4821]: I0309 18:27:41.627097 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e765b9fe-57ee-4233-8390-cdac364e3996-client-ca\") pod \"route-controller-manager-9b4545b7f-skgzb\" (UID: \"e765b9fe-57ee-4233-8390-cdac364e3996\") " pod="openshift-route-controller-manager/route-controller-manager-9b4545b7f-skgzb" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.627237 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e765b9fe-57ee-4233-8390-cdac364e3996-config\") pod \"route-controller-manager-9b4545b7f-skgzb\" (UID: \"e765b9fe-57ee-4233-8390-cdac364e3996\") " pod="openshift-route-controller-manager/route-controller-manager-9b4545b7f-skgzb" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.631392 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e765b9fe-57ee-4233-8390-cdac364e3996-serving-cert\") pod \"route-controller-manager-9b4545b7f-skgzb\" (UID: \"e765b9fe-57ee-4233-8390-cdac364e3996\") " pod="openshift-route-controller-manager/route-controller-manager-9b4545b7f-skgzb" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.655147 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s2kg\" (UniqueName: \"kubernetes.io/projected/e765b9fe-57ee-4233-8390-cdac364e3996-kube-api-access-2s2kg\") pod \"route-controller-manager-9b4545b7f-skgzb\" (UID: \"e765b9fe-57ee-4233-8390-cdac364e3996\") " pod="openshift-route-controller-manager/route-controller-manager-9b4545b7f-skgzb" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.793019 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9b4545b7f-skgzb" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.829332 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lf7bd"] Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.914337 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-kzlwq_04e01207-4a95-4a32-84df-2d4c69d71fbf/kube-multus-additional-cni-plugins/0.log" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.914422 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-kzlwq" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.931163 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/04e01207-4a95-4a32-84df-2d4c69d71fbf-cni-sysctl-allowlist\") pod \"04e01207-4a95-4a32-84df-2d4c69d71fbf\" (UID: \"04e01207-4a95-4a32-84df-2d4c69d71fbf\") " Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.931557 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/04e01207-4a95-4a32-84df-2d4c69d71fbf-tuning-conf-dir\") pod \"04e01207-4a95-4a32-84df-2d4c69d71fbf\" (UID: \"04e01207-4a95-4a32-84df-2d4c69d71fbf\") " Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.931603 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/04e01207-4a95-4a32-84df-2d4c69d71fbf-ready\") pod \"04e01207-4a95-4a32-84df-2d4c69d71fbf\" (UID: \"04e01207-4a95-4a32-84df-2d4c69d71fbf\") " Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.932666 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/04e01207-4a95-4a32-84df-2d4c69d71fbf-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "04e01207-4a95-4a32-84df-2d4c69d71fbf" (UID: "04e01207-4a95-4a32-84df-2d4c69d71fbf"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.932854 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzcjb\" (UniqueName: \"kubernetes.io/projected/04e01207-4a95-4a32-84df-2d4c69d71fbf-kube-api-access-wzcjb\") pod \"04e01207-4a95-4a32-84df-2d4c69d71fbf\" (UID: \"04e01207-4a95-4a32-84df-2d4c69d71fbf\") " Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.933302 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04e01207-4a95-4a32-84df-2d4c69d71fbf-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "04e01207-4a95-4a32-84df-2d4c69d71fbf" (UID: "04e01207-4a95-4a32-84df-2d4c69d71fbf"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.933468 4821 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/04e01207-4a95-4a32-84df-2d4c69d71fbf-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.933493 4821 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/04e01207-4a95-4a32-84df-2d4c69d71fbf-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.934973 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04e01207-4a95-4a32-84df-2d4c69d71fbf-ready" (OuterVolumeSpecName: "ready") pod "04e01207-4a95-4a32-84df-2d4c69d71fbf" (UID: "04e01207-4a95-4a32-84df-2d4c69d71fbf"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.937390 4821 generic.go:334] "Generic (PLEG): container finished" podID="abf94109-7b6a-4e4f-a178-42e7d6fc45e0" containerID="9b5b794d39f3fde070d372659dc15ce96b84e44673966398ad89c65e0287e269" exitCode=0 Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.937435 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2h8qw" event={"ID":"abf94109-7b6a-4e4f-a178-42e7d6fc45e0","Type":"ContainerDied","Data":"9b5b794d39f3fde070d372659dc15ce96b84e44673966398ad89c65e0287e269"} Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.950864 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lf7bd" event={"ID":"9ac2c88b-a0bc-482c-90fa-165d30f045e8","Type":"ContainerStarted","Data":"cdb3fa22c5bfb01900496cb3a85bb0fb1f639211aee7c3a48359fb26badb3917"} Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.974953 4821 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79567c6bd7-nzp8k" event={"ID":"00edbc1a-39e7-43a3-8534-6b645d2859f5","Type":"ContainerDied","Data":"a1c1213e156f97dc8302326aae0eeaf89bedcef2c1b99bf8c6466084a920494c"} Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.975014 4821 scope.go:117] "RemoveContainer" containerID="2c0a1e0f79d2b9f63ec5e4717445820868bc3bfbe4dca1eaf72fe69e380c94e3" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.975178 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79567c6bd7-nzp8k" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.977293 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04e01207-4a95-4a32-84df-2d4c69d71fbf-kube-api-access-wzcjb" (OuterVolumeSpecName: "kube-api-access-wzcjb") pod "04e01207-4a95-4a32-84df-2d4c69d71fbf" (UID: "04e01207-4a95-4a32-84df-2d4c69d71fbf"). InnerVolumeSpecName "kube-api-access-wzcjb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.989159 4821 generic.go:334] "Generic (PLEG): container finished" podID="faa3533b-267b-44a9-b949-af82368bf7e3" containerID="89ebcb9ddebe0666020405817a7a48d3f5a67101f0968fbe880fcd4b2d30f01c" exitCode=0 Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.989351 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbkl7" event={"ID":"faa3533b-267b-44a9-b949-af82368bf7e3","Type":"ContainerDied","Data":"89ebcb9ddebe0666020405817a7a48d3f5a67101f0968fbe880fcd4b2d30f01c"} Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.991610 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84964ccc5c-8jqgl" event={"ID":"b226d5cf-72e1-42b2-85ce-fcb78889ae4c","Type":"ContainerDied","Data":"c962d2227deb3bc0695d5919def4481997039bed1f6dc0dfbf48738553129a77"} Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.991624 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84964ccc5c-8jqgl" Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.997607 4821 generic.go:334] "Generic (PLEG): container finished" podID="132d5224-2c4a-4b22-9e2f-b50b98e3b693" containerID="3d2d2cf01882e65b7138c1461d311da211a3bf653eef4fce4832d9727245273c" exitCode=0 Mar 09 18:27:41 crc kubenswrapper[4821]: I0309 18:27:41.997849 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsq8f" event={"ID":"132d5224-2c4a-4b22-9e2f-b50b98e3b693","Type":"ContainerDied","Data":"3d2d2cf01882e65b7138c1461d311da211a3bf653eef4fce4832d9727245273c"} Mar 09 18:27:42 crc kubenswrapper[4821]: I0309 18:27:42.003481 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-kzlwq_04e01207-4a95-4a32-84df-2d4c69d71fbf/kube-multus-additional-cni-plugins/0.log" Mar 09 18:27:42 crc kubenswrapper[4821]: I0309 18:27:42.003572 4821 generic.go:334] "Generic (PLEG): container finished" podID="04e01207-4a95-4a32-84df-2d4c69d71fbf" containerID="0297c9480d2443eb8d56cbb4d92d376dab02b87b244db7afebe59a889188aa16" exitCode=137 Mar 09 18:27:42 crc kubenswrapper[4821]: I0309 18:27:42.003632 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-kzlwq" Mar 09 18:27:42 crc kubenswrapper[4821]: I0309 18:27:42.003629 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-kzlwq" event={"ID":"04e01207-4a95-4a32-84df-2d4c69d71fbf","Type":"ContainerDied","Data":"0297c9480d2443eb8d56cbb4d92d376dab02b87b244db7afebe59a889188aa16"} Mar 09 18:27:42 crc kubenswrapper[4821]: I0309 18:27:42.004039 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-kzlwq" event={"ID":"04e01207-4a95-4a32-84df-2d4c69d71fbf","Type":"ContainerDied","Data":"5e99358e0e63011b9fb172059cbac3d70650db9dbe54ecfe4f22dab3a7dafb07"} Mar 09 18:27:42 crc kubenswrapper[4821]: E0309 18:27:42.007305 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-sn8zk" podUID="70ed8562-ec3e-49a0-8ccd-885eea90e9c1" Mar 09 18:27:42 crc kubenswrapper[4821]: I0309 18:27:42.034215 4821 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/04e01207-4a95-4a32-84df-2d4c69d71fbf-ready\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:42 crc kubenswrapper[4821]: I0309 18:27:42.034245 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzcjb\" (UniqueName: \"kubernetes.io/projected/04e01207-4a95-4a32-84df-2d4c69d71fbf-kube-api-access-wzcjb\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:42 crc kubenswrapper[4821]: I0309 18:27:42.044635 4821 scope.go:117] "RemoveContainer" containerID="8a42f38f95da6665e285dedd039af5293f1031baddf5781825a5c29952c213c8" Mar 09 18:27:42 crc kubenswrapper[4821]: I0309 18:27:42.074777 4821 scope.go:117] "RemoveContainer" 
containerID="0297c9480d2443eb8d56cbb4d92d376dab02b87b244db7afebe59a889188aa16" Mar 09 18:27:42 crc kubenswrapper[4821]: I0309 18:27:42.114806 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-kzlwq"] Mar 09 18:27:42 crc kubenswrapper[4821]: I0309 18:27:42.127931 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-kzlwq"] Mar 09 18:27:42 crc kubenswrapper[4821]: I0309 18:27:42.132170 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-79567c6bd7-nzp8k"] Mar 09 18:27:42 crc kubenswrapper[4821]: I0309 18:27:42.135954 4821 scope.go:117] "RemoveContainer" containerID="0297c9480d2443eb8d56cbb4d92d376dab02b87b244db7afebe59a889188aa16" Mar 09 18:27:42 crc kubenswrapper[4821]: I0309 18:27:42.136147 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-79567c6bd7-nzp8k"] Mar 09 18:27:42 crc kubenswrapper[4821]: E0309 18:27:42.136518 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0297c9480d2443eb8d56cbb4d92d376dab02b87b244db7afebe59a889188aa16\": container with ID starting with 0297c9480d2443eb8d56cbb4d92d376dab02b87b244db7afebe59a889188aa16 not found: ID does not exist" containerID="0297c9480d2443eb8d56cbb4d92d376dab02b87b244db7afebe59a889188aa16" Mar 09 18:27:42 crc kubenswrapper[4821]: I0309 18:27:42.136558 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0297c9480d2443eb8d56cbb4d92d376dab02b87b244db7afebe59a889188aa16"} err="failed to get container status \"0297c9480d2443eb8d56cbb4d92d376dab02b87b244db7afebe59a889188aa16\": rpc error: code = NotFound desc = could not find container \"0297c9480d2443eb8d56cbb4d92d376dab02b87b244db7afebe59a889188aa16\": container with ID starting with 
0297c9480d2443eb8d56cbb4d92d376dab02b87b244db7afebe59a889188aa16 not found: ID does not exist" Mar 09 18:27:42 crc kubenswrapper[4821]: I0309 18:27:42.139819 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84964ccc5c-8jqgl"] Mar 09 18:27:42 crc kubenswrapper[4821]: I0309 18:27:42.146413 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84964ccc5c-8jqgl"] Mar 09 18:27:42 crc kubenswrapper[4821]: I0309 18:27:42.259447 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9b4545b7f-skgzb"] Mar 09 18:27:42 crc kubenswrapper[4821]: W0309 18:27:42.266886 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode765b9fe_57ee_4233_8390_cdac364e3996.slice/crio-a52f75d80a2410cd38a939e8c435efa9a3e31507d218b84112396de2d8d27195 WatchSource:0}: Error finding container a52f75d80a2410cd38a939e8c435efa9a3e31507d218b84112396de2d8d27195: Status 404 returned error can't find the container with id a52f75d80a2410cd38a939e8c435efa9a3e31507d218b84112396de2d8d27195 Mar 09 18:27:42 crc kubenswrapper[4821]: I0309 18:27:42.567015 4821 csr.go:261] certificate signing request csr-pj544 is approved, waiting to be issued Mar 09 18:27:42 crc kubenswrapper[4821]: I0309 18:27:42.573863 4821 csr.go:257] certificate signing request csr-pj544 is issued Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.025772 4821 generic.go:334] "Generic (PLEG): container finished" podID="60628f60-1633-4b77-a457-762d204bab20" containerID="ab331e63d918c4fef53d485f4669a8304b76f49bf19c262bf753483e0089b2b5" exitCode=0 Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.025924 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551346-phdwt" 
event={"ID":"60628f60-1633-4b77-a457-762d204bab20","Type":"ContainerDied","Data":"ab331e63d918c4fef53d485f4669a8304b76f49bf19c262bf753483e0089b2b5"} Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.030346 4821 generic.go:334] "Generic (PLEG): container finished" podID="7e28616e-2c71-45e4-b93c-e76014c89d0d" containerID="8b304c41b9e3c4325829e180e71ca52aff5c690f54d088385796e67ec1c5172d" exitCode=0 Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.030463 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7e28616e-2c71-45e4-b93c-e76014c89d0d","Type":"ContainerDied","Data":"8b304c41b9e3c4325829e180e71ca52aff5c690f54d088385796e67ec1c5172d"} Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.032335 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5npmr" event={"ID":"1ff9182f-57eb-4efa-b7c3-ae63d66457df","Type":"ContainerDied","Data":"8e8899a0cef2f57514f2cefb5c9157da54538a40f01e734434a5e7aa9ae01db1"} Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.034691 4821 generic.go:334] "Generic (PLEG): container finished" podID="1ff9182f-57eb-4efa-b7c3-ae63d66457df" containerID="8e8899a0cef2f57514f2cefb5c9157da54538a40f01e734434a5e7aa9ae01db1" exitCode=0 Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.041632 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lf7bd" event={"ID":"9ac2c88b-a0bc-482c-90fa-165d30f045e8","Type":"ContainerStarted","Data":"d9ce6d38eae50a5cf4c69feccea8a7d5e99e880169916b83fb2552cc057f5e80"} Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.041680 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lf7bd" event={"ID":"9ac2c88b-a0bc-482c-90fa-165d30f045e8","Type":"ContainerStarted","Data":"578bbb35bf995d98600ad85d8bbda84ef0ab01585bf3cd22199f3c5f62a1b91f"} Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 
18:27:43.045698 4821 generic.go:334] "Generic (PLEG): container finished" podID="902f7680-4f21-43d9-9ca1-16e5746556a9" containerID="baf0c572b11e0d45de73d944f9cfa6bc1f8efd9d37b2ff8b54be5434bc22eda5" exitCode=0
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.045887 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kpr7q" event={"ID":"902f7680-4f21-43d9-9ca1-16e5746556a9","Type":"ContainerDied","Data":"baf0c572b11e0d45de73d944f9cfa6bc1f8efd9d37b2ff8b54be5434bc22eda5"}
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.049019 4821 generic.go:334] "Generic (PLEG): container finished" podID="e3d6bf4f-253c-4694-ba14-62abd7d74285" containerID="0ff2940f6f94a5a536a590ca3114fa051670fe8df1b79e0059df620a9c8fa148" exitCode=0
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.049061 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e3d6bf4f-253c-4694-ba14-62abd7d74285","Type":"ContainerDied","Data":"0ff2940f6f94a5a536a590ca3114fa051670fe8df1b79e0059df620a9c8fa148"}
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.059951 4821 generic.go:334] "Generic (PLEG): container finished" podID="423c9815-b133-45cf-bc0c-3f6291e1106b" containerID="c5147cf6598dbd52eee56232c1c481bdb9ab91922bd423ba39b0b166de34ba51" exitCode=0
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.060021 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzkk5" event={"ID":"423c9815-b133-45cf-bc0c-3f6291e1106b","Type":"ContainerDied","Data":"c5147cf6598dbd52eee56232c1c481bdb9ab91922bd423ba39b0b166de34ba51"}
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.072224 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-lf7bd" podStartSLOduration=143.072199118 podStartE2EDuration="2m23.072199118s" podCreationTimestamp="2026-03-09 18:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:43.067234921 +0000 UTC m=+200.228610777" watchObservedRunningTime="2026-03-09 18:27:43.072199118 +0000 UTC m=+200.233574974"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.076519 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9b4545b7f-skgzb" event={"ID":"e765b9fe-57ee-4233-8390-cdac364e3996","Type":"ContainerStarted","Data":"657fff38264c5dd01213c37e50b95f87798741756cfe167c124cd28c4ae603ba"}
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.076579 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9b4545b7f-skgzb" event={"ID":"e765b9fe-57ee-4233-8390-cdac364e3996","Type":"ContainerStarted","Data":"a52f75d80a2410cd38a939e8c435efa9a3e31507d218b84112396de2d8d27195"}
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.077215 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-9b4545b7f-skgzb"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.085386 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-9b4545b7f-skgzb"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.140607 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-9b4545b7f-skgzb" podStartSLOduration=15.14057219 podStartE2EDuration="15.14057219s" podCreationTimestamp="2026-03-09 18:27:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:43.140068177 +0000 UTC m=+200.301444033" watchObservedRunningTime="2026-03-09 18:27:43.14057219 +0000 UTC m=+200.301948076"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.541627 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-56d88bb98-2zgqh"]
Mar 09 18:27:43 crc kubenswrapper[4821]: E0309 18:27:43.542161 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04e01207-4a95-4a32-84df-2d4c69d71fbf" containerName="kube-multus-additional-cni-plugins"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.542173 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="04e01207-4a95-4a32-84df-2d4c69d71fbf" containerName="kube-multus-additional-cni-plugins"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.542268 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="04e01207-4a95-4a32-84df-2d4c69d71fbf" containerName="kube-multus-additional-cni-plugins"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.545459 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56d88bb98-2zgqh"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.546579 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56d88bb98-2zgqh"]
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.548545 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.549707 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.553908 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.554252 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.555071 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.555353 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.557333 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.557715 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6-serving-cert\") pod \"controller-manager-56d88bb98-2zgqh\" (UID: \"4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6\") " pod="openshift-controller-manager/controller-manager-56d88bb98-2zgqh"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.557747 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6-proxy-ca-bundles\") pod \"controller-manager-56d88bb98-2zgqh\" (UID: \"4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6\") " pod="openshift-controller-manager/controller-manager-56d88bb98-2zgqh"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.557790 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhthj\" (UniqueName: \"kubernetes.io/projected/4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6-kube-api-access-bhthj\") pod \"controller-manager-56d88bb98-2zgqh\" (UID: \"4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6\") " pod="openshift-controller-manager/controller-manager-56d88bb98-2zgqh"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.557831 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6-config\") pod \"controller-manager-56d88bb98-2zgqh\" (UID: \"4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6\") " pod="openshift-controller-manager/controller-manager-56d88bb98-2zgqh"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.557867 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6-client-ca\") pod \"controller-manager-56d88bb98-2zgqh\" (UID: \"4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6\") " pod="openshift-controller-manager/controller-manager-56d88bb98-2zgqh"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.571772 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00edbc1a-39e7-43a3-8534-6b645d2859f5" path="/var/lib/kubelet/pods/00edbc1a-39e7-43a3-8534-6b645d2859f5/volumes"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.572479 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04e01207-4a95-4a32-84df-2d4c69d71fbf" path="/var/lib/kubelet/pods/04e01207-4a95-4a32-84df-2d4c69d71fbf/volumes"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.573212 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b226d5cf-72e1-42b2-85ce-fcb78889ae4c" path="/var/lib/kubelet/pods/b226d5cf-72e1-42b2-85ce-fcb78889ae4c/volumes"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.575200 4821 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-11 03:27:46.961869379 +0000 UTC
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.575232 4821 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6633h0m3.386640802s for next certificate rotation
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.659122 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6-client-ca\") pod \"controller-manager-56d88bb98-2zgqh\" (UID: \"4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6\") " pod="openshift-controller-manager/controller-manager-56d88bb98-2zgqh"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.659190 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6-serving-cert\") pod \"controller-manager-56d88bb98-2zgqh\" (UID: \"4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6\") " pod="openshift-controller-manager/controller-manager-56d88bb98-2zgqh"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.659209 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6-proxy-ca-bundles\") pod \"controller-manager-56d88bb98-2zgqh\" (UID: \"4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6\") " pod="openshift-controller-manager/controller-manager-56d88bb98-2zgqh"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.659245 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhthj\" (UniqueName: \"kubernetes.io/projected/4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6-kube-api-access-bhthj\") pod \"controller-manager-56d88bb98-2zgqh\" (UID: \"4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6\") " pod="openshift-controller-manager/controller-manager-56d88bb98-2zgqh"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.659283 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6-config\") pod \"controller-manager-56d88bb98-2zgqh\" (UID: \"4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6\") " pod="openshift-controller-manager/controller-manager-56d88bb98-2zgqh"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.660678 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6-proxy-ca-bundles\") pod \"controller-manager-56d88bb98-2zgqh\" (UID: \"4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6\") " pod="openshift-controller-manager/controller-manager-56d88bb98-2zgqh"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.662361 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6-config\") pod \"controller-manager-56d88bb98-2zgqh\" (UID: \"4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6\") " pod="openshift-controller-manager/controller-manager-56d88bb98-2zgqh"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.662504 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6-client-ca\") pod \"controller-manager-56d88bb98-2zgqh\" (UID: \"4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6\") " pod="openshift-controller-manager/controller-manager-56d88bb98-2zgqh"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.666117 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6-serving-cert\") pod \"controller-manager-56d88bb98-2zgqh\" (UID: \"4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6\") " pod="openshift-controller-manager/controller-manager-56d88bb98-2zgqh"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.678291 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhthj\" (UniqueName: \"kubernetes.io/projected/4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6-kube-api-access-bhthj\") pod \"controller-manager-56d88bb98-2zgqh\" (UID: \"4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6\") " pod="openshift-controller-manager/controller-manager-56d88bb98-2zgqh"
Mar 09 18:27:43 crc kubenswrapper[4821]: I0309 18:27:43.873892 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56d88bb98-2zgqh"
Mar 09 18:27:44 crc kubenswrapper[4821]: I0309 18:27:44.085689 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kpr7q" event={"ID":"902f7680-4f21-43d9-9ca1-16e5746556a9","Type":"ContainerStarted","Data":"2071b8a694c9ff6e9a3c689db0c5187ac216cc6f577b316838e69c6a56b4ecb0"}
Mar 09 18:27:44 crc kubenswrapper[4821]: I0309 18:27:44.090682 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzkk5" event={"ID":"423c9815-b133-45cf-bc0c-3f6291e1106b","Type":"ContainerStarted","Data":"095021bf84ac73ae16adc2720a06221e90ee4d86a09ca03b70851d881fd5986b"}
Mar 09 18:27:44 crc kubenswrapper[4821]: I0309 18:27:44.106492 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kpr7q" podStartSLOduration=7.333334511 podStartE2EDuration="31.106476661s" podCreationTimestamp="2026-03-09 18:27:13 +0000 UTC" firstStartedPulling="2026-03-09 18:27:19.660614992 +0000 UTC m=+176.821990868" lastFinishedPulling="2026-03-09 18:27:43.433757162 +0000 UTC m=+200.595133018" observedRunningTime="2026-03-09 18:27:44.105717171 +0000 UTC m=+201.267093027" watchObservedRunningTime="2026-03-09 18:27:44.106476661 +0000 UTC m=+201.267852517"
Mar 09 18:27:44 crc kubenswrapper[4821]: I0309 18:27:44.126793 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hzkk5" podStartSLOduration=8.248917884 podStartE2EDuration="31.126774715s" podCreationTimestamp="2026-03-09 18:27:13 +0000 UTC" firstStartedPulling="2026-03-09 18:27:20.826443877 +0000 UTC m=+177.987819753" lastFinishedPulling="2026-03-09 18:27:43.704300738 +0000 UTC m=+200.865676584" observedRunningTime="2026-03-09 18:27:44.126423307 +0000 UTC m=+201.287799163" watchObservedRunningTime="2026-03-09 18:27:44.126774715 +0000 UTC m=+201.288150571"
Mar 09 18:27:44 crc kubenswrapper[4821]: I0309 18:27:44.277796 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56d88bb98-2zgqh"]
Mar 09 18:27:44 crc kubenswrapper[4821]: I0309 18:27:44.419287 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Mar 09 18:27:44 crc kubenswrapper[4821]: I0309 18:27:44.424741 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551346-phdwt"
Mar 09 18:27:44 crc kubenswrapper[4821]: I0309 18:27:44.459666 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Mar 09 18:27:44 crc kubenswrapper[4821]: I0309 18:27:44.478726 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e28616e-2c71-45e4-b93c-e76014c89d0d-kube-api-access\") pod \"7e28616e-2c71-45e4-b93c-e76014c89d0d\" (UID: \"7e28616e-2c71-45e4-b93c-e76014c89d0d\") "
Mar 09 18:27:44 crc kubenswrapper[4821]: I0309 18:27:44.478788 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shxs8\" (UniqueName: \"kubernetes.io/projected/60628f60-1633-4b77-a457-762d204bab20-kube-api-access-shxs8\") pod \"60628f60-1633-4b77-a457-762d204bab20\" (UID: \"60628f60-1633-4b77-a457-762d204bab20\") "
Mar 09 18:27:44 crc kubenswrapper[4821]: I0309 18:27:44.478867 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e28616e-2c71-45e4-b93c-e76014c89d0d-kubelet-dir\") pod \"7e28616e-2c71-45e4-b93c-e76014c89d0d\" (UID: \"7e28616e-2c71-45e4-b93c-e76014c89d0d\") "
Mar 09 18:27:44 crc kubenswrapper[4821]: I0309 18:27:44.479197 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e28616e-2c71-45e4-b93c-e76014c89d0d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7e28616e-2c71-45e4-b93c-e76014c89d0d" (UID: "7e28616e-2c71-45e4-b93c-e76014c89d0d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 18:27:44 crc kubenswrapper[4821]: I0309 18:27:44.484977 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e28616e-2c71-45e4-b93c-e76014c89d0d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7e28616e-2c71-45e4-b93c-e76014c89d0d" (UID: "7e28616e-2c71-45e4-b93c-e76014c89d0d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:27:44 crc kubenswrapper[4821]: I0309 18:27:44.486691 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60628f60-1633-4b77-a457-762d204bab20-kube-api-access-shxs8" (OuterVolumeSpecName: "kube-api-access-shxs8") pod "60628f60-1633-4b77-a457-762d204bab20" (UID: "60628f60-1633-4b77-a457-762d204bab20"). InnerVolumeSpecName "kube-api-access-shxs8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:27:44 crc kubenswrapper[4821]: I0309 18:27:44.580293 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3d6bf4f-253c-4694-ba14-62abd7d74285-kube-api-access\") pod \"e3d6bf4f-253c-4694-ba14-62abd7d74285\" (UID: \"e3d6bf4f-253c-4694-ba14-62abd7d74285\") "
Mar 09 18:27:44 crc kubenswrapper[4821]: I0309 18:27:44.580431 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e3d6bf4f-253c-4694-ba14-62abd7d74285-kubelet-dir\") pod \"e3d6bf4f-253c-4694-ba14-62abd7d74285\" (UID: \"e3d6bf4f-253c-4694-ba14-62abd7d74285\") "
Mar 09 18:27:44 crc kubenswrapper[4821]: I0309 18:27:44.580573 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3d6bf4f-253c-4694-ba14-62abd7d74285-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e3d6bf4f-253c-4694-ba14-62abd7d74285" (UID: "e3d6bf4f-253c-4694-ba14-62abd7d74285"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 18:27:44 crc kubenswrapper[4821]: I0309 18:27:44.580803 4821 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e28616e-2c71-45e4-b93c-e76014c89d0d-kubelet-dir\") on node \"crc\" DevicePath \"\""
Mar 09 18:27:44 crc kubenswrapper[4821]: I0309 18:27:44.580818 4821 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e3d6bf4f-253c-4694-ba14-62abd7d74285-kubelet-dir\") on node \"crc\" DevicePath \"\""
Mar 09 18:27:44 crc kubenswrapper[4821]: I0309 18:27:44.580831 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e28616e-2c71-45e4-b93c-e76014c89d0d-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 09 18:27:44 crc kubenswrapper[4821]: I0309 18:27:44.580845 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shxs8\" (UniqueName: \"kubernetes.io/projected/60628f60-1633-4b77-a457-762d204bab20-kube-api-access-shxs8\") on node \"crc\" DevicePath \"\""
Mar 09 18:27:44 crc kubenswrapper[4821]: I0309 18:27:44.583241 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3d6bf4f-253c-4694-ba14-62abd7d74285-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e3d6bf4f-253c-4694-ba14-62abd7d74285" (UID: "e3d6bf4f-253c-4694-ba14-62abd7d74285"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:27:44 crc kubenswrapper[4821]: I0309 18:27:44.682597 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3d6bf4f-253c-4694-ba14-62abd7d74285-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 09 18:27:45 crc kubenswrapper[4821]: I0309 18:27:45.097686 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7e28616e-2c71-45e4-b93c-e76014c89d0d","Type":"ContainerDied","Data":"0878244dabde9e52ccc1fafc7be221710c222d9ce71b70df1e4d6fdbf1a656a1"}
Mar 09 18:27:45 crc kubenswrapper[4821]: I0309 18:27:45.097733 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0878244dabde9e52ccc1fafc7be221710c222d9ce71b70df1e4d6fdbf1a656a1"
Mar 09 18:27:45 crc kubenswrapper[4821]: I0309 18:27:45.097777 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Mar 09 18:27:45 crc kubenswrapper[4821]: I0309 18:27:45.100597 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56d88bb98-2zgqh" event={"ID":"4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6","Type":"ContainerStarted","Data":"ff9c430db40bfc6a0e23cba6f079956040557034f12e1e8e17ad82bd900c63e6"}
Mar 09 18:27:45 crc kubenswrapper[4821]: I0309 18:27:45.100657 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56d88bb98-2zgqh" event={"ID":"4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6","Type":"ContainerStarted","Data":"2e16a02ee85600d8679d71fa3b229a1a9bcb7ac71330623b0cab7ea650dfadef"}
Mar 09 18:27:45 crc kubenswrapper[4821]: I0309 18:27:45.103064 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e3d6bf4f-253c-4694-ba14-62abd7d74285","Type":"ContainerDied","Data":"2ef7997827c57a36c6bf1f782cdd5e0b951001cde25c0d0a8a88d0e192360ffb"}
Mar 09 18:27:45 crc kubenswrapper[4821]: I0309 18:27:45.103095 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ef7997827c57a36c6bf1f782cdd5e0b951001cde25c0d0a8a88d0e192360ffb"
Mar 09 18:27:45 crc kubenswrapper[4821]: I0309 18:27:45.103288 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Mar 09 18:27:45 crc kubenswrapper[4821]: I0309 18:27:45.114492 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551346-phdwt" event={"ID":"60628f60-1633-4b77-a457-762d204bab20","Type":"ContainerDied","Data":"2cd28a7e88894e1093a6eb940b7133a2585d977e63afe939651cd9ee639f90db"}
Mar 09 18:27:45 crc kubenswrapper[4821]: I0309 18:27:45.114543 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cd28a7e88894e1093a6eb940b7133a2585d977e63afe939651cd9ee639f90db"
Mar 09 18:27:45 crc kubenswrapper[4821]: I0309 18:27:45.114677 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551346-phdwt"
Mar 09 18:27:45 crc kubenswrapper[4821]: I0309 18:27:45.123188 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-56d88bb98-2zgqh" podStartSLOduration=17.123141861 podStartE2EDuration="17.123141861s" podCreationTimestamp="2026-03-09 18:27:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:45.117614131 +0000 UTC m=+202.278989987" watchObservedRunningTime="2026-03-09 18:27:45.123141861 +0000 UTC m=+202.284517717"
Mar 09 18:27:46 crc kubenswrapper[4821]: I0309 18:27:46.121570 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-56d88bb98-2zgqh"
Mar 09 18:27:46 crc kubenswrapper[4821]: I0309 18:27:46.126857 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-56d88bb98-2zgqh"
Mar 09 18:27:46 crc kubenswrapper[4821]: I0309 18:27:46.332747 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ft9v6"
Mar 09 18:27:48 crc kubenswrapper[4821]: I0309 18:27:48.135236 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5npmr" event={"ID":"1ff9182f-57eb-4efa-b7c3-ae63d66457df","Type":"ContainerStarted","Data":"f0b2053097b15c32518450f2985344b52abf82e4a52eddf39a9a70230c85d1f1"}
Mar 09 18:27:48 crc kubenswrapper[4821]: I0309 18:27:48.137114 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsq8f" event={"ID":"132d5224-2c4a-4b22-9e2f-b50b98e3b693","Type":"ContainerStarted","Data":"fb2aceb9d0a4a7e4213e2a4ddee561b3254153c25e68804624d692a0232af6a4"}
Mar 09 18:27:48 crc kubenswrapper[4821]: I0309 18:27:48.498008 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56d88bb98-2zgqh"]
Mar 09 18:27:48 crc kubenswrapper[4821]: I0309 18:27:48.586013 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9b4545b7f-skgzb"]
Mar 09 18:27:48 crc kubenswrapper[4821]: I0309 18:27:48.586295 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-9b4545b7f-skgzb" podUID="e765b9fe-57ee-4233-8390-cdac364e3996" containerName="route-controller-manager" containerID="cri-o://657fff38264c5dd01213c37e50b95f87798741756cfe167c124cd28c4ae603ba" gracePeriod=30
Mar 09 18:27:49 crc kubenswrapper[4821]: I0309 18:27:49.145091 4821 generic.go:334] "Generic (PLEG): container finished" podID="132d5224-2c4a-4b22-9e2f-b50b98e3b693" containerID="fb2aceb9d0a4a7e4213e2a4ddee561b3254153c25e68804624d692a0232af6a4" exitCode=0
Mar 09 18:27:49 crc kubenswrapper[4821]: I0309 18:27:49.145187 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsq8f" event={"ID":"132d5224-2c4a-4b22-9e2f-b50b98e3b693","Type":"ContainerDied","Data":"fb2aceb9d0a4a7e4213e2a4ddee561b3254153c25e68804624d692a0232af6a4"}
Mar 09 18:27:49 crc kubenswrapper[4821]: I0309 18:27:49.148849 4821 generic.go:334] "Generic (PLEG): container finished" podID="1ff9182f-57eb-4efa-b7c3-ae63d66457df" containerID="f0b2053097b15c32518450f2985344b52abf82e4a52eddf39a9a70230c85d1f1" exitCode=0
Mar 09 18:27:49 crc kubenswrapper[4821]: I0309 18:27:49.149355 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5npmr" event={"ID":"1ff9182f-57eb-4efa-b7c3-ae63d66457df","Type":"ContainerDied","Data":"f0b2053097b15c32518450f2985344b52abf82e4a52eddf39a9a70230c85d1f1"}
Mar 09 18:27:49 crc kubenswrapper[4821]: I0309 18:27:49.151304 4821 generic.go:334] "Generic (PLEG): container finished" podID="e765b9fe-57ee-4233-8390-cdac364e3996" containerID="657fff38264c5dd01213c37e50b95f87798741756cfe167c124cd28c4ae603ba" exitCode=0
Mar 09 18:27:49 crc kubenswrapper[4821]: I0309 18:27:49.151585 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-56d88bb98-2zgqh" podUID="4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6" containerName="controller-manager" containerID="cri-o://ff9c430db40bfc6a0e23cba6f079956040557034f12e1e8e17ad82bd900c63e6" gracePeriod=30
Mar 09 18:27:49 crc kubenswrapper[4821]: I0309 18:27:49.151645 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9b4545b7f-skgzb" event={"ID":"e765b9fe-57ee-4233-8390-cdac364e3996","Type":"ContainerDied","Data":"657fff38264c5dd01213c37e50b95f87798741756cfe167c124cd28c4ae603ba"}
Mar 09 18:27:50 crc kubenswrapper[4821]: I0309 18:27:50.159971 4821 generic.go:334] "Generic (PLEG): container finished" podID="4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6" containerID="ff9c430db40bfc6a0e23cba6f079956040557034f12e1e8e17ad82bd900c63e6" exitCode=0
Mar 09 18:27:50 crc kubenswrapper[4821]: I0309 18:27:50.160011 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56d88bb98-2zgqh" event={"ID":"4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6","Type":"ContainerDied","Data":"ff9c430db40bfc6a0e23cba6f079956040557034f12e1e8e17ad82bd900c63e6"}
Mar 09 18:27:50 crc kubenswrapper[4821]: I0309 18:27:50.160722 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Mar 09 18:27:50 crc kubenswrapper[4821]: E0309 18:27:50.160958 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60628f60-1633-4b77-a457-762d204bab20" containerName="oc"
Mar 09 18:27:50 crc kubenswrapper[4821]: I0309 18:27:50.160971 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="60628f60-1633-4b77-a457-762d204bab20" containerName="oc"
Mar 09 18:27:50 crc kubenswrapper[4821]: E0309 18:27:50.160989 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e28616e-2c71-45e4-b93c-e76014c89d0d" containerName="pruner"
Mar 09 18:27:50 crc kubenswrapper[4821]: I0309 18:27:50.160997 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e28616e-2c71-45e4-b93c-e76014c89d0d" containerName="pruner"
Mar 09 18:27:50 crc kubenswrapper[4821]: E0309 18:27:50.161017 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3d6bf4f-253c-4694-ba14-62abd7d74285" containerName="pruner"
Mar 09 18:27:50 crc kubenswrapper[4821]: I0309 18:27:50.161026 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3d6bf4f-253c-4694-ba14-62abd7d74285" containerName="pruner"
Mar 09 18:27:50 crc kubenswrapper[4821]: I0309 18:27:50.161143 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e28616e-2c71-45e4-b93c-e76014c89d0d" containerName="pruner"
Mar 09 18:27:50 crc kubenswrapper[4821]: I0309 18:27:50.161156 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="60628f60-1633-4b77-a457-762d204bab20" containerName="oc"
Mar 09 18:27:50 crc kubenswrapper[4821]: I0309 18:27:50.161164 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3d6bf4f-253c-4694-ba14-62abd7d74285" containerName="pruner"
Mar 09 18:27:50 crc kubenswrapper[4821]: I0309 18:27:50.161528 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Mar 09 18:27:50 crc kubenswrapper[4821]: I0309 18:27:50.163898 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 09 18:27:50 crc kubenswrapper[4821]: I0309 18:27:50.164222 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Mar 09 18:27:50 crc kubenswrapper[4821]: I0309 18:27:50.168205 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Mar 09 18:27:50 crc kubenswrapper[4821]: I0309 18:27:50.356731 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6987aec-8ee0-4026-99bb-a30b76e2b131-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f6987aec-8ee0-4026-99bb-a30b76e2b131\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Mar 09 18:27:50 crc kubenswrapper[4821]: I0309 18:27:50.357101 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6987aec-8ee0-4026-99bb-a30b76e2b131-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f6987aec-8ee0-4026-99bb-a30b76e2b131\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Mar 09 18:27:50 crc kubenswrapper[4821]: I0309 18:27:50.458168 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6987aec-8ee0-4026-99bb-a30b76e2b131-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f6987aec-8ee0-4026-99bb-a30b76e2b131\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Mar 09 18:27:50 crc kubenswrapper[4821]: I0309 18:27:50.458273 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6987aec-8ee0-4026-99bb-a30b76e2b131-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f6987aec-8ee0-4026-99bb-a30b76e2b131\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Mar 09 18:27:50 crc kubenswrapper[4821]: I0309 18:27:50.458400 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6987aec-8ee0-4026-99bb-a30b76e2b131-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f6987aec-8ee0-4026-99bb-a30b76e2b131\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Mar 09 18:27:50 crc kubenswrapper[4821]: I0309 18:27:50.483724 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6987aec-8ee0-4026-99bb-a30b76e2b131-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f6987aec-8ee0-4026-99bb-a30b76e2b131\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Mar 09 18:27:50 crc kubenswrapper[4821]: I0309 18:27:50.777419 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Mar 09 18:27:51 crc kubenswrapper[4821]: I0309 18:27:51.580249 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 09 18:27:51 crc kubenswrapper[4821]: I0309 18:27:51.580585 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 09 18:27:51 crc kubenswrapper[4821]: I0309 18:27:51.580609 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 09 18:27:51 crc kubenswrapper[4821]: I0309 18:27:51.580632 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 09 18:27:51 crc kubenswrapper[4821]: I0309 18:27:51.585269 4821 reflector.go:368] Caches populated for *v1.Secret from
object-"openshift-network-console"/"networking-console-plugin-cert" Mar 09 18:27:51 crc kubenswrapper[4821]: I0309 18:27:51.585491 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 09 18:27:51 crc kubenswrapper[4821]: I0309 18:27:51.585656 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 09 18:27:51 crc kubenswrapper[4821]: I0309 18:27:51.596406 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:27:51 crc kubenswrapper[4821]: I0309 18:27:51.597857 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 09 18:27:51 crc kubenswrapper[4821]: I0309 18:27:51.599397 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:27:51 crc kubenswrapper[4821]: I0309 18:27:51.609622 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:27:51 crc kubenswrapper[4821]: I0309 18:27:51.609970 
4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:27:51 crc kubenswrapper[4821]: I0309 18:27:51.672335 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:27:51 crc kubenswrapper[4821]: I0309 18:27:51.682938 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 09 18:27:51 crc kubenswrapper[4821]: I0309 18:27:51.692739 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 09 18:27:51 crc kubenswrapper[4821]: I0309 18:27:51.793895 4821 patch_prober.go:28] interesting pod/route-controller-manager-9b4545b7f-skgzb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Mar 09 18:27:51 crc kubenswrapper[4821]: I0309 18:27:51.793956 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-9b4545b7f-skgzb" podUID="e765b9fe-57ee-4233-8390-cdac364e3996" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.677228 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 09 18:27:52 crc kubenswrapper[4821]: W0309 
18:27:52.692993 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf6987aec_8ee0_4026_99bb_a30b76e2b131.slice/crio-8660136be2adefef7868461b33694b03346c908789cefabf5a0909ea04281084 WatchSource:0}: Error finding container 8660136be2adefef7868461b33694b03346c908789cefabf5a0909ea04281084: Status 404 returned error can't find the container with id 8660136be2adefef7868461b33694b03346c908789cefabf5a0909ea04281084 Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.800982 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9b4545b7f-skgzb" Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.835666 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-749b48f564-2qmzt"] Mar 09 18:27:52 crc kubenswrapper[4821]: E0309 18:27:52.835881 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e765b9fe-57ee-4233-8390-cdac364e3996" containerName="route-controller-manager" Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.835892 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="e765b9fe-57ee-4233-8390-cdac364e3996" containerName="route-controller-manager" Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.836001 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="e765b9fe-57ee-4233-8390-cdac364e3996" containerName="route-controller-manager" Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.836281 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-749b48f564-2qmzt" Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.868691 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-749b48f564-2qmzt"] Mar 09 18:27:52 crc kubenswrapper[4821]: W0309 18:27:52.888938 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-4ea2fd9f91e5f4c9806396d588621098245fb2eb5b429f0ca245af550b1cb612 WatchSource:0}: Error finding container 4ea2fd9f91e5f4c9806396d588621098245fb2eb5b429f0ca245af550b1cb612: Status 404 returned error can't find the container with id 4ea2fd9f91e5f4c9806396d588621098245fb2eb5b429f0ca245af550b1cb612 Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.894205 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56d88bb98-2zgqh" Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.896807 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e765b9fe-57ee-4233-8390-cdac364e3996-serving-cert\") pod \"e765b9fe-57ee-4233-8390-cdac364e3996\" (UID: \"e765b9fe-57ee-4233-8390-cdac364e3996\") " Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.896910 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2s2kg\" (UniqueName: \"kubernetes.io/projected/e765b9fe-57ee-4233-8390-cdac364e3996-kube-api-access-2s2kg\") pod \"e765b9fe-57ee-4233-8390-cdac364e3996\" (UID: \"e765b9fe-57ee-4233-8390-cdac364e3996\") " Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.896955 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e765b9fe-57ee-4233-8390-cdac364e3996-config\") pod \"e765b9fe-57ee-4233-8390-cdac364e3996\" (UID: \"e765b9fe-57ee-4233-8390-cdac364e3996\") " Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.896984 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e765b9fe-57ee-4233-8390-cdac364e3996-client-ca\") pod \"e765b9fe-57ee-4233-8390-cdac364e3996\" (UID: \"e765b9fe-57ee-4233-8390-cdac364e3996\") " Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.897257 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgqqd\" (UniqueName: \"kubernetes.io/projected/e0d3261a-aea5-4017-afba-76b8775df70e-kube-api-access-vgqqd\") pod \"route-controller-manager-749b48f564-2qmzt\" (UID: \"e0d3261a-aea5-4017-afba-76b8775df70e\") " pod="openshift-route-controller-manager/route-controller-manager-749b48f564-2qmzt" Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.897295 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0d3261a-aea5-4017-afba-76b8775df70e-client-ca\") pod \"route-controller-manager-749b48f564-2qmzt\" (UID: \"e0d3261a-aea5-4017-afba-76b8775df70e\") " pod="openshift-route-controller-manager/route-controller-manager-749b48f564-2qmzt" Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.897356 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0d3261a-aea5-4017-afba-76b8775df70e-config\") pod \"route-controller-manager-749b48f564-2qmzt\" (UID: \"e0d3261a-aea5-4017-afba-76b8775df70e\") " pod="openshift-route-controller-manager/route-controller-manager-749b48f564-2qmzt" Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.897374 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0d3261a-aea5-4017-afba-76b8775df70e-serving-cert\") pod \"route-controller-manager-749b48f564-2qmzt\" (UID: \"e0d3261a-aea5-4017-afba-76b8775df70e\") " pod="openshift-route-controller-manager/route-controller-manager-749b48f564-2qmzt" Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.898174 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e765b9fe-57ee-4233-8390-cdac364e3996-client-ca" (OuterVolumeSpecName: "client-ca") pod "e765b9fe-57ee-4233-8390-cdac364e3996" (UID: "e765b9fe-57ee-4233-8390-cdac364e3996"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.898276 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e765b9fe-57ee-4233-8390-cdac364e3996-config" (OuterVolumeSpecName: "config") pod "e765b9fe-57ee-4233-8390-cdac364e3996" (UID: "e765b9fe-57ee-4233-8390-cdac364e3996"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.905547 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e765b9fe-57ee-4233-8390-cdac364e3996-kube-api-access-2s2kg" (OuterVolumeSpecName: "kube-api-access-2s2kg") pod "e765b9fe-57ee-4233-8390-cdac364e3996" (UID: "e765b9fe-57ee-4233-8390-cdac364e3996"). InnerVolumeSpecName "kube-api-access-2s2kg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.907923 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e765b9fe-57ee-4233-8390-cdac364e3996-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e765b9fe-57ee-4233-8390-cdac364e3996" (UID: "e765b9fe-57ee-4233-8390-cdac364e3996"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:27:52 crc kubenswrapper[4821]: W0309 18:27:52.939220 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-d7cc4710b20664f1ed8188e47263668e989faefbfc5255b8cdf6b1ffd88b5114 WatchSource:0}: Error finding container d7cc4710b20664f1ed8188e47263668e989faefbfc5255b8cdf6b1ffd88b5114: Status 404 returned error can't find the container with id d7cc4710b20664f1ed8188e47263668e989faefbfc5255b8cdf6b1ffd88b5114 Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.997849 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6-client-ca\") pod \"4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6\" (UID: \"4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6\") " Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.997894 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6-config\") pod \"4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6\" (UID: \"4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6\") " Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.997942 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6-serving-cert\") pod 
\"4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6\" (UID: \"4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6\") " Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.997959 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6-proxy-ca-bundles\") pod \"4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6\" (UID: \"4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6\") " Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.997985 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhthj\" (UniqueName: \"kubernetes.io/projected/4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6-kube-api-access-bhthj\") pod \"4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6\" (UID: \"4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6\") " Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.998167 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0d3261a-aea5-4017-afba-76b8775df70e-config\") pod \"route-controller-manager-749b48f564-2qmzt\" (UID: \"e0d3261a-aea5-4017-afba-76b8775df70e\") " pod="openshift-route-controller-manager/route-controller-manager-749b48f564-2qmzt" Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.998185 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0d3261a-aea5-4017-afba-76b8775df70e-serving-cert\") pod \"route-controller-manager-749b48f564-2qmzt\" (UID: \"e0d3261a-aea5-4017-afba-76b8775df70e\") " pod="openshift-route-controller-manager/route-controller-manager-749b48f564-2qmzt" Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.998476 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgqqd\" (UniqueName: \"kubernetes.io/projected/e0d3261a-aea5-4017-afba-76b8775df70e-kube-api-access-vgqqd\") pod \"route-controller-manager-749b48f564-2qmzt\" 
(UID: \"e0d3261a-aea5-4017-afba-76b8775df70e\") " pod="openshift-route-controller-manager/route-controller-manager-749b48f564-2qmzt" Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.998509 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0d3261a-aea5-4017-afba-76b8775df70e-client-ca\") pod \"route-controller-manager-749b48f564-2qmzt\" (UID: \"e0d3261a-aea5-4017-afba-76b8775df70e\") " pod="openshift-route-controller-manager/route-controller-manager-749b48f564-2qmzt" Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.998532 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6-client-ca" (OuterVolumeSpecName: "client-ca") pod "4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6" (UID: "4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.998558 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e765b9fe-57ee-4233-8390-cdac364e3996-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.998571 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2s2kg\" (UniqueName: \"kubernetes.io/projected/e765b9fe-57ee-4233-8390-cdac364e3996-kube-api-access-2s2kg\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.998582 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e765b9fe-57ee-4233-8390-cdac364e3996-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.998590 4821 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/e765b9fe-57ee-4233-8390-cdac364e3996-client-ca\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.999358 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0d3261a-aea5-4017-afba-76b8775df70e-client-ca\") pod \"route-controller-manager-749b48f564-2qmzt\" (UID: \"e0d3261a-aea5-4017-afba-76b8775df70e\") " pod="openshift-route-controller-manager/route-controller-manager-749b48f564-2qmzt" Mar 09 18:27:52 crc kubenswrapper[4821]: I0309 18:27:52.999386 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6" (UID: "4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.000501 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0d3261a-aea5-4017-afba-76b8775df70e-config\") pod \"route-controller-manager-749b48f564-2qmzt\" (UID: \"e0d3261a-aea5-4017-afba-76b8775df70e\") " pod="openshift-route-controller-manager/route-controller-manager-749b48f564-2qmzt" Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.002606 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6-config" (OuterVolumeSpecName: "config") pod "4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6" (UID: "4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.004584 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6-kube-api-access-bhthj" (OuterVolumeSpecName: "kube-api-access-bhthj") pod "4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6" (UID: "4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6"). InnerVolumeSpecName "kube-api-access-bhthj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.005029 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0d3261a-aea5-4017-afba-76b8775df70e-serving-cert\") pod \"route-controller-manager-749b48f564-2qmzt\" (UID: \"e0d3261a-aea5-4017-afba-76b8775df70e\") " pod="openshift-route-controller-manager/route-controller-manager-749b48f564-2qmzt" Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.005432 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6" (UID: "4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.016070 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgqqd\" (UniqueName: \"kubernetes.io/projected/e0d3261a-aea5-4017-afba-76b8775df70e-kube-api-access-vgqqd\") pod \"route-controller-manager-749b48f564-2qmzt\" (UID: \"e0d3261a-aea5-4017-afba-76b8775df70e\") " pod="openshift-route-controller-manager/route-controller-manager-749b48f564-2qmzt" Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.099346 4821 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6-client-ca\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.099380 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.099392 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.099403 4821 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.099417 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhthj\" (UniqueName: \"kubernetes.io/projected/4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6-kube-api-access-bhthj\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.186126 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"f6987aec-8ee0-4026-99bb-a30b76e2b131","Type":"ContainerStarted","Data":"f6dd9fd1292e3f0c6e64bad6217bb82cd6f972ab3f687b7fa171febb3fc6777e"} Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.186186 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f6987aec-8ee0-4026-99bb-a30b76e2b131","Type":"ContainerStarted","Data":"8660136be2adefef7868461b33694b03346c908789cefabf5a0909ea04281084"} Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.192638 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9b4545b7f-skgzb" event={"ID":"e765b9fe-57ee-4233-8390-cdac364e3996","Type":"ContainerDied","Data":"a52f75d80a2410cd38a939e8c435efa9a3e31507d218b84112396de2d8d27195"} Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.192704 4821 scope.go:117] "RemoveContainer" containerID="657fff38264c5dd01213c37e50b95f87798741756cfe167c124cd28c4ae603ba" Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.192763 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9b4545b7f-skgzb" Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.202260 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"983e951f7636886a577589c25dca0d5e434145126b2fe338e368e319bde36ce4"} Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.202307 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"74d23b0b39a07df3dbb1aa7454d17db20fa2e06e1d076f3ff8f0284a18fa1dde"} Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.202619 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=3.202600541 podStartE2EDuration="3.202600541s" podCreationTimestamp="2026-03-09 18:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:53.202293693 +0000 UTC m=+210.363669559" watchObservedRunningTime="2026-03-09 18:27:53.202600541 +0000 UTC m=+210.363976407" Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.209717 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"119e4e139b1d21737af5708ad5662aeaf5fb30550c2c97bb56c73180beb117d1"} Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.209763 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"4ea2fd9f91e5f4c9806396d588621098245fb2eb5b429f0ca245af550b1cb612"}
Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.212717 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsq8f" event={"ID":"132d5224-2c4a-4b22-9e2f-b50b98e3b693","Type":"ContainerStarted","Data":"a959fd964c95d575bc8de56dfa58e33cec163f220afab1d11923747c61ac1025"}
Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.224398 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2h8qw" event={"ID":"abf94109-7b6a-4e4f-a178-42e7d6fc45e0","Type":"ContainerStarted","Data":"7d8d0264b15f150e64278b8ff2dcf6f5312c3c93592f008ead630fff9b028490"}
Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.224987 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-749b48f564-2qmzt"
Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.236585 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d7cc4710b20664f1ed8188e47263668e989faefbfc5255b8cdf6b1ffd88b5114"}
Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.237259 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.239843 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5npmr" event={"ID":"1ff9182f-57eb-4efa-b7c3-ae63d66457df","Type":"ContainerStarted","Data":"fce674e345edd4d53080b2deda98e14f2fd4acb9e4b04b4e33ede35073c3390a"}
Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.246805 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56d88bb98-2zgqh" event={"ID":"4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6","Type":"ContainerDied","Data":"2e16a02ee85600d8679d71fa3b229a1a9bcb7ac71330623b0cab7ea650dfadef"}
Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.246854 4821 scope.go:117] "RemoveContainer" containerID="ff9c430db40bfc6a0e23cba6f079956040557034f12e1e8e17ad82bd900c63e6"
Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.246976 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56d88bb98-2zgqh"
Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.254775 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nsq8f" podStartSLOduration=28.293115648 podStartE2EDuration="39.254754282s" podCreationTimestamp="2026-03-09 18:27:14 +0000 UTC" firstStartedPulling="2026-03-09 18:27:41.999486077 +0000 UTC m=+199.160861933" lastFinishedPulling="2026-03-09 18:27:52.961124711 +0000 UTC m=+210.122500567" observedRunningTime="2026-03-09 18:27:53.249640414 +0000 UTC m=+210.411016270" watchObservedRunningTime="2026-03-09 18:27:53.254754282 +0000 UTC m=+210.416130138"
Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.294053 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9b4545b7f-skgzb"]
Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.302058 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9b4545b7f-skgzb"]
Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.385828 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5npmr" podStartSLOduration=27.457708061 podStartE2EDuration="38.385805864s" podCreationTimestamp="2026-03-09 18:27:15 +0000 UTC" firstStartedPulling="2026-03-09 18:27:42.00160216 +0000 UTC m=+199.162978016" lastFinishedPulling="2026-03-09 18:27:52.929699963 +0000 UTC m=+210.091075819" observedRunningTime="2026-03-09 18:27:53.383590598 +0000 UTC m=+210.544966454" watchObservedRunningTime="2026-03-09 18:27:53.385805864 +0000 UTC m=+210.547181720"
Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.404712 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56d88bb98-2zgqh"]
Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.420508 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-56d88bb98-2zgqh"]
Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.492538 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-749b48f564-2qmzt"]
Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.540972 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kpr7q"
Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.541014 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kpr7q"
Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.564463 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6" path="/var/lib/kubelet/pods/4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6/volumes"
Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.564980 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e765b9fe-57ee-4233-8390-cdac364e3996" path="/var/lib/kubelet/pods/e765b9fe-57ee-4233-8390-cdac364e3996/volumes"
Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.704939 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hzkk5"
Mar 09 18:27:53 crc kubenswrapper[4821]: I0309 18:27:53.704992 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hzkk5"
Mar 09 18:27:54 crc kubenswrapper[4821]: I0309 18:27:54.143475 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hzkk5"
Mar 09 18:27:54 crc kubenswrapper[4821]: I0309 18:27:54.148893 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kpr7q"
Mar 09 18:27:54 crc kubenswrapper[4821]: I0309 18:27:54.254512 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"347d126ade4713a8d54219c7cc2cb45b817d9bbee9a31000af0eaf19beb8f7ec"}
Mar 09 18:27:54 crc kubenswrapper[4821]: I0309 18:27:54.256502 4821 generic.go:334] "Generic (PLEG): container finished" podID="abf94109-7b6a-4e4f-a178-42e7d6fc45e0" containerID="7d8d0264b15f150e64278b8ff2dcf6f5312c3c93592f008ead630fff9b028490" exitCode=0
Mar 09 18:27:54 crc kubenswrapper[4821]: I0309 18:27:54.256638 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2h8qw" event={"ID":"abf94109-7b6a-4e4f-a178-42e7d6fc45e0","Type":"ContainerDied","Data":"7d8d0264b15f150e64278b8ff2dcf6f5312c3c93592f008ead630fff9b028490"}
Mar 09 18:27:54 crc kubenswrapper[4821]: I0309 18:27:54.267536 4821 generic.go:334] "Generic (PLEG): container finished" podID="faa3533b-267b-44a9-b949-af82368bf7e3" containerID="5913195ac00cb13d6463da260020bebfca650951cb89570e5e8ceb3ba5b69734" exitCode=0
Mar 09 18:27:54 crc kubenswrapper[4821]: I0309 18:27:54.267645 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbkl7" event={"ID":"faa3533b-267b-44a9-b949-af82368bf7e3","Type":"ContainerDied","Data":"5913195ac00cb13d6463da260020bebfca650951cb89570e5e8ceb3ba5b69734"}
Mar 09 18:27:54 crc kubenswrapper[4821]: I0309 18:27:54.274202 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-749b48f564-2qmzt" event={"ID":"e0d3261a-aea5-4017-afba-76b8775df70e","Type":"ContainerStarted","Data":"a35154d27d9ba81553158aa518b754ad83f7392acc5ba8938b6d093a4163554d"}
Mar 09 18:27:54 crc kubenswrapper[4821]: I0309 18:27:54.334588 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hzkk5"
Mar 09 18:27:54 crc kubenswrapper[4821]: I0309 18:27:54.348250 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kpr7q"
Mar 09 18:27:54 crc kubenswrapper[4821]: I0309 18:27:54.386045 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tnl4x"]
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.278944 4821 generic.go:334] "Generic (PLEG): container finished" podID="f6987aec-8ee0-4026-99bb-a30b76e2b131" containerID="f6dd9fd1292e3f0c6e64bad6217bb82cd6f972ab3f687b7fa171febb3fc6777e" exitCode=0
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.279045 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f6987aec-8ee0-4026-99bb-a30b76e2b131","Type":"ContainerDied","Data":"f6dd9fd1292e3f0c6e64bad6217bb82cd6f972ab3f687b7fa171febb3fc6777e"}
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.284044 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-749b48f564-2qmzt" event={"ID":"e0d3261a-aea5-4017-afba-76b8775df70e","Type":"ContainerStarted","Data":"ba30d04470aea9b45de9802968039c031deac0fb5bdde31d34518259bef6f341"}
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.301515 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nsq8f"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.302573 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nsq8f"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.319681 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-749b48f564-2qmzt" podStartSLOduration=7.319658862 podStartE2EDuration="7.319658862s" podCreationTimestamp="2026-03-09 18:27:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:55.314649495 +0000 UTC m=+212.476025351" watchObservedRunningTime="2026-03-09 18:27:55.319658862 +0000 UTC m=+212.481034718"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.544770 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5cc864f55c-rpscs"]
Mar 09 18:27:55 crc kubenswrapper[4821]: E0309 18:27:55.545022 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6" containerName="controller-manager"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.545065 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6" containerName="controller-manager"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.545187 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f2d8c4d-e6f1-4a2f-a6e5-21b5faa408b6" containerName="controller-manager"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.545636 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cc864f55c-rpscs"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.548199 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.548401 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.548444 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.549962 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.550075 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.550176 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.559214 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.575768 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cc864f55c-rpscs"]
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.639389 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75ac241a-633f-43fc-9d47-a05cba7054a1-config\") pod \"controller-manager-5cc864f55c-rpscs\" (UID: \"75ac241a-633f-43fc-9d47-a05cba7054a1\") " pod="openshift-controller-manager/controller-manager-5cc864f55c-rpscs"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.639738 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75ac241a-633f-43fc-9d47-a05cba7054a1-proxy-ca-bundles\") pod \"controller-manager-5cc864f55c-rpscs\" (UID: \"75ac241a-633f-43fc-9d47-a05cba7054a1\") " pod="openshift-controller-manager/controller-manager-5cc864f55c-rpscs"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.639780 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqsl5\" (UniqueName: \"kubernetes.io/projected/75ac241a-633f-43fc-9d47-a05cba7054a1-kube-api-access-kqsl5\") pod \"controller-manager-5cc864f55c-rpscs\" (UID: \"75ac241a-633f-43fc-9d47-a05cba7054a1\") " pod="openshift-controller-manager/controller-manager-5cc864f55c-rpscs"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.639856 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75ac241a-633f-43fc-9d47-a05cba7054a1-serving-cert\") pod \"controller-manager-5cc864f55c-rpscs\" (UID: \"75ac241a-633f-43fc-9d47-a05cba7054a1\") " pod="openshift-controller-manager/controller-manager-5cc864f55c-rpscs"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.639881 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75ac241a-633f-43fc-9d47-a05cba7054a1-client-ca\") pod \"controller-manager-5cc864f55c-rpscs\" (UID: \"75ac241a-633f-43fc-9d47-a05cba7054a1\") " pod="openshift-controller-manager/controller-manager-5cc864f55c-rpscs"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.720593 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5npmr"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.720659 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5npmr"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.740643 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75ac241a-633f-43fc-9d47-a05cba7054a1-config\") pod \"controller-manager-5cc864f55c-rpscs\" (UID: \"75ac241a-633f-43fc-9d47-a05cba7054a1\") " pod="openshift-controller-manager/controller-manager-5cc864f55c-rpscs"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.740687 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75ac241a-633f-43fc-9d47-a05cba7054a1-proxy-ca-bundles\") pod \"controller-manager-5cc864f55c-rpscs\" (UID: \"75ac241a-633f-43fc-9d47-a05cba7054a1\") " pod="openshift-controller-manager/controller-manager-5cc864f55c-rpscs"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.740710 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqsl5\" (UniqueName: \"kubernetes.io/projected/75ac241a-633f-43fc-9d47-a05cba7054a1-kube-api-access-kqsl5\") pod \"controller-manager-5cc864f55c-rpscs\" (UID: \"75ac241a-633f-43fc-9d47-a05cba7054a1\") " pod="openshift-controller-manager/controller-manager-5cc864f55c-rpscs"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.740746 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75ac241a-633f-43fc-9d47-a05cba7054a1-serving-cert\") pod \"controller-manager-5cc864f55c-rpscs\" (UID: \"75ac241a-633f-43fc-9d47-a05cba7054a1\") " pod="openshift-controller-manager/controller-manager-5cc864f55c-rpscs"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.740766 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75ac241a-633f-43fc-9d47-a05cba7054a1-client-ca\") pod \"controller-manager-5cc864f55c-rpscs\" (UID: \"75ac241a-633f-43fc-9d47-a05cba7054a1\") " pod="openshift-controller-manager/controller-manager-5cc864f55c-rpscs"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.741846 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75ac241a-633f-43fc-9d47-a05cba7054a1-client-ca\") pod \"controller-manager-5cc864f55c-rpscs\" (UID: \"75ac241a-633f-43fc-9d47-a05cba7054a1\") " pod="openshift-controller-manager/controller-manager-5cc864f55c-rpscs"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.742215 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75ac241a-633f-43fc-9d47-a05cba7054a1-proxy-ca-bundles\") pod \"controller-manager-5cc864f55c-rpscs\" (UID: \"75ac241a-633f-43fc-9d47-a05cba7054a1\") " pod="openshift-controller-manager/controller-manager-5cc864f55c-rpscs"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.742581 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75ac241a-633f-43fc-9d47-a05cba7054a1-config\") pod \"controller-manager-5cc864f55c-rpscs\" (UID: \"75ac241a-633f-43fc-9d47-a05cba7054a1\") " pod="openshift-controller-manager/controller-manager-5cc864f55c-rpscs"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.751539 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75ac241a-633f-43fc-9d47-a05cba7054a1-serving-cert\") pod \"controller-manager-5cc864f55c-rpscs\" (UID: \"75ac241a-633f-43fc-9d47-a05cba7054a1\") " pod="openshift-controller-manager/controller-manager-5cc864f55c-rpscs"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.761710 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5npmr"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.775979 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqsl5\" (UniqueName: \"kubernetes.io/projected/75ac241a-633f-43fc-9d47-a05cba7054a1-kube-api-access-kqsl5\") pod \"controller-manager-5cc864f55c-rpscs\" (UID: \"75ac241a-633f-43fc-9d47-a05cba7054a1\") " pod="openshift-controller-manager/controller-manager-5cc864f55c-rpscs"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.861187 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cc864f55c-rpscs"
Mar 09 18:27:55 crc kubenswrapper[4821]: I0309 18:27:55.862341 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kpr7q"]
Mar 09 18:27:56 crc kubenswrapper[4821]: I0309 18:27:56.161314 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Mar 09 18:27:56 crc kubenswrapper[4821]: I0309 18:27:56.162237 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Mar 09 18:27:56 crc kubenswrapper[4821]: I0309 18:27:56.181743 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Mar 09 18:27:56 crc kubenswrapper[4821]: I0309 18:27:56.246019 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcdc187f-6e3b-442c-80a1-e404ee5ebb9e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"dcdc187f-6e3b-442c-80a1-e404ee5ebb9e\") " pod="openshift-kube-apiserver/installer-9-crc"
Mar 09 18:27:56 crc kubenswrapper[4821]: I0309 18:27:56.246056 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dcdc187f-6e3b-442c-80a1-e404ee5ebb9e-var-lock\") pod \"installer-9-crc\" (UID: \"dcdc187f-6e3b-442c-80a1-e404ee5ebb9e\") " pod="openshift-kube-apiserver/installer-9-crc"
Mar 09 18:27:56 crc kubenswrapper[4821]: I0309 18:27:56.246102 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcdc187f-6e3b-442c-80a1-e404ee5ebb9e-kube-api-access\") pod \"installer-9-crc\" (UID: \"dcdc187f-6e3b-442c-80a1-e404ee5ebb9e\") " pod="openshift-kube-apiserver/installer-9-crc"
Mar 09 18:27:56 crc kubenswrapper[4821]: I0309 18:27:56.290899 4821 generic.go:334] "Generic (PLEG): container finished" podID="07a1db8f-6912-4ff8-9943-24c334031dfb" containerID="25d0aba4e52a42db77d899b2e7643b0a5d1273a72079b5d03e364bf9e3db4813" exitCode=0
Mar 09 18:27:56 crc kubenswrapper[4821]: I0309 18:27:56.291111 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kpr7q" podUID="902f7680-4f21-43d9-9ca1-16e5746556a9" containerName="registry-server" containerID="cri-o://2071b8a694c9ff6e9a3c689db0c5187ac216cc6f577b316838e69c6a56b4ecb0" gracePeriod=2
Mar 09 18:27:56 crc kubenswrapper[4821]: I0309 18:27:56.291368 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nk4bg" event={"ID":"07a1db8f-6912-4ff8-9943-24c334031dfb","Type":"ContainerDied","Data":"25d0aba4e52a42db77d899b2e7643b0a5d1273a72079b5d03e364bf9e3db4813"}
Mar 09 18:27:56 crc kubenswrapper[4821]: I0309 18:27:56.294932 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-749b48f564-2qmzt"
Mar 09 18:27:56 crc kubenswrapper[4821]: I0309 18:27:56.299370 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-749b48f564-2qmzt"
Mar 09 18:27:56 crc kubenswrapper[4821]: I0309 18:27:56.301819 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cc864f55c-rpscs"]
Mar 09 18:27:56 crc kubenswrapper[4821]: W0309 18:27:56.313582 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75ac241a_633f_43fc_9d47_a05cba7054a1.slice/crio-05242cbc617b6f2082618b077a2b88d7cb6e9adbf20a261e48c3cb614abb91ad WatchSource:0}: Error finding container 05242cbc617b6f2082618b077a2b88d7cb6e9adbf20a261e48c3cb614abb91ad: Status 404 returned error can't find the container with id 05242cbc617b6f2082618b077a2b88d7cb6e9adbf20a261e48c3cb614abb91ad
Mar 09 18:27:56 crc kubenswrapper[4821]: I0309 18:27:56.353591 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcdc187f-6e3b-442c-80a1-e404ee5ebb9e-kube-api-access\") pod \"installer-9-crc\" (UID: \"dcdc187f-6e3b-442c-80a1-e404ee5ebb9e\") " pod="openshift-kube-apiserver/installer-9-crc"
Mar 09 18:27:56 crc kubenswrapper[4821]: I0309 18:27:56.353720 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcdc187f-6e3b-442c-80a1-e404ee5ebb9e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"dcdc187f-6e3b-442c-80a1-e404ee5ebb9e\") " pod="openshift-kube-apiserver/installer-9-crc"
Mar 09 18:27:56 crc kubenswrapper[4821]: I0309 18:27:56.353740 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dcdc187f-6e3b-442c-80a1-e404ee5ebb9e-var-lock\") pod \"installer-9-crc\" (UID: \"dcdc187f-6e3b-442c-80a1-e404ee5ebb9e\") " pod="openshift-kube-apiserver/installer-9-crc"
Mar 09 18:27:56 crc kubenswrapper[4821]: I0309 18:27:56.354976 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcdc187f-6e3b-442c-80a1-e404ee5ebb9e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"dcdc187f-6e3b-442c-80a1-e404ee5ebb9e\") " pod="openshift-kube-apiserver/installer-9-crc"
Mar 09 18:27:56 crc kubenswrapper[4821]: I0309 18:27:56.355010 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dcdc187f-6e3b-442c-80a1-e404ee5ebb9e-var-lock\") pod \"installer-9-crc\" (UID: \"dcdc187f-6e3b-442c-80a1-e404ee5ebb9e\") " pod="openshift-kube-apiserver/installer-9-crc"
Mar 09 18:27:56 crc kubenswrapper[4821]: I0309 18:27:56.357813 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-nsq8f" podUID="132d5224-2c4a-4b22-9e2f-b50b98e3b693" containerName="registry-server" probeResult="failure" output=<
Mar 09 18:27:56 crc kubenswrapper[4821]: timeout: failed to connect service ":50051" within 1s
Mar 09 18:27:56 crc kubenswrapper[4821]: >
Mar 09 18:27:56 crc kubenswrapper[4821]: I0309 18:27:56.382891 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcdc187f-6e3b-442c-80a1-e404ee5ebb9e-kube-api-access\") pod \"installer-9-crc\" (UID: \"dcdc187f-6e3b-442c-80a1-e404ee5ebb9e\") " pod="openshift-kube-apiserver/installer-9-crc"
Mar 09 18:27:56 crc kubenswrapper[4821]: I0309 18:27:56.501951 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Mar 09 18:27:56 crc kubenswrapper[4821]: I0309 18:27:56.569604 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Mar 09 18:27:56 crc kubenswrapper[4821]: I0309 18:27:56.657171 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6987aec-8ee0-4026-99bb-a30b76e2b131-kubelet-dir\") pod \"f6987aec-8ee0-4026-99bb-a30b76e2b131\" (UID: \"f6987aec-8ee0-4026-99bb-a30b76e2b131\") "
Mar 09 18:27:56 crc kubenswrapper[4821]: I0309 18:27:56.657338 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6987aec-8ee0-4026-99bb-a30b76e2b131-kube-api-access\") pod \"f6987aec-8ee0-4026-99bb-a30b76e2b131\" (UID: \"f6987aec-8ee0-4026-99bb-a30b76e2b131\") "
Mar 09 18:27:56 crc kubenswrapper[4821]: I0309 18:27:56.657379 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6987aec-8ee0-4026-99bb-a30b76e2b131-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f6987aec-8ee0-4026-99bb-a30b76e2b131" (UID: "f6987aec-8ee0-4026-99bb-a30b76e2b131"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 18:27:56 crc kubenswrapper[4821]: I0309 18:27:56.657592 4821 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6987aec-8ee0-4026-99bb-a30b76e2b131-kubelet-dir\") on node \"crc\" DevicePath \"\""
Mar 09 18:27:56 crc kubenswrapper[4821]: I0309 18:27:56.662781 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6987aec-8ee0-4026-99bb-a30b76e2b131-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f6987aec-8ee0-4026-99bb-a30b76e2b131" (UID: "f6987aec-8ee0-4026-99bb-a30b76e2b131"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:27:56 crc kubenswrapper[4821]: I0309 18:27:56.759037 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6987aec-8ee0-4026-99bb-a30b76e2b131-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 09 18:27:57 crc kubenswrapper[4821]: I0309 18:27:57.261570 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hzkk5"]
Mar 09 18:27:57 crc kubenswrapper[4821]: I0309 18:27:57.298051 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f6987aec-8ee0-4026-99bb-a30b76e2b131","Type":"ContainerDied","Data":"8660136be2adefef7868461b33694b03346c908789cefabf5a0909ea04281084"}
Mar 09 18:27:57 crc kubenswrapper[4821]: I0309 18:27:57.298091 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8660136be2adefef7868461b33694b03346c908789cefabf5a0909ea04281084"
Mar 09 18:27:57 crc kubenswrapper[4821]: I0309 18:27:57.298145 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Mar 09 18:27:57 crc kubenswrapper[4821]: I0309 18:27:57.308193 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbkl7" event={"ID":"faa3533b-267b-44a9-b949-af82368bf7e3","Type":"ContainerStarted","Data":"32cb1b42900efd4870d9abf391bd8e8d28e1c91c1a91e62712f76444ec838862"}
Mar 09 18:27:57 crc kubenswrapper[4821]: I0309 18:27:57.310315 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cc864f55c-rpscs" event={"ID":"75ac241a-633f-43fc-9d47-a05cba7054a1","Type":"ContainerStarted","Data":"030a114e542bb8a3857290453e3697521411b00702397c220c72a698d67371b6"}
Mar 09 18:27:57 crc kubenswrapper[4821]: I0309 18:27:57.310639 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cc864f55c-rpscs" event={"ID":"75ac241a-633f-43fc-9d47-a05cba7054a1","Type":"ContainerStarted","Data":"05242cbc617b6f2082618b077a2b88d7cb6e9adbf20a261e48c3cb614abb91ad"}
Mar 09 18:27:57 crc kubenswrapper[4821]: I0309 18:27:57.310453 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hzkk5" podUID="423c9815-b133-45cf-bc0c-3f6291e1106b" containerName="registry-server" containerID="cri-o://095021bf84ac73ae16adc2720a06221e90ee4d86a09ca03b70851d881fd5986b" gracePeriod=2
Mar 09 18:27:57 crc kubenswrapper[4821]: I0309 18:27:57.327578 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pbkl7" podStartSLOduration=27.349668049 podStartE2EDuration="41.327557426s" podCreationTimestamp="2026-03-09 18:27:16 +0000 UTC" firstStartedPulling="2026-03-09 18:27:41.990601751 +0000 UTC m=+199.151977607" lastFinishedPulling="2026-03-09 18:27:55.968491128 +0000 UTC m=+213.129866984" observedRunningTime="2026-03-09 18:27:57.324875258 +0000 UTC m=+214.486251124" watchObservedRunningTime="2026-03-09 18:27:57.327557426 +0000 UTC m=+214.488933282"
Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.112664 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Mar 09 18:27:58 crc kubenswrapper[4821]: W0309 18:27:58.215764 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poddcdc187f_6e3b_442c_80a1_e404ee5ebb9e.slice/crio-29e0ea09e04388cefcffb1b33bc8f9aa36c53c591aa7d96c8b21aa75d8930a2c WatchSource:0}: Error finding container 29e0ea09e04388cefcffb1b33bc8f9aa36c53c591aa7d96c8b21aa75d8930a2c: Status 404 returned error can't find the container with id 29e0ea09e04388cefcffb1b33bc8f9aa36c53c591aa7d96c8b21aa75d8930a2c
Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.279925 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kpr7q"
Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.285706 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hzkk5"
Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.320989 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"dcdc187f-6e3b-442c-80a1-e404ee5ebb9e","Type":"ContainerStarted","Data":"29e0ea09e04388cefcffb1b33bc8f9aa36c53c591aa7d96c8b21aa75d8930a2c"}
Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.323472 4821 generic.go:334] "Generic (PLEG): container finished" podID="902f7680-4f21-43d9-9ca1-16e5746556a9" containerID="2071b8a694c9ff6e9a3c689db0c5187ac216cc6f577b316838e69c6a56b4ecb0" exitCode=0
Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.323521 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kpr7q" event={"ID":"902f7680-4f21-43d9-9ca1-16e5746556a9","Type":"ContainerDied","Data":"2071b8a694c9ff6e9a3c689db0c5187ac216cc6f577b316838e69c6a56b4ecb0"}
Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.323574 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kpr7q" event={"ID":"902f7680-4f21-43d9-9ca1-16e5746556a9","Type":"ContainerDied","Data":"d3fe2292d0664f73b1e880a97e12b57b84ed5d6a3204167411877b8cb86bbd1c"}
Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.323592 4821 scope.go:117] "RemoveContainer" containerID="2071b8a694c9ff6e9a3c689db0c5187ac216cc6f577b316838e69c6a56b4ecb0"
Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.323650 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kpr7q"
Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.327478 4821 generic.go:334] "Generic (PLEG): container finished" podID="423c9815-b133-45cf-bc0c-3f6291e1106b" containerID="095021bf84ac73ae16adc2720a06221e90ee4d86a09ca03b70851d881fd5986b" exitCode=0
Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.327569 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hzkk5"
Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.327660 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzkk5" event={"ID":"423c9815-b133-45cf-bc0c-3f6291e1106b","Type":"ContainerDied","Data":"095021bf84ac73ae16adc2720a06221e90ee4d86a09ca03b70851d881fd5986b"}
Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.327688 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hzkk5" event={"ID":"423c9815-b133-45cf-bc0c-3f6291e1106b","Type":"ContainerDied","Data":"3dc54da9f889046a75ab954289b193c3cdc600bccb9b3e7e9ac119baef89a18f"}
Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.328179 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5cc864f55c-rpscs"
Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.335341 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5cc864f55c-rpscs"
Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.346313 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5cc864f55c-rpscs" podStartSLOduration=10.346291598 podStartE2EDuration="10.346291598s" podCreationTimestamp="2026-03-09 18:27:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:58.340920162 +0000 UTC m=+215.502296018" watchObservedRunningTime="2026-03-09 18:27:58.346291598 +0000 UTC m=+215.507667454"
Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.377725 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/902f7680-4f21-43d9-9ca1-16e5746556a9-catalog-content\") pod \"902f7680-4f21-43d9-9ca1-16e5746556a9\" (UID: \"902f7680-4f21-43d9-9ca1-16e5746556a9\") "
Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.377766 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtsrd\" (UniqueName: \"kubernetes.io/projected/902f7680-4f21-43d9-9ca1-16e5746556a9-kube-api-access-xtsrd\") pod \"902f7680-4f21-43d9-9ca1-16e5746556a9\" (UID: \"902f7680-4f21-43d9-9ca1-16e5746556a9\") "
Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.377806 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lq94p\" (UniqueName: \"kubernetes.io/projected/423c9815-b133-45cf-bc0c-3f6291e1106b-kube-api-access-lq94p\") pod \"423c9815-b133-45cf-bc0c-3f6291e1106b\" (UID: \"423c9815-b133-45cf-bc0c-3f6291e1106b\") "
Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.377847 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/423c9815-b133-45cf-bc0c-3f6291e1106b-utilities\") pod \"423c9815-b133-45cf-bc0c-3f6291e1106b\" (UID: \"423c9815-b133-45cf-bc0c-3f6291e1106b\") "
Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.377873 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/902f7680-4f21-43d9-9ca1-16e5746556a9-utilities\") pod \"902f7680-4f21-43d9-9ca1-16e5746556a9\" (UID: \"902f7680-4f21-43d9-9ca1-16e5746556a9\") "
Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.377896 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/423c9815-b133-45cf-bc0c-3f6291e1106b-catalog-content\") pod \"423c9815-b133-45cf-bc0c-3f6291e1106b\" (UID: \"423c9815-b133-45cf-bc0c-3f6291e1106b\") "
Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.381279 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/423c9815-b133-45cf-bc0c-3f6291e1106b-utilities" (OuterVolumeSpecName: "utilities") pod "423c9815-b133-45cf-bc0c-3f6291e1106b" (UID: "423c9815-b133-45cf-bc0c-3f6291e1106b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.382065 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/902f7680-4f21-43d9-9ca1-16e5746556a9-utilities" (OuterVolumeSpecName: "utilities") pod "902f7680-4f21-43d9-9ca1-16e5746556a9" (UID: "902f7680-4f21-43d9-9ca1-16e5746556a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.386592 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/902f7680-4f21-43d9-9ca1-16e5746556a9-kube-api-access-xtsrd" (OuterVolumeSpecName: "kube-api-access-xtsrd") pod "902f7680-4f21-43d9-9ca1-16e5746556a9" (UID: "902f7680-4f21-43d9-9ca1-16e5746556a9"). InnerVolumeSpecName "kube-api-access-xtsrd".
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.387336 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/423c9815-b133-45cf-bc0c-3f6291e1106b-kube-api-access-lq94p" (OuterVolumeSpecName: "kube-api-access-lq94p") pod "423c9815-b133-45cf-bc0c-3f6291e1106b" (UID: "423c9815-b133-45cf-bc0c-3f6291e1106b"). InnerVolumeSpecName "kube-api-access-lq94p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.441930 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/902f7680-4f21-43d9-9ca1-16e5746556a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "902f7680-4f21-43d9-9ca1-16e5746556a9" (UID: "902f7680-4f21-43d9-9ca1-16e5746556a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.449398 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/423c9815-b133-45cf-bc0c-3f6291e1106b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "423c9815-b133-45cf-bc0c-3f6291e1106b" (UID: "423c9815-b133-45cf-bc0c-3f6291e1106b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.474346 4821 scope.go:117] "RemoveContainer" containerID="baf0c572b11e0d45de73d944f9cfa6bc1f8efd9d37b2ff8b54be5434bc22eda5" Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.479029 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lq94p\" (UniqueName: \"kubernetes.io/projected/423c9815-b133-45cf-bc0c-3f6291e1106b-kube-api-access-lq94p\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.479070 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/423c9815-b133-45cf-bc0c-3f6291e1106b-utilities\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.479091 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/902f7680-4f21-43d9-9ca1-16e5746556a9-utilities\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.479109 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/423c9815-b133-45cf-bc0c-3f6291e1106b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.479126 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/902f7680-4f21-43d9-9ca1-16e5746556a9-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.479143 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtsrd\" (UniqueName: \"kubernetes.io/projected/902f7680-4f21-43d9-9ca1-16e5746556a9-kube-api-access-xtsrd\") on node \"crc\" DevicePath \"\"" Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.497860 4821 scope.go:117] "RemoveContainer" 
containerID="bf39a453006e0d4fd0172c6acf8b72428598d25b18f4fe39a12991a74266e90b" Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.523660 4821 scope.go:117] "RemoveContainer" containerID="2071b8a694c9ff6e9a3c689db0c5187ac216cc6f577b316838e69c6a56b4ecb0" Mar 09 18:27:58 crc kubenswrapper[4821]: E0309 18:27:58.524038 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2071b8a694c9ff6e9a3c689db0c5187ac216cc6f577b316838e69c6a56b4ecb0\": container with ID starting with 2071b8a694c9ff6e9a3c689db0c5187ac216cc6f577b316838e69c6a56b4ecb0 not found: ID does not exist" containerID="2071b8a694c9ff6e9a3c689db0c5187ac216cc6f577b316838e69c6a56b4ecb0" Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.524088 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2071b8a694c9ff6e9a3c689db0c5187ac216cc6f577b316838e69c6a56b4ecb0"} err="failed to get container status \"2071b8a694c9ff6e9a3c689db0c5187ac216cc6f577b316838e69c6a56b4ecb0\": rpc error: code = NotFound desc = could not find container \"2071b8a694c9ff6e9a3c689db0c5187ac216cc6f577b316838e69c6a56b4ecb0\": container with ID starting with 2071b8a694c9ff6e9a3c689db0c5187ac216cc6f577b316838e69c6a56b4ecb0 not found: ID does not exist" Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.524112 4821 scope.go:117] "RemoveContainer" containerID="baf0c572b11e0d45de73d944f9cfa6bc1f8efd9d37b2ff8b54be5434bc22eda5" Mar 09 18:27:58 crc kubenswrapper[4821]: E0309 18:27:58.524459 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"baf0c572b11e0d45de73d944f9cfa6bc1f8efd9d37b2ff8b54be5434bc22eda5\": container with ID starting with baf0c572b11e0d45de73d944f9cfa6bc1f8efd9d37b2ff8b54be5434bc22eda5 not found: ID does not exist" containerID="baf0c572b11e0d45de73d944f9cfa6bc1f8efd9d37b2ff8b54be5434bc22eda5" Mar 09 18:27:58 crc 
kubenswrapper[4821]: I0309 18:27:58.524491 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baf0c572b11e0d45de73d944f9cfa6bc1f8efd9d37b2ff8b54be5434bc22eda5"} err="failed to get container status \"baf0c572b11e0d45de73d944f9cfa6bc1f8efd9d37b2ff8b54be5434bc22eda5\": rpc error: code = NotFound desc = could not find container \"baf0c572b11e0d45de73d944f9cfa6bc1f8efd9d37b2ff8b54be5434bc22eda5\": container with ID starting with baf0c572b11e0d45de73d944f9cfa6bc1f8efd9d37b2ff8b54be5434bc22eda5 not found: ID does not exist" Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.524508 4821 scope.go:117] "RemoveContainer" containerID="bf39a453006e0d4fd0172c6acf8b72428598d25b18f4fe39a12991a74266e90b" Mar 09 18:27:58 crc kubenswrapper[4821]: E0309 18:27:58.524863 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf39a453006e0d4fd0172c6acf8b72428598d25b18f4fe39a12991a74266e90b\": container with ID starting with bf39a453006e0d4fd0172c6acf8b72428598d25b18f4fe39a12991a74266e90b not found: ID does not exist" containerID="bf39a453006e0d4fd0172c6acf8b72428598d25b18f4fe39a12991a74266e90b" Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.524915 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf39a453006e0d4fd0172c6acf8b72428598d25b18f4fe39a12991a74266e90b"} err="failed to get container status \"bf39a453006e0d4fd0172c6acf8b72428598d25b18f4fe39a12991a74266e90b\": rpc error: code = NotFound desc = could not find container \"bf39a453006e0d4fd0172c6acf8b72428598d25b18f4fe39a12991a74266e90b\": container with ID starting with bf39a453006e0d4fd0172c6acf8b72428598d25b18f4fe39a12991a74266e90b not found: ID does not exist" Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.524949 4821 scope.go:117] "RemoveContainer" containerID="095021bf84ac73ae16adc2720a06221e90ee4d86a09ca03b70851d881fd5986b" Mar 09 
18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.555913 4821 scope.go:117] "RemoveContainer" containerID="c5147cf6598dbd52eee56232c1c481bdb9ab91922bd423ba39b0b166de34ba51" Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.576746 4821 scope.go:117] "RemoveContainer" containerID="a82fd39106e2a9fab8d6a4d81d8a0ef8abdce6ad0d17bf0e233d18efb0f05cd4" Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.591663 4821 scope.go:117] "RemoveContainer" containerID="095021bf84ac73ae16adc2720a06221e90ee4d86a09ca03b70851d881fd5986b" Mar 09 18:27:58 crc kubenswrapper[4821]: E0309 18:27:58.592005 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"095021bf84ac73ae16adc2720a06221e90ee4d86a09ca03b70851d881fd5986b\": container with ID starting with 095021bf84ac73ae16adc2720a06221e90ee4d86a09ca03b70851d881fd5986b not found: ID does not exist" containerID="095021bf84ac73ae16adc2720a06221e90ee4d86a09ca03b70851d881fd5986b" Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.592040 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"095021bf84ac73ae16adc2720a06221e90ee4d86a09ca03b70851d881fd5986b"} err="failed to get container status \"095021bf84ac73ae16adc2720a06221e90ee4d86a09ca03b70851d881fd5986b\": rpc error: code = NotFound desc = could not find container \"095021bf84ac73ae16adc2720a06221e90ee4d86a09ca03b70851d881fd5986b\": container with ID starting with 095021bf84ac73ae16adc2720a06221e90ee4d86a09ca03b70851d881fd5986b not found: ID does not exist" Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.592062 4821 scope.go:117] "RemoveContainer" containerID="c5147cf6598dbd52eee56232c1c481bdb9ab91922bd423ba39b0b166de34ba51" Mar 09 18:27:58 crc kubenswrapper[4821]: E0309 18:27:58.592506 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c5147cf6598dbd52eee56232c1c481bdb9ab91922bd423ba39b0b166de34ba51\": container with ID starting with c5147cf6598dbd52eee56232c1c481bdb9ab91922bd423ba39b0b166de34ba51 not found: ID does not exist" containerID="c5147cf6598dbd52eee56232c1c481bdb9ab91922bd423ba39b0b166de34ba51" Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.592541 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5147cf6598dbd52eee56232c1c481bdb9ab91922bd423ba39b0b166de34ba51"} err="failed to get container status \"c5147cf6598dbd52eee56232c1c481bdb9ab91922bd423ba39b0b166de34ba51\": rpc error: code = NotFound desc = could not find container \"c5147cf6598dbd52eee56232c1c481bdb9ab91922bd423ba39b0b166de34ba51\": container with ID starting with c5147cf6598dbd52eee56232c1c481bdb9ab91922bd423ba39b0b166de34ba51 not found: ID does not exist" Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.592565 4821 scope.go:117] "RemoveContainer" containerID="a82fd39106e2a9fab8d6a4d81d8a0ef8abdce6ad0d17bf0e233d18efb0f05cd4" Mar 09 18:27:58 crc kubenswrapper[4821]: E0309 18:27:58.593030 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a82fd39106e2a9fab8d6a4d81d8a0ef8abdce6ad0d17bf0e233d18efb0f05cd4\": container with ID starting with a82fd39106e2a9fab8d6a4d81d8a0ef8abdce6ad0d17bf0e233d18efb0f05cd4 not found: ID does not exist" containerID="a82fd39106e2a9fab8d6a4d81d8a0ef8abdce6ad0d17bf0e233d18efb0f05cd4" Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.593056 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a82fd39106e2a9fab8d6a4d81d8a0ef8abdce6ad0d17bf0e233d18efb0f05cd4"} err="failed to get container status \"a82fd39106e2a9fab8d6a4d81d8a0ef8abdce6ad0d17bf0e233d18efb0f05cd4\": rpc error: code = NotFound desc = could not find container \"a82fd39106e2a9fab8d6a4d81d8a0ef8abdce6ad0d17bf0e233d18efb0f05cd4\": container with ID 
starting with a82fd39106e2a9fab8d6a4d81d8a0ef8abdce6ad0d17bf0e233d18efb0f05cd4 not found: ID does not exist" Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.650862 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kpr7q"] Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.653277 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kpr7q"] Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.685544 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hzkk5"] Mar 09 18:27:58 crc kubenswrapper[4821]: I0309 18:27:58.690609 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hzkk5"] Mar 09 18:27:59 crc kubenswrapper[4821]: I0309 18:27:59.335740 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2h8qw" event={"ID":"abf94109-7b6a-4e4f-a178-42e7d6fc45e0","Type":"ContainerStarted","Data":"6238c63ab3036cbee2bbb436181131defd84aa4cf4b6bce1a94d9f4b914b35ec"} Mar 09 18:27:59 crc kubenswrapper[4821]: I0309 18:27:59.339210 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nk4bg" event={"ID":"07a1db8f-6912-4ff8-9943-24c334031dfb","Type":"ContainerStarted","Data":"1f8a2551adf9a2ac55ee995478e47d05b3f65844e7b0819cb317bce8bb52574a"} Mar 09 18:27:59 crc kubenswrapper[4821]: I0309 18:27:59.345353 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"dcdc187f-6e3b-442c-80a1-e404ee5ebb9e","Type":"ContainerStarted","Data":"81af810d180058fde4f30bd8a77b3749ecea989c43f690bf25e1b25dc74b8eee"} Mar 09 18:27:59 crc kubenswrapper[4821]: I0309 18:27:59.356136 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sn8zk" 
event={"ID":"70ed8562-ec3e-49a0-8ccd-885eea90e9c1","Type":"ContainerStarted","Data":"5416fb9adace19d77676dd6d3d578f796a5ec056337d1863a5cbf731739e138b"} Mar 09 18:27:59 crc kubenswrapper[4821]: I0309 18:27:59.362088 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2h8qw" podStartSLOduration=28.460952741 podStartE2EDuration="44.362067665s" podCreationTimestamp="2026-03-09 18:27:15 +0000 UTC" firstStartedPulling="2026-03-09 18:27:41.948144945 +0000 UTC m=+199.109520811" lastFinishedPulling="2026-03-09 18:27:57.849259879 +0000 UTC m=+215.010635735" observedRunningTime="2026-03-09 18:27:59.359432879 +0000 UTC m=+216.520808725" watchObservedRunningTime="2026-03-09 18:27:59.362067665 +0000 UTC m=+216.523443521" Mar 09 18:27:59 crc kubenswrapper[4821]: I0309 18:27:59.404416 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nk4bg" podStartSLOduration=2.580951332 podStartE2EDuration="47.404400528s" podCreationTimestamp="2026-03-09 18:27:12 +0000 UTC" firstStartedPulling="2026-03-09 18:27:13.650258681 +0000 UTC m=+170.811634537" lastFinishedPulling="2026-03-09 18:27:58.473707847 +0000 UTC m=+215.635083733" observedRunningTime="2026-03-09 18:27:59.382479153 +0000 UTC m=+216.543855029" watchObservedRunningTime="2026-03-09 18:27:59.404400528 +0000 UTC m=+216.565776384" Mar 09 18:27:59 crc kubenswrapper[4821]: I0309 18:27:59.557672 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="423c9815-b133-45cf-bc0c-3f6291e1106b" path="/var/lib/kubelet/pods/423c9815-b133-45cf-bc0c-3f6291e1106b/volumes" Mar 09 18:27:59 crc kubenswrapper[4821]: I0309 18:27:59.558254 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="902f7680-4f21-43d9-9ca1-16e5746556a9" path="/var/lib/kubelet/pods/902f7680-4f21-43d9-9ca1-16e5746556a9/volumes" Mar 09 18:28:00 crc kubenswrapper[4821]: I0309 18:28:00.134592 4821 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=4.134574736 podStartE2EDuration="4.134574736s" podCreationTimestamp="2026-03-09 18:27:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:27:59.426564159 +0000 UTC m=+216.587940015" watchObservedRunningTime="2026-03-09 18:28:00.134574736 +0000 UTC m=+217.295950592" Mar 09 18:28:00 crc kubenswrapper[4821]: I0309 18:28:00.136049 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29551348-txx6v"] Mar 09 18:28:00 crc kubenswrapper[4821]: E0309 18:28:00.136229 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6987aec-8ee0-4026-99bb-a30b76e2b131" containerName="pruner" Mar 09 18:28:00 crc kubenswrapper[4821]: I0309 18:28:00.136245 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6987aec-8ee0-4026-99bb-a30b76e2b131" containerName="pruner" Mar 09 18:28:00 crc kubenswrapper[4821]: E0309 18:28:00.136254 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="902f7680-4f21-43d9-9ca1-16e5746556a9" containerName="extract-content" Mar 09 18:28:00 crc kubenswrapper[4821]: I0309 18:28:00.136261 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="902f7680-4f21-43d9-9ca1-16e5746556a9" containerName="extract-content" Mar 09 18:28:00 crc kubenswrapper[4821]: E0309 18:28:00.136275 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="902f7680-4f21-43d9-9ca1-16e5746556a9" containerName="registry-server" Mar 09 18:28:00 crc kubenswrapper[4821]: I0309 18:28:00.136280 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="902f7680-4f21-43d9-9ca1-16e5746556a9" containerName="registry-server" Mar 09 18:28:00 crc kubenswrapper[4821]: E0309 18:28:00.136289 4821 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="902f7680-4f21-43d9-9ca1-16e5746556a9" containerName="extract-utilities" Mar 09 18:28:00 crc kubenswrapper[4821]: I0309 18:28:00.136294 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="902f7680-4f21-43d9-9ca1-16e5746556a9" containerName="extract-utilities" Mar 09 18:28:00 crc kubenswrapper[4821]: E0309 18:28:00.136304 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="423c9815-b133-45cf-bc0c-3f6291e1106b" containerName="extract-content" Mar 09 18:28:00 crc kubenswrapper[4821]: I0309 18:28:00.136310 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="423c9815-b133-45cf-bc0c-3f6291e1106b" containerName="extract-content" Mar 09 18:28:00 crc kubenswrapper[4821]: E0309 18:28:00.136343 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="423c9815-b133-45cf-bc0c-3f6291e1106b" containerName="registry-server" Mar 09 18:28:00 crc kubenswrapper[4821]: I0309 18:28:00.136349 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="423c9815-b133-45cf-bc0c-3f6291e1106b" containerName="registry-server" Mar 09 18:28:00 crc kubenswrapper[4821]: E0309 18:28:00.136358 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="423c9815-b133-45cf-bc0c-3f6291e1106b" containerName="extract-utilities" Mar 09 18:28:00 crc kubenswrapper[4821]: I0309 18:28:00.136363 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="423c9815-b133-45cf-bc0c-3f6291e1106b" containerName="extract-utilities" Mar 09 18:28:00 crc kubenswrapper[4821]: I0309 18:28:00.136448 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="423c9815-b133-45cf-bc0c-3f6291e1106b" containerName="registry-server" Mar 09 18:28:00 crc kubenswrapper[4821]: I0309 18:28:00.136461 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="902f7680-4f21-43d9-9ca1-16e5746556a9" containerName="registry-server" Mar 09 18:28:00 crc kubenswrapper[4821]: I0309 18:28:00.136469 4821 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f6987aec-8ee0-4026-99bb-a30b76e2b131" containerName="pruner" Mar 09 18:28:00 crc kubenswrapper[4821]: I0309 18:28:00.136832 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551348-txx6v" Mar 09 18:28:00 crc kubenswrapper[4821]: I0309 18:28:00.139850 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 09 18:28:00 crc kubenswrapper[4821]: I0309 18:28:00.140085 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-mxq7c" Mar 09 18:28:00 crc kubenswrapper[4821]: I0309 18:28:00.140928 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 09 18:28:00 crc kubenswrapper[4821]: I0309 18:28:00.150064 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551348-txx6v"] Mar 09 18:28:00 crc kubenswrapper[4821]: I0309 18:28:00.205505 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-766cc\" (UniqueName: \"kubernetes.io/projected/d058c9b7-152c-49b8-9bbb-0681920dd243-kube-api-access-766cc\") pod \"auto-csr-approver-29551348-txx6v\" (UID: \"d058c9b7-152c-49b8-9bbb-0681920dd243\") " pod="openshift-infra/auto-csr-approver-29551348-txx6v" Mar 09 18:28:00 crc kubenswrapper[4821]: I0309 18:28:00.306684 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-766cc\" (UniqueName: \"kubernetes.io/projected/d058c9b7-152c-49b8-9bbb-0681920dd243-kube-api-access-766cc\") pod \"auto-csr-approver-29551348-txx6v\" (UID: \"d058c9b7-152c-49b8-9bbb-0681920dd243\") " pod="openshift-infra/auto-csr-approver-29551348-txx6v" Mar 09 18:28:00 crc kubenswrapper[4821]: I0309 18:28:00.345351 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-766cc\" (UniqueName: 
\"kubernetes.io/projected/d058c9b7-152c-49b8-9bbb-0681920dd243-kube-api-access-766cc\") pod \"auto-csr-approver-29551348-txx6v\" (UID: \"d058c9b7-152c-49b8-9bbb-0681920dd243\") " pod="openshift-infra/auto-csr-approver-29551348-txx6v" Mar 09 18:28:00 crc kubenswrapper[4821]: I0309 18:28:00.368334 4821 generic.go:334] "Generic (PLEG): container finished" podID="70ed8562-ec3e-49a0-8ccd-885eea90e9c1" containerID="5416fb9adace19d77676dd6d3d578f796a5ec056337d1863a5cbf731739e138b" exitCode=0 Mar 09 18:28:00 crc kubenswrapper[4821]: I0309 18:28:00.368936 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sn8zk" event={"ID":"70ed8562-ec3e-49a0-8ccd-885eea90e9c1","Type":"ContainerDied","Data":"5416fb9adace19d77676dd6d3d578f796a5ec056337d1863a5cbf731739e138b"} Mar 09 18:28:00 crc kubenswrapper[4821]: I0309 18:28:00.449464 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551348-txx6v" Mar 09 18:28:00 crc kubenswrapper[4821]: I0309 18:28:00.872547 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551348-txx6v"] Mar 09 18:28:00 crc kubenswrapper[4821]: W0309 18:28:00.884495 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd058c9b7_152c_49b8_9bbb_0681920dd243.slice/crio-54ad2c19ec0a28aca41819c6c753d8a9ffb627ce655de3332b53f709eb343b6e WatchSource:0}: Error finding container 54ad2c19ec0a28aca41819c6c753d8a9ffb627ce655de3332b53f709eb343b6e: Status 404 returned error can't find the container with id 54ad2c19ec0a28aca41819c6c753d8a9ffb627ce655de3332b53f709eb343b6e Mar 09 18:28:01 crc kubenswrapper[4821]: I0309 18:28:01.382109 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551348-txx6v" 
event={"ID":"d058c9b7-152c-49b8-9bbb-0681920dd243","Type":"ContainerStarted","Data":"54ad2c19ec0a28aca41819c6c753d8a9ffb627ce655de3332b53f709eb343b6e"} Mar 09 18:28:02 crc kubenswrapper[4821]: I0309 18:28:02.390775 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sn8zk" event={"ID":"70ed8562-ec3e-49a0-8ccd-885eea90e9c1","Type":"ContainerStarted","Data":"d4679044f8495b36e6b6667a3a6958878fb44b68f17fcefb12eaaf574ef27150"} Mar 09 18:28:02 crc kubenswrapper[4821]: I0309 18:28:02.413923 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sn8zk" podStartSLOduration=8.82779852 podStartE2EDuration="50.41390677s" podCreationTimestamp="2026-03-09 18:27:12 +0000 UTC" firstStartedPulling="2026-03-09 18:27:19.66119882 +0000 UTC m=+176.822574686" lastFinishedPulling="2026-03-09 18:28:01.24730707 +0000 UTC m=+218.408682936" observedRunningTime="2026-03-09 18:28:02.411984381 +0000 UTC m=+219.573360237" watchObservedRunningTime="2026-03-09 18:28:02.41390677 +0000 UTC m=+219.575282626" Mar 09 18:28:03 crc kubenswrapper[4821]: I0309 18:28:03.109273 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nk4bg" Mar 09 18:28:03 crc kubenswrapper[4821]: I0309 18:28:03.109351 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nk4bg" Mar 09 18:28:03 crc kubenswrapper[4821]: I0309 18:28:03.162983 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nk4bg" Mar 09 18:28:03 crc kubenswrapper[4821]: I0309 18:28:03.331990 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sn8zk" Mar 09 18:28:03 crc kubenswrapper[4821]: I0309 18:28:03.332077 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-sn8zk" Mar 09 18:28:03 crc kubenswrapper[4821]: I0309 18:28:03.397903 4821 generic.go:334] "Generic (PLEG): container finished" podID="d058c9b7-152c-49b8-9bbb-0681920dd243" containerID="899e6f76e7aa4815ed1827f6927d363d5354ccc7f85517dc20d197ef68ddf545" exitCode=0 Mar 09 18:28:03 crc kubenswrapper[4821]: I0309 18:28:03.398262 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551348-txx6v" event={"ID":"d058c9b7-152c-49b8-9bbb-0681920dd243","Type":"ContainerDied","Data":"899e6f76e7aa4815ed1827f6927d363d5354ccc7f85517dc20d197ef68ddf545"} Mar 09 18:28:04 crc kubenswrapper[4821]: I0309 18:28:04.397981 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-sn8zk" podUID="70ed8562-ec3e-49a0-8ccd-885eea90e9c1" containerName="registry-server" probeResult="failure" output=< Mar 09 18:28:04 crc kubenswrapper[4821]: timeout: failed to connect service ":50051" within 1s Mar 09 18:28:04 crc kubenswrapper[4821]: > Mar 09 18:28:04 crc kubenswrapper[4821]: I0309 18:28:04.797069 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551348-txx6v" Mar 09 18:28:04 crc kubenswrapper[4821]: I0309 18:28:04.892268 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-766cc\" (UniqueName: \"kubernetes.io/projected/d058c9b7-152c-49b8-9bbb-0681920dd243-kube-api-access-766cc\") pod \"d058c9b7-152c-49b8-9bbb-0681920dd243\" (UID: \"d058c9b7-152c-49b8-9bbb-0681920dd243\") " Mar 09 18:28:04 crc kubenswrapper[4821]: I0309 18:28:04.905552 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d058c9b7-152c-49b8-9bbb-0681920dd243-kube-api-access-766cc" (OuterVolumeSpecName: "kube-api-access-766cc") pod "d058c9b7-152c-49b8-9bbb-0681920dd243" (UID: "d058c9b7-152c-49b8-9bbb-0681920dd243"). 
InnerVolumeSpecName "kube-api-access-766cc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:28:04 crc kubenswrapper[4821]: I0309 18:28:04.993563 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-766cc\" (UniqueName: \"kubernetes.io/projected/d058c9b7-152c-49b8-9bbb-0681920dd243-kube-api-access-766cc\") on node \"crc\" DevicePath \"\""
Mar 09 18:28:05 crc kubenswrapper[4821]: I0309 18:28:05.365783 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nsq8f"
Mar 09 18:28:05 crc kubenswrapper[4821]: I0309 18:28:05.416745 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551348-txx6v" event={"ID":"d058c9b7-152c-49b8-9bbb-0681920dd243","Type":"ContainerDied","Data":"54ad2c19ec0a28aca41819c6c753d8a9ffb627ce655de3332b53f709eb343b6e"}
Mar 09 18:28:05 crc kubenswrapper[4821]: I0309 18:28:05.416794 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551348-txx6v"
Mar 09 18:28:05 crc kubenswrapper[4821]: I0309 18:28:05.416812 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54ad2c19ec0a28aca41819c6c753d8a9ffb627ce655de3332b53f709eb343b6e"
Mar 09 18:28:05 crc kubenswrapper[4821]: I0309 18:28:05.436847 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nsq8f"
Mar 09 18:28:05 crc kubenswrapper[4821]: I0309 18:28:05.792462 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5npmr"
Mar 09 18:28:06 crc kubenswrapper[4821]: I0309 18:28:06.315450 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2h8qw"
Mar 09 18:28:06 crc kubenswrapper[4821]: I0309 18:28:06.315514 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2h8qw"
Mar 09 18:28:06 crc kubenswrapper[4821]: I0309 18:28:06.386387 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2h8qw"
Mar 09 18:28:06 crc kubenswrapper[4821]: I0309 18:28:06.501635 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2h8qw"
Mar 09 18:28:06 crc kubenswrapper[4821]: I0309 18:28:06.510612 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pbkl7"
Mar 09 18:28:06 crc kubenswrapper[4821]: I0309 18:28:06.510676 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pbkl7"
Mar 09 18:28:06 crc kubenswrapper[4821]: I0309 18:28:06.583246 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pbkl7"
Mar 09 18:28:07 crc kubenswrapper[4821]: I0309 18:28:07.264182 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5npmr"]
Mar 09 18:28:07 crc kubenswrapper[4821]: I0309 18:28:07.264471 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5npmr" podUID="1ff9182f-57eb-4efa-b7c3-ae63d66457df" containerName="registry-server" containerID="cri-o://fce674e345edd4d53080b2deda98e14f2fd4acb9e4b04b4e33ede35073c3390a" gracePeriod=2
Mar 09 18:28:07 crc kubenswrapper[4821]: I0309 18:28:07.430444 4821 generic.go:334] "Generic (PLEG): container finished" podID="1ff9182f-57eb-4efa-b7c3-ae63d66457df" containerID="fce674e345edd4d53080b2deda98e14f2fd4acb9e4b04b4e33ede35073c3390a" exitCode=0
Mar 09 18:28:07 crc kubenswrapper[4821]: I0309 18:28:07.430528 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5npmr" event={"ID":"1ff9182f-57eb-4efa-b7c3-ae63d66457df","Type":"ContainerDied","Data":"fce674e345edd4d53080b2deda98e14f2fd4acb9e4b04b4e33ede35073c3390a"}
Mar 09 18:28:07 crc kubenswrapper[4821]: I0309 18:28:07.500799 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pbkl7"
Mar 09 18:28:07 crc kubenswrapper[4821]: I0309 18:28:07.809529 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5npmr"
Mar 09 18:28:07 crc kubenswrapper[4821]: I0309 18:28:07.933990 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff9182f-57eb-4efa-b7c3-ae63d66457df-utilities\") pod \"1ff9182f-57eb-4efa-b7c3-ae63d66457df\" (UID: \"1ff9182f-57eb-4efa-b7c3-ae63d66457df\") "
Mar 09 18:28:07 crc kubenswrapper[4821]: I0309 18:28:07.934051 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxdh6\" (UniqueName: \"kubernetes.io/projected/1ff9182f-57eb-4efa-b7c3-ae63d66457df-kube-api-access-lxdh6\") pod \"1ff9182f-57eb-4efa-b7c3-ae63d66457df\" (UID: \"1ff9182f-57eb-4efa-b7c3-ae63d66457df\") "
Mar 09 18:28:07 crc kubenswrapper[4821]: I0309 18:28:07.934079 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff9182f-57eb-4efa-b7c3-ae63d66457df-catalog-content\") pod \"1ff9182f-57eb-4efa-b7c3-ae63d66457df\" (UID: \"1ff9182f-57eb-4efa-b7c3-ae63d66457df\") "
Mar 09 18:28:07 crc kubenswrapper[4821]: I0309 18:28:07.935649 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ff9182f-57eb-4efa-b7c3-ae63d66457df-utilities" (OuterVolumeSpecName: "utilities") pod "1ff9182f-57eb-4efa-b7c3-ae63d66457df" (UID: "1ff9182f-57eb-4efa-b7c3-ae63d66457df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 18:28:07 crc kubenswrapper[4821]: I0309 18:28:07.951096 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ff9182f-57eb-4efa-b7c3-ae63d66457df-kube-api-access-lxdh6" (OuterVolumeSpecName: "kube-api-access-lxdh6") pod "1ff9182f-57eb-4efa-b7c3-ae63d66457df" (UID: "1ff9182f-57eb-4efa-b7c3-ae63d66457df"). InnerVolumeSpecName "kube-api-access-lxdh6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:28:07 crc kubenswrapper[4821]: I0309 18:28:07.975687 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ff9182f-57eb-4efa-b7c3-ae63d66457df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ff9182f-57eb-4efa-b7c3-ae63d66457df" (UID: "1ff9182f-57eb-4efa-b7c3-ae63d66457df"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 18:28:08 crc kubenswrapper[4821]: I0309 18:28:08.035597 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ff9182f-57eb-4efa-b7c3-ae63d66457df-utilities\") on node \"crc\" DevicePath \"\""
Mar 09 18:28:08 crc kubenswrapper[4821]: I0309 18:28:08.035636 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxdh6\" (UniqueName: \"kubernetes.io/projected/1ff9182f-57eb-4efa-b7c3-ae63d66457df-kube-api-access-lxdh6\") on node \"crc\" DevicePath \"\""
Mar 09 18:28:08 crc kubenswrapper[4821]: I0309 18:28:08.035649 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ff9182f-57eb-4efa-b7c3-ae63d66457df-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 09 18:28:08 crc kubenswrapper[4821]: I0309 18:28:08.441165 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5npmr"
Mar 09 18:28:08 crc kubenswrapper[4821]: I0309 18:28:08.441696 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5npmr" event={"ID":"1ff9182f-57eb-4efa-b7c3-ae63d66457df","Type":"ContainerDied","Data":"edb3a54553a03a37ca0248c59a99e7b74754fef2d58951418fa734884937c51b"}
Mar 09 18:28:08 crc kubenswrapper[4821]: I0309 18:28:08.441753 4821 scope.go:117] "RemoveContainer" containerID="fce674e345edd4d53080b2deda98e14f2fd4acb9e4b04b4e33ede35073c3390a"
Mar 09 18:28:08 crc kubenswrapper[4821]: I0309 18:28:08.467210 4821 scope.go:117] "RemoveContainer" containerID="f0b2053097b15c32518450f2985344b52abf82e4a52eddf39a9a70230c85d1f1"
Mar 09 18:28:08 crc kubenswrapper[4821]: I0309 18:28:08.514405 4821 scope.go:117] "RemoveContainer" containerID="8e8899a0cef2f57514f2cefb5c9157da54538a40f01e734434a5e7aa9ae01db1"
Mar 09 18:28:08 crc kubenswrapper[4821]: I0309 18:28:08.526998 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5npmr"]
Mar 09 18:28:08 crc kubenswrapper[4821]: I0309 18:28:08.529474 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5npmr"]
Mar 09 18:28:08 crc kubenswrapper[4821]: I0309 18:28:08.540840 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5cc864f55c-rpscs"]
Mar 09 18:28:08 crc kubenswrapper[4821]: I0309 18:28:08.541148 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5cc864f55c-rpscs" podUID="75ac241a-633f-43fc-9d47-a05cba7054a1" containerName="controller-manager" containerID="cri-o://030a114e542bb8a3857290453e3697521411b00702397c220c72a698d67371b6" gracePeriod=30
Mar 09 18:28:08 crc kubenswrapper[4821]: I0309 18:28:08.547268 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-749b48f564-2qmzt"]
Mar 09 18:28:08 crc kubenswrapper[4821]: I0309 18:28:08.552111 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-749b48f564-2qmzt" podUID="e0d3261a-aea5-4017-afba-76b8775df70e" containerName="route-controller-manager" containerID="cri-o://ba30d04470aea9b45de9802968039c031deac0fb5bdde31d34518259bef6f341" gracePeriod=30
Mar 09 18:28:08 crc kubenswrapper[4821]: I0309 18:28:08.661195 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pbkl7"]
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.059883 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-749b48f564-2qmzt"
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.149989 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0d3261a-aea5-4017-afba-76b8775df70e-config\") pod \"e0d3261a-aea5-4017-afba-76b8775df70e\" (UID: \"e0d3261a-aea5-4017-afba-76b8775df70e\") "
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.150533 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0d3261a-aea5-4017-afba-76b8775df70e-client-ca\") pod \"e0d3261a-aea5-4017-afba-76b8775df70e\" (UID: \"e0d3261a-aea5-4017-afba-76b8775df70e\") "
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.150567 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0d3261a-aea5-4017-afba-76b8775df70e-serving-cert\") pod \"e0d3261a-aea5-4017-afba-76b8775df70e\" (UID: \"e0d3261a-aea5-4017-afba-76b8775df70e\") "
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.150615 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgqqd\" (UniqueName: \"kubernetes.io/projected/e0d3261a-aea5-4017-afba-76b8775df70e-kube-api-access-vgqqd\") pod \"e0d3261a-aea5-4017-afba-76b8775df70e\" (UID: \"e0d3261a-aea5-4017-afba-76b8775df70e\") "
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.151897 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0d3261a-aea5-4017-afba-76b8775df70e-client-ca" (OuterVolumeSpecName: "client-ca") pod "e0d3261a-aea5-4017-afba-76b8775df70e" (UID: "e0d3261a-aea5-4017-afba-76b8775df70e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.152110 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0d3261a-aea5-4017-afba-76b8775df70e-config" (OuterVolumeSpecName: "config") pod "e0d3261a-aea5-4017-afba-76b8775df70e" (UID: "e0d3261a-aea5-4017-afba-76b8775df70e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.153435 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cc864f55c-rpscs"
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.154550 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0d3261a-aea5-4017-afba-76b8775df70e-kube-api-access-vgqqd" (OuterVolumeSpecName: "kube-api-access-vgqqd") pod "e0d3261a-aea5-4017-afba-76b8775df70e" (UID: "e0d3261a-aea5-4017-afba-76b8775df70e"). InnerVolumeSpecName "kube-api-access-vgqqd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.154832 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0d3261a-aea5-4017-afba-76b8775df70e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e0d3261a-aea5-4017-afba-76b8775df70e" (UID: "e0d3261a-aea5-4017-afba-76b8775df70e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.251924 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75ac241a-633f-43fc-9d47-a05cba7054a1-proxy-ca-bundles\") pod \"75ac241a-633f-43fc-9d47-a05cba7054a1\" (UID: \"75ac241a-633f-43fc-9d47-a05cba7054a1\") "
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.252018 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqsl5\" (UniqueName: \"kubernetes.io/projected/75ac241a-633f-43fc-9d47-a05cba7054a1-kube-api-access-kqsl5\") pod \"75ac241a-633f-43fc-9d47-a05cba7054a1\" (UID: \"75ac241a-633f-43fc-9d47-a05cba7054a1\") "
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.252048 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75ac241a-633f-43fc-9d47-a05cba7054a1-serving-cert\") pod \"75ac241a-633f-43fc-9d47-a05cba7054a1\" (UID: \"75ac241a-633f-43fc-9d47-a05cba7054a1\") "
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.252118 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75ac241a-633f-43fc-9d47-a05cba7054a1-client-ca\") pod \"75ac241a-633f-43fc-9d47-a05cba7054a1\" (UID: \"75ac241a-633f-43fc-9d47-a05cba7054a1\") "
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.252142 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75ac241a-633f-43fc-9d47-a05cba7054a1-config\") pod \"75ac241a-633f-43fc-9d47-a05cba7054a1\" (UID: \"75ac241a-633f-43fc-9d47-a05cba7054a1\") "
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.252452 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgqqd\" (UniqueName: \"kubernetes.io/projected/e0d3261a-aea5-4017-afba-76b8775df70e-kube-api-access-vgqqd\") on node \"crc\" DevicePath \"\""
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.252470 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0d3261a-aea5-4017-afba-76b8775df70e-config\") on node \"crc\" DevicePath \"\""
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.252483 4821 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0d3261a-aea5-4017-afba-76b8775df70e-client-ca\") on node \"crc\" DevicePath \"\""
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.252495 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0d3261a-aea5-4017-afba-76b8775df70e-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.253375 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75ac241a-633f-43fc-9d47-a05cba7054a1-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "75ac241a-633f-43fc-9d47-a05cba7054a1" (UID: "75ac241a-633f-43fc-9d47-a05cba7054a1"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.253449 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75ac241a-633f-43fc-9d47-a05cba7054a1-client-ca" (OuterVolumeSpecName: "client-ca") pod "75ac241a-633f-43fc-9d47-a05cba7054a1" (UID: "75ac241a-633f-43fc-9d47-a05cba7054a1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.253478 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75ac241a-633f-43fc-9d47-a05cba7054a1-config" (OuterVolumeSpecName: "config") pod "75ac241a-633f-43fc-9d47-a05cba7054a1" (UID: "75ac241a-633f-43fc-9d47-a05cba7054a1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.255177 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75ac241a-633f-43fc-9d47-a05cba7054a1-kube-api-access-kqsl5" (OuterVolumeSpecName: "kube-api-access-kqsl5") pod "75ac241a-633f-43fc-9d47-a05cba7054a1" (UID: "75ac241a-633f-43fc-9d47-a05cba7054a1"). InnerVolumeSpecName "kube-api-access-kqsl5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.255260 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75ac241a-633f-43fc-9d47-a05cba7054a1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "75ac241a-633f-43fc-9d47-a05cba7054a1" (UID: "75ac241a-633f-43fc-9d47-a05cba7054a1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.353736 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqsl5\" (UniqueName: \"kubernetes.io/projected/75ac241a-633f-43fc-9d47-a05cba7054a1-kube-api-access-kqsl5\") on node \"crc\" DevicePath \"\""
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.353789 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75ac241a-633f-43fc-9d47-a05cba7054a1-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.353808 4821 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75ac241a-633f-43fc-9d47-a05cba7054a1-client-ca\") on node \"crc\" DevicePath \"\""
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.353827 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75ac241a-633f-43fc-9d47-a05cba7054a1-config\") on node \"crc\" DevicePath \"\""
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.353845 4821 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75ac241a-633f-43fc-9d47-a05cba7054a1-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.450338 4821 generic.go:334] "Generic (PLEG): container finished" podID="e0d3261a-aea5-4017-afba-76b8775df70e" containerID="ba30d04470aea9b45de9802968039c031deac0fb5bdde31d34518259bef6f341" exitCode=0
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.450429 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-749b48f564-2qmzt" event={"ID":"e0d3261a-aea5-4017-afba-76b8775df70e","Type":"ContainerDied","Data":"ba30d04470aea9b45de9802968039c031deac0fb5bdde31d34518259bef6f341"}
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.450457 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-749b48f564-2qmzt"
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.450463 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-749b48f564-2qmzt" event={"ID":"e0d3261a-aea5-4017-afba-76b8775df70e","Type":"ContainerDied","Data":"a35154d27d9ba81553158aa518b754ad83f7392acc5ba8938b6d093a4163554d"}
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.450476 4821 scope.go:117] "RemoveContainer" containerID="ba30d04470aea9b45de9802968039c031deac0fb5bdde31d34518259bef6f341"
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.459697 4821 generic.go:334] "Generic (PLEG): container finished" podID="75ac241a-633f-43fc-9d47-a05cba7054a1" containerID="030a114e542bb8a3857290453e3697521411b00702397c220c72a698d67371b6" exitCode=0
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.459736 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cc864f55c-rpscs" event={"ID":"75ac241a-633f-43fc-9d47-a05cba7054a1","Type":"ContainerDied","Data":"030a114e542bb8a3857290453e3697521411b00702397c220c72a698d67371b6"}
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.459784 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cc864f55c-rpscs" event={"ID":"75ac241a-633f-43fc-9d47-a05cba7054a1","Type":"ContainerDied","Data":"05242cbc617b6f2082618b077a2b88d7cb6e9adbf20a261e48c3cb614abb91ad"}
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.459923 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pbkl7" podUID="faa3533b-267b-44a9-b949-af82368bf7e3" containerName="registry-server" containerID="cri-o://32cb1b42900efd4870d9abf391bd8e8d28e1c91c1a91e62712f76444ec838862" gracePeriod=2
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.460420 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cc864f55c-rpscs"
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.498066 4821 scope.go:117] "RemoveContainer" containerID="ba30d04470aea9b45de9802968039c031deac0fb5bdde31d34518259bef6f341"
Mar 09 18:28:09 crc kubenswrapper[4821]: E0309 18:28:09.498952 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba30d04470aea9b45de9802968039c031deac0fb5bdde31d34518259bef6f341\": container with ID starting with ba30d04470aea9b45de9802968039c031deac0fb5bdde31d34518259bef6f341 not found: ID does not exist" containerID="ba30d04470aea9b45de9802968039c031deac0fb5bdde31d34518259bef6f341"
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.499068 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba30d04470aea9b45de9802968039c031deac0fb5bdde31d34518259bef6f341"} err="failed to get container status \"ba30d04470aea9b45de9802968039c031deac0fb5bdde31d34518259bef6f341\": rpc error: code = NotFound desc = could not find container \"ba30d04470aea9b45de9802968039c031deac0fb5bdde31d34518259bef6f341\": container with ID starting with ba30d04470aea9b45de9802968039c031deac0fb5bdde31d34518259bef6f341 not found: ID does not exist"
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.499182 4821 scope.go:117] "RemoveContainer" containerID="030a114e542bb8a3857290453e3697521411b00702397c220c72a698d67371b6"
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.503707 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-749b48f564-2qmzt"]
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.507451 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-749b48f564-2qmzt"]
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.558228 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ff9182f-57eb-4efa-b7c3-ae63d66457df" path="/var/lib/kubelet/pods/1ff9182f-57eb-4efa-b7c3-ae63d66457df/volumes"
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.559238 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0d3261a-aea5-4017-afba-76b8775df70e" path="/var/lib/kubelet/pods/e0d3261a-aea5-4017-afba-76b8775df70e/volumes"
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.611199 4821 scope.go:117] "RemoveContainer" containerID="030a114e542bb8a3857290453e3697521411b00702397c220c72a698d67371b6"
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.611626 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5cc864f55c-rpscs"]
Mar 09 18:28:09 crc kubenswrapper[4821]: E0309 18:28:09.612802 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"030a114e542bb8a3857290453e3697521411b00702397c220c72a698d67371b6\": container with ID starting with 030a114e542bb8a3857290453e3697521411b00702397c220c72a698d67371b6 not found: ID does not exist" containerID="030a114e542bb8a3857290453e3697521411b00702397c220c72a698d67371b6"
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.612832 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"030a114e542bb8a3857290453e3697521411b00702397c220c72a698d67371b6"} err="failed to get container status \"030a114e542bb8a3857290453e3697521411b00702397c220c72a698d67371b6\": rpc error: code = NotFound desc = could not find container \"030a114e542bb8a3857290453e3697521411b00702397c220c72a698d67371b6\": container with ID starting with 030a114e542bb8a3857290453e3697521411b00702397c220c72a698d67371b6 not found: ID does not exist"
Mar 09 18:28:09 crc kubenswrapper[4821]: I0309 18:28:09.619926 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5cc864f55c-rpscs"]
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.022671 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pbkl7"
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.067353 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nmzw\" (UniqueName: \"kubernetes.io/projected/faa3533b-267b-44a9-b949-af82368bf7e3-kube-api-access-5nmzw\") pod \"faa3533b-267b-44a9-b949-af82368bf7e3\" (UID: \"faa3533b-267b-44a9-b949-af82368bf7e3\") "
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.067440 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faa3533b-267b-44a9-b949-af82368bf7e3-utilities\") pod \"faa3533b-267b-44a9-b949-af82368bf7e3\" (UID: \"faa3533b-267b-44a9-b949-af82368bf7e3\") "
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.067519 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faa3533b-267b-44a9-b949-af82368bf7e3-catalog-content\") pod \"faa3533b-267b-44a9-b949-af82368bf7e3\" (UID: \"faa3533b-267b-44a9-b949-af82368bf7e3\") "
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.069807 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faa3533b-267b-44a9-b949-af82368bf7e3-utilities" (OuterVolumeSpecName: "utilities") pod "faa3533b-267b-44a9-b949-af82368bf7e3" (UID: "faa3533b-267b-44a9-b949-af82368bf7e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.072879 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faa3533b-267b-44a9-b949-af82368bf7e3-kube-api-access-5nmzw" (OuterVolumeSpecName: "kube-api-access-5nmzw") pod "faa3533b-267b-44a9-b949-af82368bf7e3" (UID: "faa3533b-267b-44a9-b949-af82368bf7e3"). InnerVolumeSpecName "kube-api-access-5nmzw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.169571 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nmzw\" (UniqueName: \"kubernetes.io/projected/faa3533b-267b-44a9-b949-af82368bf7e3-kube-api-access-5nmzw\") on node \"crc\" DevicePath \"\""
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.169624 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faa3533b-267b-44a9-b949-af82368bf7e3-utilities\") on node \"crc\" DevicePath \"\""
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.206358 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faa3533b-267b-44a9-b949-af82368bf7e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "faa3533b-267b-44a9-b949-af82368bf7e3" (UID: "faa3533b-267b-44a9-b949-af82368bf7e3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.270930 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faa3533b-267b-44a9-b949-af82368bf7e3-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.469804 4821 generic.go:334] "Generic (PLEG): container finished" podID="faa3533b-267b-44a9-b949-af82368bf7e3" containerID="32cb1b42900efd4870d9abf391bd8e8d28e1c91c1a91e62712f76444ec838862" exitCode=0
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.469883 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbkl7" event={"ID":"faa3533b-267b-44a9-b949-af82368bf7e3","Type":"ContainerDied","Data":"32cb1b42900efd4870d9abf391bd8e8d28e1c91c1a91e62712f76444ec838862"}
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.469914 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pbkl7"
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.469999 4821 scope.go:117] "RemoveContainer" containerID="32cb1b42900efd4870d9abf391bd8e8d28e1c91c1a91e62712f76444ec838862"
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.469976 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbkl7" event={"ID":"faa3533b-267b-44a9-b949-af82368bf7e3","Type":"ContainerDied","Data":"92d04f7e4b7433e9c23bc6aeb71ddb5cd6eb943b840fec0723fce2c0283e0c2b"}
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.487303 4821 scope.go:117] "RemoveContainer" containerID="5913195ac00cb13d6463da260020bebfca650951cb89570e5e8ceb3ba5b69734"
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.513316 4821 scope.go:117] "RemoveContainer" containerID="89ebcb9ddebe0666020405817a7a48d3f5a67101f0968fbe880fcd4b2d30f01c"
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.515595 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pbkl7"]
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.520853 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pbkl7"]
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.547458 4821 scope.go:117] "RemoveContainer" containerID="32cb1b42900efd4870d9abf391bd8e8d28e1c91c1a91e62712f76444ec838862"
Mar 09 18:28:10 crc kubenswrapper[4821]: E0309 18:28:10.547899 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32cb1b42900efd4870d9abf391bd8e8d28e1c91c1a91e62712f76444ec838862\": container with ID starting with 32cb1b42900efd4870d9abf391bd8e8d28e1c91c1a91e62712f76444ec838862 not found: ID does not exist" containerID="32cb1b42900efd4870d9abf391bd8e8d28e1c91c1a91e62712f76444ec838862"
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.547944 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32cb1b42900efd4870d9abf391bd8e8d28e1c91c1a91e62712f76444ec838862"} err="failed to get container status \"32cb1b42900efd4870d9abf391bd8e8d28e1c91c1a91e62712f76444ec838862\": rpc error: code = NotFound desc = could not find container \"32cb1b42900efd4870d9abf391bd8e8d28e1c91c1a91e62712f76444ec838862\": container with ID starting with 32cb1b42900efd4870d9abf391bd8e8d28e1c91c1a91e62712f76444ec838862 not found: ID does not exist"
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.547976 4821 scope.go:117] "RemoveContainer" containerID="5913195ac00cb13d6463da260020bebfca650951cb89570e5e8ceb3ba5b69734"
Mar 09 18:28:10 crc kubenswrapper[4821]: E0309 18:28:10.548343 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5913195ac00cb13d6463da260020bebfca650951cb89570e5e8ceb3ba5b69734\": container with ID starting with 5913195ac00cb13d6463da260020bebfca650951cb89570e5e8ceb3ba5b69734 not found: ID does not exist" containerID="5913195ac00cb13d6463da260020bebfca650951cb89570e5e8ceb3ba5b69734"
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.548380 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5913195ac00cb13d6463da260020bebfca650951cb89570e5e8ceb3ba5b69734"} err="failed to get container status \"5913195ac00cb13d6463da260020bebfca650951cb89570e5e8ceb3ba5b69734\": rpc error: code = NotFound desc = could not find container \"5913195ac00cb13d6463da260020bebfca650951cb89570e5e8ceb3ba5b69734\": container with ID starting with 5913195ac00cb13d6463da260020bebfca650951cb89570e5e8ceb3ba5b69734 not found: ID does not exist"
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.548404 4821 scope.go:117] "RemoveContainer" containerID="89ebcb9ddebe0666020405817a7a48d3f5a67101f0968fbe880fcd4b2d30f01c"
Mar 09 18:28:10 crc kubenswrapper[4821]: E0309 18:28:10.548696 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89ebcb9ddebe0666020405817a7a48d3f5a67101f0968fbe880fcd4b2d30f01c\": container with ID starting with 89ebcb9ddebe0666020405817a7a48d3f5a67101f0968fbe880fcd4b2d30f01c not found: ID does not exist" containerID="89ebcb9ddebe0666020405817a7a48d3f5a67101f0968fbe880fcd4b2d30f01c"
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.548731 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89ebcb9ddebe0666020405817a7a48d3f5a67101f0968fbe880fcd4b2d30f01c"} err="failed to get container status \"89ebcb9ddebe0666020405817a7a48d3f5a67101f0968fbe880fcd4b2d30f01c\": rpc error: code = NotFound desc = could not find container \"89ebcb9ddebe0666020405817a7a48d3f5a67101f0968fbe880fcd4b2d30f01c\": container with ID starting with 89ebcb9ddebe0666020405817a7a48d3f5a67101f0968fbe880fcd4b2d30f01c not found: ID does not exist"
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.566457 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-68cfd7f947-5qrmt"]
Mar 09 18:28:10 crc kubenswrapper[4821]: E0309 18:28:10.566657 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ff9182f-57eb-4efa-b7c3-ae63d66457df" containerName="extract-content"
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.566672 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff9182f-57eb-4efa-b7c3-ae63d66457df" containerName="extract-content"
Mar 09 18:28:10 crc kubenswrapper[4821]: E0309 18:28:10.566681 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d058c9b7-152c-49b8-9bbb-0681920dd243" containerName="oc"
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.566687 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="d058c9b7-152c-49b8-9bbb-0681920dd243" containerName="oc"
Mar 09 18:28:10 crc kubenswrapper[4821]: E0309 18:28:10.566696 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa3533b-267b-44a9-b949-af82368bf7e3" containerName="extract-content"
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.566703 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa3533b-267b-44a9-b949-af82368bf7e3" containerName="extract-content"
Mar 09 18:28:10 crc kubenswrapper[4821]: E0309 18:28:10.566711 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa3533b-267b-44a9-b949-af82368bf7e3" containerName="registry-server"
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.566717 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa3533b-267b-44a9-b949-af82368bf7e3" containerName="registry-server"
Mar 09 18:28:10 crc kubenswrapper[4821]: E0309 18:28:10.566724 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa3533b-267b-44a9-b949-af82368bf7e3" containerName="extract-utilities"
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.566730 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa3533b-267b-44a9-b949-af82368bf7e3" containerName="extract-utilities"
Mar 09 18:28:10 crc kubenswrapper[4821]: E0309 18:28:10.566739 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ff9182f-57eb-4efa-b7c3-ae63d66457df" containerName="extract-utilities"
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.566745 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff9182f-57eb-4efa-b7c3-ae63d66457df" containerName="extract-utilities"
Mar 09 18:28:10 crc kubenswrapper[4821]: E0309 18:28:10.566756 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75ac241a-633f-43fc-9d47-a05cba7054a1" containerName="controller-manager"
Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.566762 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="75ac241a-633f-43fc-9d47-a05cba7054a1" containerName="controller-manager"
Mar 09 crc
kubenswrapper[4821]: E0309 18:28:10.566773 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ff9182f-57eb-4efa-b7c3-ae63d66457df" containerName="registry-server" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.566778 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff9182f-57eb-4efa-b7c3-ae63d66457df" containerName="registry-server" Mar 09 18:28:10 crc kubenswrapper[4821]: E0309 18:28:10.566788 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0d3261a-aea5-4017-afba-76b8775df70e" containerName="route-controller-manager" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.566794 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0d3261a-aea5-4017-afba-76b8775df70e" containerName="route-controller-manager" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.566870 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa3533b-267b-44a9-b949-af82368bf7e3" containerName="registry-server" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.566880 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ff9182f-57eb-4efa-b7c3-ae63d66457df" containerName="registry-server" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.566893 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0d3261a-aea5-4017-afba-76b8775df70e" containerName="route-controller-manager" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.566902 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="75ac241a-633f-43fc-9d47-a05cba7054a1" containerName="controller-manager" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.566909 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="d058c9b7-152c-49b8-9bbb-0681920dd243" containerName="oc" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.567244 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-68cfd7f947-5qrmt" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.570207 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.570525 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.570811 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.571079 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.576060 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.577630 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.579085 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fcfbb5985-fk4fl"] Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.580416 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fcfbb5985-fk4fl" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.581135 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.582158 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.587301 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68cfd7f947-5qrmt"] Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.590952 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.590979 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.591239 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.591254 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.591616 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.595887 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fcfbb5985-fk4fl"] Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.675578 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5-serving-cert\") pod \"route-controller-manager-6fcfbb5985-fk4fl\" (UID: \"9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5\") " pod="openshift-route-controller-manager/route-controller-manager-6fcfbb5985-fk4fl" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.675801 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e9108db-43ad-4fd6-8f6a-742c86c78953-serving-cert\") pod \"controller-manager-68cfd7f947-5qrmt\" (UID: \"5e9108db-43ad-4fd6-8f6a-742c86c78953\") " pod="openshift-controller-manager/controller-manager-68cfd7f947-5qrmt" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.675888 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76cbp\" (UniqueName: \"kubernetes.io/projected/5e9108db-43ad-4fd6-8f6a-742c86c78953-kube-api-access-76cbp\") pod \"controller-manager-68cfd7f947-5qrmt\" (UID: \"5e9108db-43ad-4fd6-8f6a-742c86c78953\") " pod="openshift-controller-manager/controller-manager-68cfd7f947-5qrmt" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.676019 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e9108db-43ad-4fd6-8f6a-742c86c78953-client-ca\") pod \"controller-manager-68cfd7f947-5qrmt\" (UID: \"5e9108db-43ad-4fd6-8f6a-742c86c78953\") " pod="openshift-controller-manager/controller-manager-68cfd7f947-5qrmt" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.676099 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zfpz\" (UniqueName: \"kubernetes.io/projected/9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5-kube-api-access-2zfpz\") pod \"route-controller-manager-6fcfbb5985-fk4fl\" (UID: \"9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5\") " 
pod="openshift-route-controller-manager/route-controller-manager-6fcfbb5985-fk4fl" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.676170 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e9108db-43ad-4fd6-8f6a-742c86c78953-config\") pod \"controller-manager-68cfd7f947-5qrmt\" (UID: \"5e9108db-43ad-4fd6-8f6a-742c86c78953\") " pod="openshift-controller-manager/controller-manager-68cfd7f947-5qrmt" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.676248 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5-client-ca\") pod \"route-controller-manager-6fcfbb5985-fk4fl\" (UID: \"9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5\") " pod="openshift-route-controller-manager/route-controller-manager-6fcfbb5985-fk4fl" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.676313 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e9108db-43ad-4fd6-8f6a-742c86c78953-proxy-ca-bundles\") pod \"controller-manager-68cfd7f947-5qrmt\" (UID: \"5e9108db-43ad-4fd6-8f6a-742c86c78953\") " pod="openshift-controller-manager/controller-manager-68cfd7f947-5qrmt" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.676828 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5-config\") pod \"route-controller-manager-6fcfbb5985-fk4fl\" (UID: \"9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5\") " pod="openshift-route-controller-manager/route-controller-manager-6fcfbb5985-fk4fl" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.778158 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2zfpz\" (UniqueName: \"kubernetes.io/projected/9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5-kube-api-access-2zfpz\") pod \"route-controller-manager-6fcfbb5985-fk4fl\" (UID: \"9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5\") " pod="openshift-route-controller-manager/route-controller-manager-6fcfbb5985-fk4fl" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.778222 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e9108db-43ad-4fd6-8f6a-742c86c78953-config\") pod \"controller-manager-68cfd7f947-5qrmt\" (UID: \"5e9108db-43ad-4fd6-8f6a-742c86c78953\") " pod="openshift-controller-manager/controller-manager-68cfd7f947-5qrmt" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.778256 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5-client-ca\") pod \"route-controller-manager-6fcfbb5985-fk4fl\" (UID: \"9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5\") " pod="openshift-route-controller-manager/route-controller-manager-6fcfbb5985-fk4fl" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.778280 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e9108db-43ad-4fd6-8f6a-742c86c78953-proxy-ca-bundles\") pod \"controller-manager-68cfd7f947-5qrmt\" (UID: \"5e9108db-43ad-4fd6-8f6a-742c86c78953\") " pod="openshift-controller-manager/controller-manager-68cfd7f947-5qrmt" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.778342 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5-config\") pod \"route-controller-manager-6fcfbb5985-fk4fl\" (UID: \"9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5\") " pod="openshift-route-controller-manager/route-controller-manager-6fcfbb5985-fk4fl" Mar 
09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.778368 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5-serving-cert\") pod \"route-controller-manager-6fcfbb5985-fk4fl\" (UID: \"9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5\") " pod="openshift-route-controller-manager/route-controller-manager-6fcfbb5985-fk4fl" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.778401 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e9108db-43ad-4fd6-8f6a-742c86c78953-serving-cert\") pod \"controller-manager-68cfd7f947-5qrmt\" (UID: \"5e9108db-43ad-4fd6-8f6a-742c86c78953\") " pod="openshift-controller-manager/controller-manager-68cfd7f947-5qrmt" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.778434 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76cbp\" (UniqueName: \"kubernetes.io/projected/5e9108db-43ad-4fd6-8f6a-742c86c78953-kube-api-access-76cbp\") pod \"controller-manager-68cfd7f947-5qrmt\" (UID: \"5e9108db-43ad-4fd6-8f6a-742c86c78953\") " pod="openshift-controller-manager/controller-manager-68cfd7f947-5qrmt" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.778483 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e9108db-43ad-4fd6-8f6a-742c86c78953-client-ca\") pod \"controller-manager-68cfd7f947-5qrmt\" (UID: \"5e9108db-43ad-4fd6-8f6a-742c86c78953\") " pod="openshift-controller-manager/controller-manager-68cfd7f947-5qrmt" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.780051 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5-client-ca\") pod \"route-controller-manager-6fcfbb5985-fk4fl\" (UID: 
\"9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5\") " pod="openshift-route-controller-manager/route-controller-manager-6fcfbb5985-fk4fl" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.780546 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e9108db-43ad-4fd6-8f6a-742c86c78953-proxy-ca-bundles\") pod \"controller-manager-68cfd7f947-5qrmt\" (UID: \"5e9108db-43ad-4fd6-8f6a-742c86c78953\") " pod="openshift-controller-manager/controller-manager-68cfd7f947-5qrmt" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.781589 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e9108db-43ad-4fd6-8f6a-742c86c78953-client-ca\") pod \"controller-manager-68cfd7f947-5qrmt\" (UID: \"5e9108db-43ad-4fd6-8f6a-742c86c78953\") " pod="openshift-controller-manager/controller-manager-68cfd7f947-5qrmt" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.781673 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e9108db-43ad-4fd6-8f6a-742c86c78953-config\") pod \"controller-manager-68cfd7f947-5qrmt\" (UID: \"5e9108db-43ad-4fd6-8f6a-742c86c78953\") " pod="openshift-controller-manager/controller-manager-68cfd7f947-5qrmt" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.783735 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5-config\") pod \"route-controller-manager-6fcfbb5985-fk4fl\" (UID: \"9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5\") " pod="openshift-route-controller-manager/route-controller-manager-6fcfbb5985-fk4fl" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.784747 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5-serving-cert\") pod \"route-controller-manager-6fcfbb5985-fk4fl\" (UID: \"9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5\") " pod="openshift-route-controller-manager/route-controller-manager-6fcfbb5985-fk4fl" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.796028 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e9108db-43ad-4fd6-8f6a-742c86c78953-serving-cert\") pod \"controller-manager-68cfd7f947-5qrmt\" (UID: \"5e9108db-43ad-4fd6-8f6a-742c86c78953\") " pod="openshift-controller-manager/controller-manager-68cfd7f947-5qrmt" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.809694 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zfpz\" (UniqueName: \"kubernetes.io/projected/9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5-kube-api-access-2zfpz\") pod \"route-controller-manager-6fcfbb5985-fk4fl\" (UID: \"9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5\") " pod="openshift-route-controller-manager/route-controller-manager-6fcfbb5985-fk4fl" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.816982 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76cbp\" (UniqueName: \"kubernetes.io/projected/5e9108db-43ad-4fd6-8f6a-742c86c78953-kube-api-access-76cbp\") pod \"controller-manager-68cfd7f947-5qrmt\" (UID: \"5e9108db-43ad-4fd6-8f6a-742c86c78953\") " pod="openshift-controller-manager/controller-manager-68cfd7f947-5qrmt" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.897386 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68cfd7f947-5qrmt" Mar 09 18:28:10 crc kubenswrapper[4821]: I0309 18:28:10.917830 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fcfbb5985-fk4fl" Mar 09 18:28:11 crc kubenswrapper[4821]: I0309 18:28:11.233599 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fcfbb5985-fk4fl"] Mar 09 18:28:11 crc kubenswrapper[4821]: W0309 18:28:11.242180 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9adc8bfa_de1d_4f8f_88bd_4ce739ba24d5.slice/crio-6af5c8d7231989a3d40e15b1eaf94185430ff834899d85fdb119ef70f1f36855 WatchSource:0}: Error finding container 6af5c8d7231989a3d40e15b1eaf94185430ff834899d85fdb119ef70f1f36855: Status 404 returned error can't find the container with id 6af5c8d7231989a3d40e15b1eaf94185430ff834899d85fdb119ef70f1f36855 Mar 09 18:28:11 crc kubenswrapper[4821]: I0309 18:28:11.379686 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68cfd7f947-5qrmt"] Mar 09 18:28:11 crc kubenswrapper[4821]: W0309 18:28:11.384486 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e9108db_43ad_4fd6_8f6a_742c86c78953.slice/crio-27a46c9e74fa173145200cf1f5643cc81dced57e5a2967ba30ac22e24aee5474 WatchSource:0}: Error finding container 27a46c9e74fa173145200cf1f5643cc81dced57e5a2967ba30ac22e24aee5474: Status 404 returned error can't find the container with id 27a46c9e74fa173145200cf1f5643cc81dced57e5a2967ba30ac22e24aee5474 Mar 09 18:28:11 crc kubenswrapper[4821]: I0309 18:28:11.476504 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fcfbb5985-fk4fl" event={"ID":"9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5","Type":"ContainerStarted","Data":"6af5c8d7231989a3d40e15b1eaf94185430ff834899d85fdb119ef70f1f36855"} Mar 09 18:28:11 crc kubenswrapper[4821]: I0309 18:28:11.477174 4821 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68cfd7f947-5qrmt" event={"ID":"5e9108db-43ad-4fd6-8f6a-742c86c78953","Type":"ContainerStarted","Data":"27a46c9e74fa173145200cf1f5643cc81dced57e5a2967ba30ac22e24aee5474"} Mar 09 18:28:11 crc kubenswrapper[4821]: I0309 18:28:11.558949 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75ac241a-633f-43fc-9d47-a05cba7054a1" path="/var/lib/kubelet/pods/75ac241a-633f-43fc-9d47-a05cba7054a1/volumes" Mar 09 18:28:11 crc kubenswrapper[4821]: I0309 18:28:11.560579 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faa3533b-267b-44a9-b949-af82368bf7e3" path="/var/lib/kubelet/pods/faa3533b-267b-44a9-b949-af82368bf7e3/volumes" Mar 09 18:28:12 crc kubenswrapper[4821]: I0309 18:28:12.486229 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fcfbb5985-fk4fl" event={"ID":"9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5","Type":"ContainerStarted","Data":"226137fc778a06277bd011350b9697dcecff5ab056d07bc92ae1986bbbf29f73"} Mar 09 18:28:12 crc kubenswrapper[4821]: I0309 18:28:12.486354 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6fcfbb5985-fk4fl" Mar 09 18:28:12 crc kubenswrapper[4821]: I0309 18:28:12.488574 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68cfd7f947-5qrmt" event={"ID":"5e9108db-43ad-4fd6-8f6a-742c86c78953","Type":"ContainerStarted","Data":"48180fe13067d820732f9ffb17e1f75a13e05df5208f3c2ac38f02c2365e8e81"} Mar 09 18:28:12 crc kubenswrapper[4821]: I0309 18:28:12.488929 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-68cfd7f947-5qrmt" Mar 09 18:28:12 crc kubenswrapper[4821]: I0309 18:28:12.497549 4821 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-68cfd7f947-5qrmt" Mar 09 18:28:12 crc kubenswrapper[4821]: I0309 18:28:12.498933 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6fcfbb5985-fk4fl" Mar 09 18:28:12 crc kubenswrapper[4821]: I0309 18:28:12.517125 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6fcfbb5985-fk4fl" podStartSLOduration=4.517094895 podStartE2EDuration="4.517094895s" podCreationTimestamp="2026-03-09 18:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:28:12.513891794 +0000 UTC m=+229.675267680" watchObservedRunningTime="2026-03-09 18:28:12.517094895 +0000 UTC m=+229.678470791" Mar 09 18:28:12 crc kubenswrapper[4821]: I0309 18:28:12.574439 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-68cfd7f947-5qrmt" podStartSLOduration=4.574419939 podStartE2EDuration="4.574419939s" podCreationTimestamp="2026-03-09 18:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:28:12.572838139 +0000 UTC m=+229.734214005" watchObservedRunningTime="2026-03-09 18:28:12.574419939 +0000 UTC m=+229.735795805" Mar 09 18:28:13 crc kubenswrapper[4821]: I0309 18:28:13.168657 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nk4bg" Mar 09 18:28:13 crc kubenswrapper[4821]: I0309 18:28:13.374590 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sn8zk" Mar 09 18:28:13 crc kubenswrapper[4821]: I0309 18:28:13.431460 4821 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sn8zk" Mar 09 18:28:19 crc kubenswrapper[4821]: I0309 18:28:19.414990 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" podUID="6d35d28f-2377-46c5-95aa-ea3bf280a60e" containerName="oauth-openshift" containerID="cri-o://291e4f121113c06d808f6b70538aeee8dace0be9fcb3d3239d63d17c6de044bd" gracePeriod=15 Mar 09 18:28:19 crc kubenswrapper[4821]: I0309 18:28:19.543829 4821 generic.go:334] "Generic (PLEG): container finished" podID="6d35d28f-2377-46c5-95aa-ea3bf280a60e" containerID="291e4f121113c06d808f6b70538aeee8dace0be9fcb3d3239d63d17c6de044bd" exitCode=0 Mar 09 18:28:19 crc kubenswrapper[4821]: I0309 18:28:19.543953 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" event={"ID":"6d35d28f-2377-46c5-95aa-ea3bf280a60e","Type":"ContainerDied","Data":"291e4f121113c06d808f6b70538aeee8dace0be9fcb3d3239d63d17c6de044bd"} Mar 09 18:28:19 crc kubenswrapper[4821]: I0309 18:28:19.905135 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.015342 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d35d28f-2377-46c5-95aa-ea3bf280a60e-audit-dir\") pod \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.015407 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-user-template-login\") pod \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.015428 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-session\") pod \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.015460 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-user-template-provider-selection\") pod \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.015482 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-user-idp-0-file-data\") pod \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " Mar 09 
18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.015509 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-trusted-ca-bundle\") pod \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.015532 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-user-template-error\") pod \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.015552 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf9bk\" (UniqueName: \"kubernetes.io/projected/6d35d28f-2377-46c5-95aa-ea3bf280a60e-kube-api-access-bf9bk\") pod \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.015574 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-cliconfig\") pod \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.015592 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-router-certs\") pod \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.015616 4821 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-ocp-branding-template\") pod \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.015637 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-service-ca\") pod \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.015654 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-serving-cert\") pod \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.015668 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6d35d28f-2377-46c5-95aa-ea3bf280a60e-audit-policies\") pod \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\" (UID: \"6d35d28f-2377-46c5-95aa-ea3bf280a60e\") " Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.016342 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d35d28f-2377-46c5-95aa-ea3bf280a60e-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6d35d28f-2377-46c5-95aa-ea3bf280a60e" (UID: "6d35d28f-2377-46c5-95aa-ea3bf280a60e"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.016563 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d35d28f-2377-46c5-95aa-ea3bf280a60e-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "6d35d28f-2377-46c5-95aa-ea3bf280a60e" (UID: "6d35d28f-2377-46c5-95aa-ea3bf280a60e"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.016691 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6d35d28f-2377-46c5-95aa-ea3bf280a60e" (UID: "6d35d28f-2377-46c5-95aa-ea3bf280a60e"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.017257 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6d35d28f-2377-46c5-95aa-ea3bf280a60e" (UID: "6d35d28f-2377-46c5-95aa-ea3bf280a60e"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.017922 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6d35d28f-2377-46c5-95aa-ea3bf280a60e" (UID: "6d35d28f-2377-46c5-95aa-ea3bf280a60e"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.022966 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6d35d28f-2377-46c5-95aa-ea3bf280a60e" (UID: "6d35d28f-2377-46c5-95aa-ea3bf280a60e"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.023406 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d35d28f-2377-46c5-95aa-ea3bf280a60e-kube-api-access-bf9bk" (OuterVolumeSpecName: "kube-api-access-bf9bk") pod "6d35d28f-2377-46c5-95aa-ea3bf280a60e" (UID: "6d35d28f-2377-46c5-95aa-ea3bf280a60e"). InnerVolumeSpecName "kube-api-access-bf9bk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.029568 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6d35d28f-2377-46c5-95aa-ea3bf280a60e" (UID: "6d35d28f-2377-46c5-95aa-ea3bf280a60e"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.029743 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6d35d28f-2377-46c5-95aa-ea3bf280a60e" (UID: "6d35d28f-2377-46c5-95aa-ea3bf280a60e"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.030062 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6d35d28f-2377-46c5-95aa-ea3bf280a60e" (UID: "6d35d28f-2377-46c5-95aa-ea3bf280a60e"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.030283 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6d35d28f-2377-46c5-95aa-ea3bf280a60e" (UID: "6d35d28f-2377-46c5-95aa-ea3bf280a60e"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.030350 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6d35d28f-2377-46c5-95aa-ea3bf280a60e" (UID: "6d35d28f-2377-46c5-95aa-ea3bf280a60e"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.030619 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6d35d28f-2377-46c5-95aa-ea3bf280a60e" (UID: "6d35d28f-2377-46c5-95aa-ea3bf280a60e"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.030639 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6d35d28f-2377-46c5-95aa-ea3bf280a60e" (UID: "6d35d28f-2377-46c5-95aa-ea3bf280a60e"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.117444 4821 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d35d28f-2377-46c5-95aa-ea3bf280a60e-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.117500 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.117521 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.117542 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.117561 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-user-idp-0-file-data\") on node \"crc\" 
DevicePath \"\"" Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.117579 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.117597 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf9bk\" (UniqueName: \"kubernetes.io/projected/6d35d28f-2377-46c5-95aa-ea3bf280a60e-kube-api-access-bf9bk\") on node \"crc\" DevicePath \"\"" Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.117616 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.117634 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.117653 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.117671 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.117689 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.117709 4821 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d35d28f-2377-46c5-95aa-ea3bf280a60e-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.117726 4821 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6d35d28f-2377-46c5-95aa-ea3bf280a60e-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.554471 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" event={"ID":"6d35d28f-2377-46c5-95aa-ea3bf280a60e","Type":"ContainerDied","Data":"cc0b86a17d9eb4cbec1612a3f2d68cfbd5e69830624f5f2e5e367bd46d6a2722"} Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.554979 4821 scope.go:117] "RemoveContainer" containerID="291e4f121113c06d808f6b70538aeee8dace0be9fcb3d3239d63d17c6de044bd" Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.554511 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tnl4x" Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.594392 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tnl4x"] Mar 09 18:28:20 crc kubenswrapper[4821]: I0309 18:28:20.598037 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tnl4x"] Mar 09 18:28:21 crc kubenswrapper[4821]: I0309 18:28:21.561629 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d35d28f-2377-46c5-95aa-ea3bf280a60e" path="/var/lib/kubelet/pods/6d35d28f-2377-46c5-95aa-ea3bf280a60e/volumes" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.564977 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-68945756f9-mpsgc"] Mar 09 18:28:23 crc kubenswrapper[4821]: E0309 18:28:23.565511 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d35d28f-2377-46c5-95aa-ea3bf280a60e" containerName="oauth-openshift" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.565523 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d35d28f-2377-46c5-95aa-ea3bf280a60e" containerName="oauth-openshift" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.565611 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d35d28f-2377-46c5-95aa-ea3bf280a60e" containerName="oauth-openshift" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.565922 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.570680 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.570708 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.570852 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.570712 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.570861 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.571051 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.571129 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.571388 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.571454 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.571486 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 09 18:28:23 crc 
kubenswrapper[4821]: I0309 18:28:23.571503 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.571584 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.582265 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.585377 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-68945756f9-mpsgc"] Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.591647 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.596884 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.673925 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-system-session\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.673977 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " 
pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.674003 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.674077 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-user-template-login\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.674202 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.674249 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-user-template-error\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.674272 
4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-system-service-ca\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.674301 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d3ee9323-1645-46d9-a4e7-e721976401e0-audit-policies\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.674357 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.674386 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.674411 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.674442 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-system-router-certs\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.674467 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v97z9\" (UniqueName: \"kubernetes.io/projected/d3ee9323-1645-46d9-a4e7-e721976401e0-kube-api-access-v97z9\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.674498 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d3ee9323-1645-46d9-a4e7-e721976401e0-audit-dir\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.775557 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " 
pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.775628 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.775651 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-user-template-login\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.775680 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.775713 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-user-template-error\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.775736 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-system-service-ca\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.775767 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d3ee9323-1645-46d9-a4e7-e721976401e0-audit-policies\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.775805 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.775834 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.775855 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: 
\"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.775888 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-system-router-certs\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.775910 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v97z9\" (UniqueName: \"kubernetes.io/projected/d3ee9323-1645-46d9-a4e7-e721976401e0-kube-api-access-v97z9\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.775937 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d3ee9323-1645-46d9-a4e7-e721976401e0-audit-dir\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.775981 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-system-session\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.776748 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/d3ee9323-1645-46d9-a4e7-e721976401e0-audit-dir\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.777488 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-system-service-ca\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.777670 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.777691 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d3ee9323-1645-46d9-a4e7-e721976401e0-audit-policies\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.778090 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 
18:28:23.780939 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.780946 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.781682 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-system-router-certs\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.782036 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-user-template-error\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.782310 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-system-session\") pod 
\"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.784048 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.790906 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-user-template-login\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.791949 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d3ee9323-1645-46d9-a4e7-e721976401e0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.796177 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v97z9\" (UniqueName: \"kubernetes.io/projected/d3ee9323-1645-46d9-a4e7-e721976401e0-kube-api-access-v97z9\") pod \"oauth-openshift-68945756f9-mpsgc\" (UID: \"d3ee9323-1645-46d9-a4e7-e721976401e0\") " pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:23 crc kubenswrapper[4821]: I0309 18:28:23.890220 4821 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:24 crc kubenswrapper[4821]: I0309 18:28:24.290138 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-68945756f9-mpsgc"] Mar 09 18:28:24 crc kubenswrapper[4821]: I0309 18:28:24.581440 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" event={"ID":"d3ee9323-1645-46d9-a4e7-e721976401e0","Type":"ContainerStarted","Data":"5f11e7db675bfd9d041fa1ac402ba7897d7bde584e6127a163e52ff8f9dc0243"} Mar 09 18:28:25 crc kubenswrapper[4821]: I0309 18:28:25.590140 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" event={"ID":"d3ee9323-1645-46d9-a4e7-e721976401e0","Type":"ContainerStarted","Data":"b990348effb10718e01709ee4433eec11ae246794ea2d1faa08422a680256a14"} Mar 09 18:28:25 crc kubenswrapper[4821]: I0309 18:28:25.590704 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:25 crc kubenswrapper[4821]: I0309 18:28:25.598456 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" Mar 09 18:28:25 crc kubenswrapper[4821]: I0309 18:28:25.614916 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-68945756f9-mpsgc" podStartSLOduration=31.614884596 podStartE2EDuration="31.614884596s" podCreationTimestamp="2026-03-09 18:27:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:28:25.611246004 +0000 UTC m=+242.772621860" watchObservedRunningTime="2026-03-09 18:28:25.614884596 +0000 UTC m=+242.776260492" Mar 09 18:28:28 crc kubenswrapper[4821]: I0309 18:28:28.498811 
4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-68cfd7f947-5qrmt"] Mar 09 18:28:28 crc kubenswrapper[4821]: I0309 18:28:28.499138 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-68cfd7f947-5qrmt" podUID="5e9108db-43ad-4fd6-8f6a-742c86c78953" containerName="controller-manager" containerID="cri-o://48180fe13067d820732f9ffb17e1f75a13e05df5208f3c2ac38f02c2365e8e81" gracePeriod=30 Mar 09 18:28:28 crc kubenswrapper[4821]: I0309 18:28:28.589072 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fcfbb5985-fk4fl"] Mar 09 18:28:28 crc kubenswrapper[4821]: I0309 18:28:28.589418 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6fcfbb5985-fk4fl" podUID="9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5" containerName="route-controller-manager" containerID="cri-o://226137fc778a06277bd011350b9697dcecff5ab056d07bc92ae1986bbbf29f73" gracePeriod=30 Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.105580 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fcfbb5985-fk4fl" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.109588 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-68cfd7f947-5qrmt" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.251981 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5-serving-cert\") pod \"9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5\" (UID: \"9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5\") " Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.253114 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5-config\") pod \"9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5\" (UID: \"9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5\") " Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.253183 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e9108db-43ad-4fd6-8f6a-742c86c78953-serving-cert\") pod \"5e9108db-43ad-4fd6-8f6a-742c86c78953\" (UID: \"5e9108db-43ad-4fd6-8f6a-742c86c78953\") " Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.253206 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e9108db-43ad-4fd6-8f6a-742c86c78953-client-ca\") pod \"5e9108db-43ad-4fd6-8f6a-742c86c78953\" (UID: \"5e9108db-43ad-4fd6-8f6a-742c86c78953\") " Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.253234 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76cbp\" (UniqueName: \"kubernetes.io/projected/5e9108db-43ad-4fd6-8f6a-742c86c78953-kube-api-access-76cbp\") pod \"5e9108db-43ad-4fd6-8f6a-742c86c78953\" (UID: \"5e9108db-43ad-4fd6-8f6a-742c86c78953\") " Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.253267 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/5e9108db-43ad-4fd6-8f6a-742c86c78953-config\") pod \"5e9108db-43ad-4fd6-8f6a-742c86c78953\" (UID: \"5e9108db-43ad-4fd6-8f6a-742c86c78953\") " Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.253293 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5-client-ca\") pod \"9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5\" (UID: \"9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5\") " Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.253359 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zfpz\" (UniqueName: \"kubernetes.io/projected/9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5-kube-api-access-2zfpz\") pod \"9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5\" (UID: \"9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5\") " Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.253436 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e9108db-43ad-4fd6-8f6a-742c86c78953-proxy-ca-bundles\") pod \"5e9108db-43ad-4fd6-8f6a-742c86c78953\" (UID: \"5e9108db-43ad-4fd6-8f6a-742c86c78953\") " Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.254046 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5-client-ca" (OuterVolumeSpecName: "client-ca") pod "9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5" (UID: "9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.254065 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5-config" (OuterVolumeSpecName: "config") pod "9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5" (UID: "9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.254217 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e9108db-43ad-4fd6-8f6a-742c86c78953-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5e9108db-43ad-4fd6-8f6a-742c86c78953" (UID: "5e9108db-43ad-4fd6-8f6a-742c86c78953"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.254207 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e9108db-43ad-4fd6-8f6a-742c86c78953-client-ca" (OuterVolumeSpecName: "client-ca") pod "5e9108db-43ad-4fd6-8f6a-742c86c78953" (UID: "5e9108db-43ad-4fd6-8f6a-742c86c78953"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.254307 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e9108db-43ad-4fd6-8f6a-742c86c78953-config" (OuterVolumeSpecName: "config") pod "5e9108db-43ad-4fd6-8f6a-742c86c78953" (UID: "5e9108db-43ad-4fd6-8f6a-742c86c78953"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.258065 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5" (UID: "9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.258073 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5-kube-api-access-2zfpz" (OuterVolumeSpecName: "kube-api-access-2zfpz") pod "9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5" (UID: "9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5"). InnerVolumeSpecName "kube-api-access-2zfpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.258125 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e9108db-43ad-4fd6-8f6a-742c86c78953-kube-api-access-76cbp" (OuterVolumeSpecName: "kube-api-access-76cbp") pod "5e9108db-43ad-4fd6-8f6a-742c86c78953" (UID: "5e9108db-43ad-4fd6-8f6a-742c86c78953"). InnerVolumeSpecName "kube-api-access-76cbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.259435 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e9108db-43ad-4fd6-8f6a-742c86c78953-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5e9108db-43ad-4fd6-8f6a-742c86c78953" (UID: "5e9108db-43ad-4fd6-8f6a-742c86c78953"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.354718 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zfpz\" (UniqueName: \"kubernetes.io/projected/9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5-kube-api-access-2zfpz\") on node \"crc\" DevicePath \"\"" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.354758 4821 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e9108db-43ad-4fd6-8f6a-742c86c78953-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.354768 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.354777 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.354788 4821 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e9108db-43ad-4fd6-8f6a-742c86c78953-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.354795 4821 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e9108db-43ad-4fd6-8f6a-742c86c78953-client-ca\") on node \"crc\" DevicePath \"\"" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.354806 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76cbp\" (UniqueName: \"kubernetes.io/projected/5e9108db-43ad-4fd6-8f6a-742c86c78953-kube-api-access-76cbp\") on node \"crc\" DevicePath \"\"" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.354814 4821 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e9108db-43ad-4fd6-8f6a-742c86c78953-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.354821 4821 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5-client-ca\") on node \"crc\" DevicePath \"\"" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.573328 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-548b5494f8-8sdc6"] Mar 09 18:28:29 crc kubenswrapper[4821]: E0309 18:28:29.573552 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5" containerName="route-controller-manager" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.573564 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5" containerName="route-controller-manager" Mar 09 18:28:29 crc kubenswrapper[4821]: E0309 18:28:29.573578 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e9108db-43ad-4fd6-8f6a-742c86c78953" containerName="controller-manager" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.573585 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e9108db-43ad-4fd6-8f6a-742c86c78953" containerName="controller-manager" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.573706 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e9108db-43ad-4fd6-8f6a-742c86c78953" containerName="controller-manager" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.573714 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5" containerName="route-controller-manager" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.574063 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-548b5494f8-8sdc6" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.583189 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-548b5494f8-8sdc6"] Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.623005 4821 generic.go:334] "Generic (PLEG): container finished" podID="5e9108db-43ad-4fd6-8f6a-742c86c78953" containerID="48180fe13067d820732f9ffb17e1f75a13e05df5208f3c2ac38f02c2365e8e81" exitCode=0 Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.623413 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68cfd7f947-5qrmt" event={"ID":"5e9108db-43ad-4fd6-8f6a-742c86c78953","Type":"ContainerDied","Data":"48180fe13067d820732f9ffb17e1f75a13e05df5208f3c2ac38f02c2365e8e81"} Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.623446 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68cfd7f947-5qrmt" event={"ID":"5e9108db-43ad-4fd6-8f6a-742c86c78953","Type":"ContainerDied","Data":"27a46c9e74fa173145200cf1f5643cc81dced57e5a2967ba30ac22e24aee5474"} Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.623466 4821 scope.go:117] "RemoveContainer" containerID="48180fe13067d820732f9ffb17e1f75a13e05df5208f3c2ac38f02c2365e8e81" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.623624 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-68cfd7f947-5qrmt" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.628477 4821 generic.go:334] "Generic (PLEG): container finished" podID="9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5" containerID="226137fc778a06277bd011350b9697dcecff5ab056d07bc92ae1986bbbf29f73" exitCode=0 Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.628522 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fcfbb5985-fk4fl" event={"ID":"9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5","Type":"ContainerDied","Data":"226137fc778a06277bd011350b9697dcecff5ab056d07bc92ae1986bbbf29f73"} Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.628577 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fcfbb5985-fk4fl" event={"ID":"9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5","Type":"ContainerDied","Data":"6af5c8d7231989a3d40e15b1eaf94185430ff834899d85fdb119ef70f1f36855"} Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.628600 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fcfbb5985-fk4fl" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.647739 4821 scope.go:117] "RemoveContainer" containerID="48180fe13067d820732f9ffb17e1f75a13e05df5208f3c2ac38f02c2365e8e81" Mar 09 18:28:29 crc kubenswrapper[4821]: E0309 18:28:29.648181 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48180fe13067d820732f9ffb17e1f75a13e05df5208f3c2ac38f02c2365e8e81\": container with ID starting with 48180fe13067d820732f9ffb17e1f75a13e05df5208f3c2ac38f02c2365e8e81 not found: ID does not exist" containerID="48180fe13067d820732f9ffb17e1f75a13e05df5208f3c2ac38f02c2365e8e81" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.648363 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48180fe13067d820732f9ffb17e1f75a13e05df5208f3c2ac38f02c2365e8e81"} err="failed to get container status \"48180fe13067d820732f9ffb17e1f75a13e05df5208f3c2ac38f02c2365e8e81\": rpc error: code = NotFound desc = could not find container \"48180fe13067d820732f9ffb17e1f75a13e05df5208f3c2ac38f02c2365e8e81\": container with ID starting with 48180fe13067d820732f9ffb17e1f75a13e05df5208f3c2ac38f02c2365e8e81 not found: ID does not exist" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.648471 4821 scope.go:117] "RemoveContainer" containerID="226137fc778a06277bd011350b9697dcecff5ab056d07bc92ae1986bbbf29f73" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.659023 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/605e5144-5967-4913-a269-a024956c1911-serving-cert\") pod \"controller-manager-548b5494f8-8sdc6\" (UID: \"605e5144-5967-4913-a269-a024956c1911\") " pod="openshift-controller-manager/controller-manager-548b5494f8-8sdc6" Mar 09 18:28:29 crc kubenswrapper[4821]: 
I0309 18:28:29.659471 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/605e5144-5967-4913-a269-a024956c1911-proxy-ca-bundles\") pod \"controller-manager-548b5494f8-8sdc6\" (UID: \"605e5144-5967-4913-a269-a024956c1911\") " pod="openshift-controller-manager/controller-manager-548b5494f8-8sdc6" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.659613 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/605e5144-5967-4913-a269-a024956c1911-client-ca\") pod \"controller-manager-548b5494f8-8sdc6\" (UID: \"605e5144-5967-4913-a269-a024956c1911\") " pod="openshift-controller-manager/controller-manager-548b5494f8-8sdc6" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.659730 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/605e5144-5967-4913-a269-a024956c1911-config\") pod \"controller-manager-548b5494f8-8sdc6\" (UID: \"605e5144-5967-4913-a269-a024956c1911\") " pod="openshift-controller-manager/controller-manager-548b5494f8-8sdc6" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.659845 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjfvq\" (UniqueName: \"kubernetes.io/projected/605e5144-5967-4913-a269-a024956c1911-kube-api-access-bjfvq\") pod \"controller-manager-548b5494f8-8sdc6\" (UID: \"605e5144-5967-4913-a269-a024956c1911\") " pod="openshift-controller-manager/controller-manager-548b5494f8-8sdc6" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.659926 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fcfbb5985-fk4fl"] Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.663544 4821 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fcfbb5985-fk4fl"] Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.675728 4821 scope.go:117] "RemoveContainer" containerID="226137fc778a06277bd011350b9697dcecff5ab056d07bc92ae1986bbbf29f73" Mar 09 18:28:29 crc kubenswrapper[4821]: E0309 18:28:29.676766 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"226137fc778a06277bd011350b9697dcecff5ab056d07bc92ae1986bbbf29f73\": container with ID starting with 226137fc778a06277bd011350b9697dcecff5ab056d07bc92ae1986bbbf29f73 not found: ID does not exist" containerID="226137fc778a06277bd011350b9697dcecff5ab056d07bc92ae1986bbbf29f73" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.676816 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"226137fc778a06277bd011350b9697dcecff5ab056d07bc92ae1986bbbf29f73"} err="failed to get container status \"226137fc778a06277bd011350b9697dcecff5ab056d07bc92ae1986bbbf29f73\": rpc error: code = NotFound desc = could not find container \"226137fc778a06277bd011350b9697dcecff5ab056d07bc92ae1986bbbf29f73\": container with ID starting with 226137fc778a06277bd011350b9697dcecff5ab056d07bc92ae1986bbbf29f73 not found: ID does not exist" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.681372 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-68cfd7f947-5qrmt"] Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.687877 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-68cfd7f947-5qrmt"] Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.761400 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/605e5144-5967-4913-a269-a024956c1911-serving-cert\") pod \"controller-manager-548b5494f8-8sdc6\" (UID: \"605e5144-5967-4913-a269-a024956c1911\") " pod="openshift-controller-manager/controller-manager-548b5494f8-8sdc6" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.761459 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/605e5144-5967-4913-a269-a024956c1911-proxy-ca-bundles\") pod \"controller-manager-548b5494f8-8sdc6\" (UID: \"605e5144-5967-4913-a269-a024956c1911\") " pod="openshift-controller-manager/controller-manager-548b5494f8-8sdc6" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.761485 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/605e5144-5967-4913-a269-a024956c1911-client-ca\") pod \"controller-manager-548b5494f8-8sdc6\" (UID: \"605e5144-5967-4913-a269-a024956c1911\") " pod="openshift-controller-manager/controller-manager-548b5494f8-8sdc6" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.761518 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/605e5144-5967-4913-a269-a024956c1911-config\") pod \"controller-manager-548b5494f8-8sdc6\" (UID: \"605e5144-5967-4913-a269-a024956c1911\") " pod="openshift-controller-manager/controller-manager-548b5494f8-8sdc6" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.761558 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjfvq\" (UniqueName: \"kubernetes.io/projected/605e5144-5967-4913-a269-a024956c1911-kube-api-access-bjfvq\") pod \"controller-manager-548b5494f8-8sdc6\" (UID: \"605e5144-5967-4913-a269-a024956c1911\") " pod="openshift-controller-manager/controller-manager-548b5494f8-8sdc6" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.762628 4821 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/605e5144-5967-4913-a269-a024956c1911-client-ca\") pod \"controller-manager-548b5494f8-8sdc6\" (UID: \"605e5144-5967-4913-a269-a024956c1911\") " pod="openshift-controller-manager/controller-manager-548b5494f8-8sdc6" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.762886 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/605e5144-5967-4913-a269-a024956c1911-config\") pod \"controller-manager-548b5494f8-8sdc6\" (UID: \"605e5144-5967-4913-a269-a024956c1911\") " pod="openshift-controller-manager/controller-manager-548b5494f8-8sdc6" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.763033 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/605e5144-5967-4913-a269-a024956c1911-proxy-ca-bundles\") pod \"controller-manager-548b5494f8-8sdc6\" (UID: \"605e5144-5967-4913-a269-a024956c1911\") " pod="openshift-controller-manager/controller-manager-548b5494f8-8sdc6" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.767696 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/605e5144-5967-4913-a269-a024956c1911-serving-cert\") pod \"controller-manager-548b5494f8-8sdc6\" (UID: \"605e5144-5967-4913-a269-a024956c1911\") " pod="openshift-controller-manager/controller-manager-548b5494f8-8sdc6" Mar 09 18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.780766 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjfvq\" (UniqueName: \"kubernetes.io/projected/605e5144-5967-4913-a269-a024956c1911-kube-api-access-bjfvq\") pod \"controller-manager-548b5494f8-8sdc6\" (UID: \"605e5144-5967-4913-a269-a024956c1911\") " pod="openshift-controller-manager/controller-manager-548b5494f8-8sdc6" Mar 09 
18:28:29 crc kubenswrapper[4821]: I0309 18:28:29.900661 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-548b5494f8-8sdc6" Mar 09 18:28:30 crc kubenswrapper[4821]: I0309 18:28:30.368001 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-548b5494f8-8sdc6"] Mar 09 18:28:30 crc kubenswrapper[4821]: W0309 18:28:30.375702 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod605e5144_5967_4913_a269_a024956c1911.slice/crio-488dff73a92a9d5ab32fe9cb4bc0f4ca4e00509d9e48c0eab4d8688c53727fff WatchSource:0}: Error finding container 488dff73a92a9d5ab32fe9cb4bc0f4ca4e00509d9e48c0eab4d8688c53727fff: Status 404 returned error can't find the container with id 488dff73a92a9d5ab32fe9cb4bc0f4ca4e00509d9e48c0eab4d8688c53727fff Mar 09 18:28:30 crc kubenswrapper[4821]: I0309 18:28:30.570634 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-777997c889-676zh"] Mar 09 18:28:30 crc kubenswrapper[4821]: I0309 18:28:30.571408 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-777997c889-676zh" Mar 09 18:28:30 crc kubenswrapper[4821]: I0309 18:28:30.576752 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 09 18:28:30 crc kubenswrapper[4821]: I0309 18:28:30.576988 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 09 18:28:30 crc kubenswrapper[4821]: I0309 18:28:30.577339 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 09 18:28:30 crc kubenswrapper[4821]: I0309 18:28:30.577441 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 09 18:28:30 crc kubenswrapper[4821]: I0309 18:28:30.577545 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 09 18:28:30 crc kubenswrapper[4821]: I0309 18:28:30.577641 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 09 18:28:30 crc kubenswrapper[4821]: I0309 18:28:30.592716 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-777997c889-676zh"] Mar 09 18:28:30 crc kubenswrapper[4821]: I0309 18:28:30.637780 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-548b5494f8-8sdc6" event={"ID":"605e5144-5967-4913-a269-a024956c1911","Type":"ContainerStarted","Data":"29d6b5111e825e82e39f9da9184b78ae0333249b11443e702a5d378edc29cb87"} Mar 09 18:28:30 crc kubenswrapper[4821]: I0309 18:28:30.637821 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-548b5494f8-8sdc6" 
event={"ID":"605e5144-5967-4913-a269-a024956c1911","Type":"ContainerStarted","Data":"488dff73a92a9d5ab32fe9cb4bc0f4ca4e00509d9e48c0eab4d8688c53727fff"} Mar 09 18:28:30 crc kubenswrapper[4821]: I0309 18:28:30.638065 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-548b5494f8-8sdc6" Mar 09 18:28:30 crc kubenswrapper[4821]: I0309 18:28:30.655221 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-548b5494f8-8sdc6" Mar 09 18:28:30 crc kubenswrapper[4821]: I0309 18:28:30.672852 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc8e8bc1-27d6-41ee-a3ad-5f54da201888-client-ca\") pod \"route-controller-manager-777997c889-676zh\" (UID: \"cc8e8bc1-27d6-41ee-a3ad-5f54da201888\") " pod="openshift-route-controller-manager/route-controller-manager-777997c889-676zh" Mar 09 18:28:30 crc kubenswrapper[4821]: I0309 18:28:30.672909 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbjdb\" (UniqueName: \"kubernetes.io/projected/cc8e8bc1-27d6-41ee-a3ad-5f54da201888-kube-api-access-vbjdb\") pod \"route-controller-manager-777997c889-676zh\" (UID: \"cc8e8bc1-27d6-41ee-a3ad-5f54da201888\") " pod="openshift-route-controller-manager/route-controller-manager-777997c889-676zh" Mar 09 18:28:30 crc kubenswrapper[4821]: I0309 18:28:30.672957 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc8e8bc1-27d6-41ee-a3ad-5f54da201888-serving-cert\") pod \"route-controller-manager-777997c889-676zh\" (UID: \"cc8e8bc1-27d6-41ee-a3ad-5f54da201888\") " pod="openshift-route-controller-manager/route-controller-manager-777997c889-676zh" Mar 09 18:28:30 crc kubenswrapper[4821]: I0309 18:28:30.673086 
4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc8e8bc1-27d6-41ee-a3ad-5f54da201888-config\") pod \"route-controller-manager-777997c889-676zh\" (UID: \"cc8e8bc1-27d6-41ee-a3ad-5f54da201888\") " pod="openshift-route-controller-manager/route-controller-manager-777997c889-676zh" Mar 09 18:28:30 crc kubenswrapper[4821]: I0309 18:28:30.685335 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-548b5494f8-8sdc6" podStartSLOduration=2.6852957330000002 podStartE2EDuration="2.685295733s" podCreationTimestamp="2026-03-09 18:28:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:28:30.660486174 +0000 UTC m=+247.821862050" watchObservedRunningTime="2026-03-09 18:28:30.685295733 +0000 UTC m=+247.846671599" Mar 09 18:28:30 crc kubenswrapper[4821]: I0309 18:28:30.774444 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbjdb\" (UniqueName: \"kubernetes.io/projected/cc8e8bc1-27d6-41ee-a3ad-5f54da201888-kube-api-access-vbjdb\") pod \"route-controller-manager-777997c889-676zh\" (UID: \"cc8e8bc1-27d6-41ee-a3ad-5f54da201888\") " pod="openshift-route-controller-manager/route-controller-manager-777997c889-676zh" Mar 09 18:28:30 crc kubenswrapper[4821]: I0309 18:28:30.774497 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc8e8bc1-27d6-41ee-a3ad-5f54da201888-serving-cert\") pod \"route-controller-manager-777997c889-676zh\" (UID: \"cc8e8bc1-27d6-41ee-a3ad-5f54da201888\") " pod="openshift-route-controller-manager/route-controller-manager-777997c889-676zh" Mar 09 18:28:30 crc kubenswrapper[4821]: I0309 18:28:30.774557 4821 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc8e8bc1-27d6-41ee-a3ad-5f54da201888-config\") pod \"route-controller-manager-777997c889-676zh\" (UID: \"cc8e8bc1-27d6-41ee-a3ad-5f54da201888\") " pod="openshift-route-controller-manager/route-controller-manager-777997c889-676zh" Mar 09 18:28:30 crc kubenswrapper[4821]: I0309 18:28:30.774610 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc8e8bc1-27d6-41ee-a3ad-5f54da201888-client-ca\") pod \"route-controller-manager-777997c889-676zh\" (UID: \"cc8e8bc1-27d6-41ee-a3ad-5f54da201888\") " pod="openshift-route-controller-manager/route-controller-manager-777997c889-676zh" Mar 09 18:28:30 crc kubenswrapper[4821]: I0309 18:28:30.775784 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc8e8bc1-27d6-41ee-a3ad-5f54da201888-client-ca\") pod \"route-controller-manager-777997c889-676zh\" (UID: \"cc8e8bc1-27d6-41ee-a3ad-5f54da201888\") " pod="openshift-route-controller-manager/route-controller-manager-777997c889-676zh" Mar 09 18:28:30 crc kubenswrapper[4821]: I0309 18:28:30.777759 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc8e8bc1-27d6-41ee-a3ad-5f54da201888-config\") pod \"route-controller-manager-777997c889-676zh\" (UID: \"cc8e8bc1-27d6-41ee-a3ad-5f54da201888\") " pod="openshift-route-controller-manager/route-controller-manager-777997c889-676zh" Mar 09 18:28:30 crc kubenswrapper[4821]: I0309 18:28:30.782136 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc8e8bc1-27d6-41ee-a3ad-5f54da201888-serving-cert\") pod \"route-controller-manager-777997c889-676zh\" (UID: \"cc8e8bc1-27d6-41ee-a3ad-5f54da201888\") " pod="openshift-route-controller-manager/route-controller-manager-777997c889-676zh" Mar 09 
18:28:30 crc kubenswrapper[4821]: I0309 18:28:30.792844 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbjdb\" (UniqueName: \"kubernetes.io/projected/cc8e8bc1-27d6-41ee-a3ad-5f54da201888-kube-api-access-vbjdb\") pod \"route-controller-manager-777997c889-676zh\" (UID: \"cc8e8bc1-27d6-41ee-a3ad-5f54da201888\") " pod="openshift-route-controller-manager/route-controller-manager-777997c889-676zh" Mar 09 18:28:30 crc kubenswrapper[4821]: I0309 18:28:30.890433 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-777997c889-676zh" Mar 09 18:28:31 crc kubenswrapper[4821]: I0309 18:28:31.345524 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-777997c889-676zh"] Mar 09 18:28:31 crc kubenswrapper[4821]: W0309 18:28:31.355912 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc8e8bc1_27d6_41ee_a3ad_5f54da201888.slice/crio-5fea9c62dd5461bc88bb701a6b060f974bdc2145277e56b33ba30ea3daf86200 WatchSource:0}: Error finding container 5fea9c62dd5461bc88bb701a6b060f974bdc2145277e56b33ba30ea3daf86200: Status 404 returned error can't find the container with id 5fea9c62dd5461bc88bb701a6b060f974bdc2145277e56b33ba30ea3daf86200 Mar 09 18:28:31 crc kubenswrapper[4821]: I0309 18:28:31.564122 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e9108db-43ad-4fd6-8f6a-742c86c78953" path="/var/lib/kubelet/pods/5e9108db-43ad-4fd6-8f6a-742c86c78953/volumes" Mar 09 18:28:31 crc kubenswrapper[4821]: I0309 18:28:31.565776 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5" path="/var/lib/kubelet/pods/9adc8bfa-de1d-4f8f-88bd-4ce739ba24d5/volumes" Mar 09 18:28:31 crc kubenswrapper[4821]: I0309 18:28:31.648140 4821 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-777997c889-676zh" event={"ID":"cc8e8bc1-27d6-41ee-a3ad-5f54da201888","Type":"ContainerStarted","Data":"a0f291852c17a53e42d1dbde1a6df28de0fc4dddc9dac331024e26d236ec1557"} Mar 09 18:28:31 crc kubenswrapper[4821]: I0309 18:28:31.648526 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-777997c889-676zh" Mar 09 18:28:31 crc kubenswrapper[4821]: I0309 18:28:31.648567 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-777997c889-676zh" event={"ID":"cc8e8bc1-27d6-41ee-a3ad-5f54da201888","Type":"ContainerStarted","Data":"5fea9c62dd5461bc88bb701a6b060f974bdc2145277e56b33ba30ea3daf86200"} Mar 09 18:28:31 crc kubenswrapper[4821]: I0309 18:28:31.679928 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 09 18:28:31 crc kubenswrapper[4821]: I0309 18:28:31.709251 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-777997c889-676zh" podStartSLOduration=3.709222656 podStartE2EDuration="3.709222656s" podCreationTimestamp="2026-03-09 18:28:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:28:31.665523309 +0000 UTC m=+248.826899225" watchObservedRunningTime="2026-03-09 18:28:31.709222656 +0000 UTC m=+248.870598512" Mar 09 18:28:32 crc kubenswrapper[4821]: I0309 18:28:32.042741 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-777997c889-676zh" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.585633 4821 kubelet.go:2431] "SyncLoop REMOVE" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.586394 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587" gracePeriod=15 Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.586471 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3" gracePeriod=15 Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.586452 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://230962e0998443268c5544ff6cd84de9737ca5052b545fefaf936a65035eac76" gracePeriod=15 Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.586614 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77" gracePeriod=15 Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.586614 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56" gracePeriod=15 Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 
18:28:36.590034 4821 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 09 18:28:36 crc kubenswrapper[4821]: E0309 18:28:36.590450 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.590492 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 09 18:28:36 crc kubenswrapper[4821]: E0309 18:28:36.590513 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.590529 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 09 18:28:36 crc kubenswrapper[4821]: E0309 18:28:36.590551 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.590568 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 09 18:28:36 crc kubenswrapper[4821]: E0309 18:28:36.590590 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.590608 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 09 18:28:36 crc kubenswrapper[4821]: E0309 18:28:36.590631 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" 
Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.590660 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 09 18:28:36 crc kubenswrapper[4821]: E0309 18:28:36.590680 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.590696 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 09 18:28:36 crc kubenswrapper[4821]: E0309 18:28:36.590719 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.590734 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 09 18:28:36 crc kubenswrapper[4821]: E0309 18:28:36.590750 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.590766 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 09 18:28:36 crc kubenswrapper[4821]: E0309 18:28:36.590802 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.590818 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.590988 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.591013 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.591027 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.591043 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.591061 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.591077 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.591100 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.591116 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 09 18:28:36 crc kubenswrapper[4821]: E0309 18:28:36.591281 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.591296 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 09 18:28:36 crc kubenswrapper[4821]: 
I0309 18:28:36.591502 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.595366 4821 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.596667 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.600260 4821 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Mar 09 18:28:36 crc kubenswrapper[4821]: E0309 18:28:36.655824 4821 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.74:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.764160 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.764226 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.764305 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.764412 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.764435 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.764473 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.764494 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.764519 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.865612 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.865662 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.865706 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.865747 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.865717 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.865783 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.865808 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.865871 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.865892 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.865922 4821 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.865941 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.866003 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.866026 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.866077 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.866106 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.866082 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 09 18:28:36 crc kubenswrapper[4821]: I0309 18:28:36.957925 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 09 18:28:36 crc kubenswrapper[4821]: W0309 18:28:36.977273 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-672efbd68e388184077d17d4ad2009a47439bbcd0b46ef61c72cb3b2bfb555a0 WatchSource:0}: Error finding container 672efbd68e388184077d17d4ad2009a47439bbcd0b46ef61c72cb3b2bfb555a0: Status 404 returned error can't find the container with id 672efbd68e388184077d17d4ad2009a47439bbcd0b46ef61c72cb3b2bfb555a0 Mar 09 18:28:36 crc kubenswrapper[4821]: E0309 18:28:36.985110 4821 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.74:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189b3fb326da6800 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:28:36.983965696 +0000 UTC m=+254.145341552,LastTimestamp:2026-03-09 18:28:36.983965696 +0000 UTC m=+254.145341552,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:28:37 crc kubenswrapper[4821]: I0309 18:28:37.684551 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"7c81ea312ce1a396b6aabfd8967fc75f7cf75fba41b32cbc232d3bae8c28df51"} Mar 09 18:28:37 crc kubenswrapper[4821]: I0309 18:28:37.684932 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"672efbd68e388184077d17d4ad2009a47439bbcd0b46ef61c72cb3b2bfb555a0"} Mar 09 18:28:37 crc kubenswrapper[4821]: E0309 18:28:37.686840 4821 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.74:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 09 18:28:37 crc kubenswrapper[4821]: I0309 18:28:37.689458 4821 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 09 18:28:37 crc kubenswrapper[4821]: I0309 18:28:37.691009 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 09 18:28:37 crc kubenswrapper[4821]: I0309 18:28:37.692434 4821 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="230962e0998443268c5544ff6cd84de9737ca5052b545fefaf936a65035eac76" exitCode=0 Mar 09 18:28:37 crc kubenswrapper[4821]: I0309 18:28:37.692474 4821 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56" exitCode=0 Mar 09 18:28:37 crc kubenswrapper[4821]: I0309 18:28:37.692492 4821 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3" exitCode=0 Mar 09 18:28:37 crc kubenswrapper[4821]: I0309 18:28:37.692509 4821 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77" exitCode=2 Mar 09 18:28:37 crc kubenswrapper[4821]: I0309 18:28:37.692635 4821 scope.go:117] "RemoveContainer" containerID="119281769a59b2cc00fda33e4dc2691f689666193e3877539e192a51c6b3a7d5" Mar 09 18:28:37 crc kubenswrapper[4821]: I0309 18:28:37.695711 4821 generic.go:334] "Generic (PLEG): container finished" podID="dcdc187f-6e3b-442c-80a1-e404ee5ebb9e" containerID="81af810d180058fde4f30bd8a77b3749ecea989c43f690bf25e1b25dc74b8eee" exitCode=0 Mar 09 18:28:37 crc kubenswrapper[4821]: I0309 18:28:37.695852 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"dcdc187f-6e3b-442c-80a1-e404ee5ebb9e","Type":"ContainerDied","Data":"81af810d180058fde4f30bd8a77b3749ecea989c43f690bf25e1b25dc74b8eee"} Mar 09 18:28:37 crc kubenswrapper[4821]: I0309 18:28:37.697678 4821 status_manager.go:851] "Failed to get status for pod" podUID="dcdc187f-6e3b-442c-80a1-e404ee5ebb9e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Mar 09 18:28:38 crc kubenswrapper[4821]: I0309 18:28:38.725420 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 09 18:28:38 crc kubenswrapper[4821]: I0309 18:28:38.956309 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 09 18:28:38 crc kubenswrapper[4821]: I0309 18:28:38.957815 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 09 18:28:38 crc kubenswrapper[4821]: I0309 18:28:38.958390 4821 status_manager.go:851] "Failed to get status for pod" podUID="dcdc187f-6e3b-442c-80a1-e404ee5ebb9e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Mar 09 18:28:38 crc kubenswrapper[4821]: I0309 18:28:38.958777 4821 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.100060 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.100121 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.100155 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.100149 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.100270 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.100177 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.100558 4821 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.100579 4821 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.100590 4821 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.136386 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.137210 4821 status_manager.go:851] "Failed to get status for pod" podUID="dcdc187f-6e3b-442c-80a1-e404ee5ebb9e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.137964 4821 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.303359 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcdc187f-6e3b-442c-80a1-e404ee5ebb9e-kube-api-access\") pod \"dcdc187f-6e3b-442c-80a1-e404ee5ebb9e\" (UID: \"dcdc187f-6e3b-442c-80a1-e404ee5ebb9e\") " Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.303505 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcdc187f-6e3b-442c-80a1-e404ee5ebb9e-kubelet-dir\") pod \"dcdc187f-6e3b-442c-80a1-e404ee5ebb9e\" (UID: \"dcdc187f-6e3b-442c-80a1-e404ee5ebb9e\") " Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.303607 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcdc187f-6e3b-442c-80a1-e404ee5ebb9e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dcdc187f-6e3b-442c-80a1-e404ee5ebb9e" (UID: "dcdc187f-6e3b-442c-80a1-e404ee5ebb9e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.303649 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dcdc187f-6e3b-442c-80a1-e404ee5ebb9e-var-lock\") pod \"dcdc187f-6e3b-442c-80a1-e404ee5ebb9e\" (UID: \"dcdc187f-6e3b-442c-80a1-e404ee5ebb9e\") " Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.303768 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dcdc187f-6e3b-442c-80a1-e404ee5ebb9e-var-lock" (OuterVolumeSpecName: "var-lock") pod "dcdc187f-6e3b-442c-80a1-e404ee5ebb9e" (UID: "dcdc187f-6e3b-442c-80a1-e404ee5ebb9e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.304130 4821 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dcdc187f-6e3b-442c-80a1-e404ee5ebb9e-var-lock\") on node \"crc\" DevicePath \"\"" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.304163 4821 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dcdc187f-6e3b-442c-80a1-e404ee5ebb9e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.312393 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcdc187f-6e3b-442c-80a1-e404ee5ebb9e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "dcdc187f-6e3b-442c-80a1-e404ee5ebb9e" (UID: "dcdc187f-6e3b-442c-80a1-e404ee5ebb9e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.405869 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcdc187f-6e3b-442c-80a1-e404ee5ebb9e-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.564368 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.736242 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"dcdc187f-6e3b-442c-80a1-e404ee5ebb9e","Type":"ContainerDied","Data":"29e0ea09e04388cefcffb1b33bc8f9aa36c53c591aa7d96c8b21aa75d8930a2c"} Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.736288 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29e0ea09e04388cefcffb1b33bc8f9aa36c53c591aa7d96c8b21aa75d8930a2c" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.736308 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.741577 4821 status_manager.go:851] "Failed to get status for pod" podUID="dcdc187f-6e3b-442c-80a1-e404ee5ebb9e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.741606 4821 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587" exitCode=0 Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.741660 4821 scope.go:117] "RemoveContainer" containerID="230962e0998443268c5544ff6cd84de9737ca5052b545fefaf936a65035eac76" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.741714 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.742347 4821 status_manager.go:851] "Failed to get status for pod" podUID="dcdc187f-6e3b-442c-80a1-e404ee5ebb9e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.742639 4821 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.744665 4821 status_manager.go:851] "Failed to get status for pod" podUID="dcdc187f-6e3b-442c-80a1-e404ee5ebb9e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.745040 4821 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.757280 4821 scope.go:117] "RemoveContainer" containerID="082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.777571 4821 scope.go:117] "RemoveContainer" containerID="f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3" Mar 09 18:28:39 crc 
kubenswrapper[4821]: I0309 18:28:39.793976 4821 scope.go:117] "RemoveContainer" containerID="3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.807217 4821 scope.go:117] "RemoveContainer" containerID="fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.827477 4821 scope.go:117] "RemoveContainer" containerID="349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.851567 4821 scope.go:117] "RemoveContainer" containerID="230962e0998443268c5544ff6cd84de9737ca5052b545fefaf936a65035eac76" Mar 09 18:28:39 crc kubenswrapper[4821]: E0309 18:28:39.852193 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"230962e0998443268c5544ff6cd84de9737ca5052b545fefaf936a65035eac76\": container with ID starting with 230962e0998443268c5544ff6cd84de9737ca5052b545fefaf936a65035eac76 not found: ID does not exist" containerID="230962e0998443268c5544ff6cd84de9737ca5052b545fefaf936a65035eac76" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.852247 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"230962e0998443268c5544ff6cd84de9737ca5052b545fefaf936a65035eac76"} err="failed to get container status \"230962e0998443268c5544ff6cd84de9737ca5052b545fefaf936a65035eac76\": rpc error: code = NotFound desc = could not find container \"230962e0998443268c5544ff6cd84de9737ca5052b545fefaf936a65035eac76\": container with ID starting with 230962e0998443268c5544ff6cd84de9737ca5052b545fefaf936a65035eac76 not found: ID does not exist" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.852276 4821 scope.go:117] "RemoveContainer" containerID="082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56" Mar 09 18:28:39 crc kubenswrapper[4821]: E0309 18:28:39.852643 
4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56\": container with ID starting with 082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56 not found: ID does not exist" containerID="082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.852669 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56"} err="failed to get container status \"082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56\": rpc error: code = NotFound desc = could not find container \"082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56\": container with ID starting with 082ae49d6351c4b7c131f2acf1fa03bb027fb2936d8329b47161296deb0f2f56 not found: ID does not exist" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.852685 4821 scope.go:117] "RemoveContainer" containerID="f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3" Mar 09 18:28:39 crc kubenswrapper[4821]: E0309 18:28:39.853044 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3\": container with ID starting with f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3 not found: ID does not exist" containerID="f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.853075 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3"} err="failed to get container status \"f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3\": rpc error: code = 
NotFound desc = could not find container \"f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3\": container with ID starting with f00430217365e7e6e833c0433c6ba93d95ba4524be65694970a2c02d160cf9d3 not found: ID does not exist" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.853097 4821 scope.go:117] "RemoveContainer" containerID="3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77" Mar 09 18:28:39 crc kubenswrapper[4821]: E0309 18:28:39.853526 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77\": container with ID starting with 3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77 not found: ID does not exist" containerID="3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.853550 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77"} err="failed to get container status \"3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77\": rpc error: code = NotFound desc = could not find container \"3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77\": container with ID starting with 3271b422586b682a9295e96cfb28eef2080d9a24afd76d64b7570168fb132f77 not found: ID does not exist" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.853567 4821 scope.go:117] "RemoveContainer" containerID="fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587" Mar 09 18:28:39 crc kubenswrapper[4821]: E0309 18:28:39.853842 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587\": container with ID starting with 
fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587 not found: ID does not exist" containerID="fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.853862 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587"} err="failed to get container status \"fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587\": rpc error: code = NotFound desc = could not find container \"fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587\": container with ID starting with fa324a823cad402644c926e01ada71fad9e4973e1785939d56f8116dd51a6587 not found: ID does not exist" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.853879 4821 scope.go:117] "RemoveContainer" containerID="349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524" Mar 09 18:28:39 crc kubenswrapper[4821]: E0309 18:28:39.854814 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\": container with ID starting with 349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524 not found: ID does not exist" containerID="349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524" Mar 09 18:28:39 crc kubenswrapper[4821]: I0309 18:28:39.854862 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524"} err="failed to get container status \"349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\": rpc error: code = NotFound desc = could not find container \"349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524\": container with ID starting with 349109f198f86de01260640f303b12477f8ffefde1a13fa65a1fa11860bac524 not found: ID does not 
exist" Mar 09 18:28:42 crc kubenswrapper[4821]: E0309 18:28:42.901392 4821 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.74:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189b3fb326da6800 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-09 18:28:36.983965696 +0000 UTC m=+254.145341552,LastTimestamp:2026-03-09 18:28:36.983965696 +0000 UTC m=+254.145341552,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 09 18:28:43 crc kubenswrapper[4821]: I0309 18:28:43.556146 4821 status_manager.go:851] "Failed to get status for pod" podUID="dcdc187f-6e3b-442c-80a1-e404ee5ebb9e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Mar 09 18:28:44 crc kubenswrapper[4821]: E0309 18:28:44.376088 4821 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" Mar 09 18:28:44 crc kubenswrapper[4821]: E0309 18:28:44.377863 4821 controller.go:195] "Failed to update 
lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" Mar 09 18:28:44 crc kubenswrapper[4821]: E0309 18:28:44.378198 4821 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" Mar 09 18:28:44 crc kubenswrapper[4821]: E0309 18:28:44.378726 4821 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" Mar 09 18:28:44 crc kubenswrapper[4821]: E0309 18:28:44.379455 4821 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" Mar 09 18:28:44 crc kubenswrapper[4821]: I0309 18:28:44.379938 4821 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 09 18:28:44 crc kubenswrapper[4821]: E0309 18:28:44.380669 4821 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" interval="200ms" Mar 09 18:28:44 crc kubenswrapper[4821]: E0309 18:28:44.581572 4821 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" interval="400ms" Mar 09 18:28:44 crc kubenswrapper[4821]: E0309 18:28:44.983124 4821 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" interval="800ms" Mar 09 18:28:45 crc kubenswrapper[4821]: E0309 18:28:45.784912 4821 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" interval="1.6s" Mar 09 18:28:47 crc kubenswrapper[4821]: E0309 18:28:47.386222 4821 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" interval="3.2s" Mar 09 18:28:48 crc kubenswrapper[4821]: I0309 18:28:48.551537 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 09 18:28:48 crc kubenswrapper[4821]: I0309 18:28:48.552587 4821 status_manager.go:851] "Failed to get status for pod" podUID="dcdc187f-6e3b-442c-80a1-e404ee5ebb9e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Mar 09 18:28:48 crc kubenswrapper[4821]: I0309 18:28:48.577823 4821 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55fc2290-6300-4f7d-98d7-8abdde521a83" Mar 09 18:28:48 crc kubenswrapper[4821]: I0309 18:28:48.578041 4821 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55fc2290-6300-4f7d-98d7-8abdde521a83" Mar 09 18:28:48 crc kubenswrapper[4821]: E0309 18:28:48.578582 4821 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 09 18:28:48 crc kubenswrapper[4821]: I0309 18:28:48.579313 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 09 18:28:48 crc kubenswrapper[4821]: I0309 18:28:48.802727 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a9bad588748c60e3ba730e9ec7242956f8aa65cecdbed175138d3566d2696538"} Mar 09 18:28:49 crc kubenswrapper[4821]: I0309 18:28:49.814111 4821 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="ee1104c47f5da3c9534d3dff6c0b1ab4fb7ecde06eacfbd123c601d3c5d55e5b" exitCode=0 Mar 09 18:28:49 crc kubenswrapper[4821]: I0309 18:28:49.814659 4821 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55fc2290-6300-4f7d-98d7-8abdde521a83" Mar 09 18:28:49 crc kubenswrapper[4821]: I0309 18:28:49.814720 4821 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55fc2290-6300-4f7d-98d7-8abdde521a83" Mar 09 18:28:49 crc kubenswrapper[4821]: I0309 18:28:49.814670 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"ee1104c47f5da3c9534d3dff6c0b1ab4fb7ecde06eacfbd123c601d3c5d55e5b"} Mar 09 18:28:49 crc kubenswrapper[4821]: I0309 18:28:49.815397 4821 status_manager.go:851] "Failed to get status for pod" podUID="dcdc187f-6e3b-442c-80a1-e404ee5ebb9e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" Mar 09 18:28:49 crc kubenswrapper[4821]: E0309 18:28:49.815421 4821 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 
38.102.83.74:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 09 18:28:50 crc kubenswrapper[4821]: I0309 18:28:50.826658 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 09 18:28:50 crc kubenswrapper[4821]: I0309 18:28:50.832274 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 09 18:28:50 crc kubenswrapper[4821]: I0309 18:28:50.832373 4821 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="36e1fee3fa3c896bb4fd7fd76b19cbc801f2d24b463344591a55ed9c940f0d8c" exitCode=1 Mar 09 18:28:50 crc kubenswrapper[4821]: I0309 18:28:50.832505 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"36e1fee3fa3c896bb4fd7fd76b19cbc801f2d24b463344591a55ed9c940f0d8c"} Mar 09 18:28:50 crc kubenswrapper[4821]: I0309 18:28:50.833125 4821 scope.go:117] "RemoveContainer" containerID="36e1fee3fa3c896bb4fd7fd76b19cbc801f2d24b463344591a55ed9c940f0d8c" Mar 09 18:28:50 crc kubenswrapper[4821]: I0309 18:28:50.846080 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"64ddfac86fafe3499ea7bec9fe0512edd87ee1ec810f648e6ff3f8c5b7edf7ad"} Mar 09 18:28:50 crc kubenswrapper[4821]: I0309 18:28:50.846129 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"354cb7a7206993996535019325983e3a0c8ae73aaaab70d181b4208d2f929c4f"} 
Mar 09 18:28:50 crc kubenswrapper[4821]: I0309 18:28:50.846143 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bfd0dd95014954113eebd63d0af15e599367d91912bf2c58d4db9db23cef810a"} Mar 09 18:28:51 crc kubenswrapper[4821]: I0309 18:28:51.853147 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 09 18:28:51 crc kubenswrapper[4821]: I0309 18:28:51.853989 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 09 18:28:51 crc kubenswrapper[4821]: I0309 18:28:51.854061 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"23464a2c40cf9e64f2e6eab0c203cc62685ab91ab392a0eda388e7658dcb4f24"} Mar 09 18:28:51 crc kubenswrapper[4821]: I0309 18:28:51.857032 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9949c59088d2204708a2c3a44aa73ac700dccf2d09d5cc18d471dc5024c90304"} Mar 09 18:28:51 crc kubenswrapper[4821]: I0309 18:28:51.857062 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"995758f890a47103bf1b52f4b0cd04772d79ca1fbdbddc49987c28c1db4b20c9"} Mar 09 18:28:51 crc kubenswrapper[4821]: I0309 18:28:51.857228 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 09 18:28:51 crc 
kubenswrapper[4821]: I0309 18:28:51.857335 4821 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55fc2290-6300-4f7d-98d7-8abdde521a83" Mar 09 18:28:51 crc kubenswrapper[4821]: I0309 18:28:51.857364 4821 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55fc2290-6300-4f7d-98d7-8abdde521a83" Mar 09 18:28:53 crc kubenswrapper[4821]: I0309 18:28:53.430298 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 09 18:28:53 crc kubenswrapper[4821]: I0309 18:28:53.430512 4821 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Mar 09 18:28:53 crc kubenswrapper[4821]: I0309 18:28:53.430590 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Mar 09 18:28:53 crc kubenswrapper[4821]: I0309 18:28:53.579852 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 09 18:28:53 crc kubenswrapper[4821]: I0309 18:28:53.579936 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 09 18:28:53 crc kubenswrapper[4821]: I0309 18:28:53.588619 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 09 18:28:56 crc kubenswrapper[4821]: I0309 18:28:56.868785 4821 
kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 09 18:28:56 crc kubenswrapper[4821]: I0309 18:28:56.893161 4821 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55fc2290-6300-4f7d-98d7-8abdde521a83" Mar 09 18:28:56 crc kubenswrapper[4821]: I0309 18:28:56.893196 4821 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55fc2290-6300-4f7d-98d7-8abdde521a83" Mar 09 18:28:56 crc kubenswrapper[4821]: I0309 18:28:56.900292 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 09 18:28:56 crc kubenswrapper[4821]: I0309 18:28:56.902514 4821 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ce7989dd-312d-4d60-9298-b5d486812466" Mar 09 18:28:57 crc kubenswrapper[4821]: I0309 18:28:57.901029 4821 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55fc2290-6300-4f7d-98d7-8abdde521a83" Mar 09 18:28:57 crc kubenswrapper[4821]: I0309 18:28:57.901087 4821 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55fc2290-6300-4f7d-98d7-8abdde521a83" Mar 09 18:28:59 crc kubenswrapper[4821]: I0309 18:28:59.917971 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 09 18:28:59 crc kubenswrapper[4821]: I0309 18:28:59.918498 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" 
podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 09 18:28:59 crc kubenswrapper[4821]: I0309 18:28:59.982306 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 09 18:29:03 crc kubenswrapper[4821]: I0309 18:29:03.431857 4821 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Mar 09 18:29:03 crc kubenswrapper[4821]: I0309 18:29:03.432256 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Mar 09 18:29:03 crc kubenswrapper[4821]: I0309 18:29:03.577512 4821 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ce7989dd-312d-4d60-9298-b5d486812466" Mar 09 18:29:06 crc kubenswrapper[4821]: I0309 18:29:06.438961 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 09 18:29:06 crc kubenswrapper[4821]: I0309 18:29:06.539066 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 09 18:29:08 crc kubenswrapper[4821]: I0309 18:29:08.638760 4821 reflector.go:368] Caches populated for *v1.Secret from 
object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 09 18:29:08 crc kubenswrapper[4821]: I0309 18:29:08.650967 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 09 18:29:08 crc kubenswrapper[4821]: I0309 18:29:08.730718 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 09 18:29:08 crc kubenswrapper[4821]: I0309 18:29:08.764093 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 09 18:29:08 crc kubenswrapper[4821]: I0309 18:29:08.909219 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 09 18:29:09 crc kubenswrapper[4821]: I0309 18:29:09.176374 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 09 18:29:09 crc kubenswrapper[4821]: I0309 18:29:09.338275 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 09 18:29:09 crc kubenswrapper[4821]: I0309 18:29:09.457497 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 09 18:29:09 crc kubenswrapper[4821]: I0309 18:29:09.461079 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 09 18:29:09 crc kubenswrapper[4821]: I0309 18:29:09.578395 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 09 18:29:09 crc kubenswrapper[4821]: I0309 18:29:09.967905 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 09 18:29:09 crc kubenswrapper[4821]: I0309 18:29:09.986644 4821 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 09 18:29:10 crc kubenswrapper[4821]: I0309 18:29:10.066449 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 09 18:29:10 crc kubenswrapper[4821]: I0309 18:29:10.213981 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 09 18:29:10 crc kubenswrapper[4821]: I0309 18:29:10.216898 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 09 18:29:10 crc kubenswrapper[4821]: I0309 18:29:10.224786 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 09 18:29:10 crc kubenswrapper[4821]: I0309 18:29:10.293087 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 09 18:29:10 crc kubenswrapper[4821]: I0309 18:29:10.352844 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 09 18:29:10 crc kubenswrapper[4821]: I0309 18:29:10.441973 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 09 18:29:10 crc kubenswrapper[4821]: I0309 18:29:10.523987 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 09 18:29:10 crc kubenswrapper[4821]: I0309 18:29:10.596470 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 09 18:29:10 crc kubenswrapper[4821]: I0309 18:29:10.629990 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 09 18:29:10 crc kubenswrapper[4821]: I0309 
18:29:10.640611 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 09 18:29:10 crc kubenswrapper[4821]: I0309 18:29:10.694039 4821 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 09 18:29:10 crc kubenswrapper[4821]: I0309 18:29:10.785826 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 09 18:29:10 crc kubenswrapper[4821]: I0309 18:29:10.826157 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 09 18:29:10 crc kubenswrapper[4821]: I0309 18:29:10.851864 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 09 18:29:10 crc kubenswrapper[4821]: I0309 18:29:10.927275 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 09 18:29:11 crc kubenswrapper[4821]: I0309 18:29:11.088025 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 09 18:29:11 crc kubenswrapper[4821]: I0309 18:29:11.115428 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 09 18:29:11 crc kubenswrapper[4821]: I0309 18:29:11.172100 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 09 18:29:11 crc kubenswrapper[4821]: I0309 18:29:11.176761 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 09 18:29:11 crc kubenswrapper[4821]: I0309 18:29:11.199501 4821 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 09 18:29:11 crc kubenswrapper[4821]: I0309 18:29:11.296983 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 09 18:29:11 crc kubenswrapper[4821]: I0309 18:29:11.367261 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 09 18:29:11 crc kubenswrapper[4821]: I0309 18:29:11.431873 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 09 18:29:11 crc kubenswrapper[4821]: I0309 18:29:11.648844 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 09 18:29:11 crc kubenswrapper[4821]: I0309 18:29:11.663181 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 09 18:29:11 crc kubenswrapper[4821]: I0309 18:29:11.666000 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 09 18:29:11 crc kubenswrapper[4821]: I0309 18:29:11.684056 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 09 18:29:11 crc kubenswrapper[4821]: I0309 18:29:11.724993 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 09 18:29:11 crc kubenswrapper[4821]: I0309 18:29:11.814037 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 09 18:29:11 crc kubenswrapper[4821]: I0309 18:29:11.854622 4821 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 09 18:29:11 crc kubenswrapper[4821]: I0309 18:29:11.904474 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 09 18:29:11 crc kubenswrapper[4821]: I0309 18:29:11.951985 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 09 18:29:11 crc kubenswrapper[4821]: I0309 18:29:11.964525 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 09 18:29:12 crc kubenswrapper[4821]: I0309 18:29:12.036042 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 09 18:29:12 crc kubenswrapper[4821]: I0309 18:29:12.070249 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 09 18:29:12 crc kubenswrapper[4821]: I0309 18:29:12.158182 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 09 18:29:12 crc kubenswrapper[4821]: I0309 18:29:12.342001 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 09 18:29:12 crc kubenswrapper[4821]: I0309 18:29:12.367363 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 09 18:29:12 crc kubenswrapper[4821]: I0309 18:29:12.404941 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 09 18:29:12 crc kubenswrapper[4821]: I0309 18:29:12.480192 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 09 18:29:12 crc kubenswrapper[4821]: I0309 18:29:12.665843 4821 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 09 18:29:12 crc kubenswrapper[4821]: I0309 18:29:12.870433 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 09 18:29:12 crc kubenswrapper[4821]: I0309 18:29:12.871468 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 09 18:29:12 crc kubenswrapper[4821]: I0309 18:29:12.884050 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 09 18:29:12 crc kubenswrapper[4821]: I0309 18:29:12.918099 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 09 18:29:12 crc kubenswrapper[4821]: I0309 18:29:12.924289 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 09 18:29:13 crc kubenswrapper[4821]: I0309 18:29:13.027131 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 09 18:29:13 crc kubenswrapper[4821]: I0309 18:29:13.107998 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 09 18:29:13 crc kubenswrapper[4821]: I0309 18:29:13.115804 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 09 18:29:13 crc kubenswrapper[4821]: I0309 18:29:13.128950 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 09 18:29:13 crc kubenswrapper[4821]: I0309 18:29:13.131954 4821 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 09 18:29:13 crc kubenswrapper[4821]: I0309 18:29:13.168695 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 09 18:29:13 crc kubenswrapper[4821]: I0309 18:29:13.193715 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 09 18:29:13 crc kubenswrapper[4821]: I0309 18:29:13.233858 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 09 18:29:13 crc kubenswrapper[4821]: I0309 18:29:13.331462 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 09 18:29:13 crc kubenswrapper[4821]: I0309 18:29:13.343796 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 09 18:29:13 crc kubenswrapper[4821]: I0309 18:29:13.418981 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 09 18:29:13 crc kubenswrapper[4821]: I0309 18:29:13.430468 4821 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Mar 09 18:29:13 crc kubenswrapper[4821]: I0309 18:29:13.430755 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Mar 09 18:29:13 crc kubenswrapper[4821]: I0309 18:29:13.430999 4821 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 09 18:29:13 crc kubenswrapper[4821]: I0309 18:29:13.431844 4821 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"23464a2c40cf9e64f2e6eab0c203cc62685ab91ab392a0eda388e7658dcb4f24"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 09 18:29:13 crc kubenswrapper[4821]: I0309 18:29:13.432011 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://23464a2c40cf9e64f2e6eab0c203cc62685ab91ab392a0eda388e7658dcb4f24" gracePeriod=30 Mar 09 18:29:13 crc kubenswrapper[4821]: I0309 18:29:13.492642 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 09 18:29:13 crc kubenswrapper[4821]: I0309 18:29:13.500182 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 09 18:29:13 crc kubenswrapper[4821]: I0309 18:29:13.567979 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 09 18:29:13 crc kubenswrapper[4821]: I0309 18:29:13.622083 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 09 18:29:13 crc kubenswrapper[4821]: I0309 18:29:13.657884 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 09 18:29:13 crc kubenswrapper[4821]: I0309 18:29:13.665222 4821 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 09 18:29:13 crc kubenswrapper[4821]: I0309 18:29:13.691084 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 09 18:29:13 crc kubenswrapper[4821]: I0309 18:29:13.699534 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 09 18:29:13 crc kubenswrapper[4821]: I0309 18:29:13.713100 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 09 18:29:13 crc kubenswrapper[4821]: I0309 18:29:13.716896 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 09 18:29:13 crc kubenswrapper[4821]: I0309 18:29:13.729937 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 09 18:29:13 crc kubenswrapper[4821]: I0309 18:29:13.757274 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 09 18:29:13 crc kubenswrapper[4821]: I0309 18:29:13.801477 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 09 18:29:13 crc kubenswrapper[4821]: I0309 18:29:13.824536 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 09 18:29:13 crc kubenswrapper[4821]: I0309 18:29:13.916414 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 09 18:29:13 crc kubenswrapper[4821]: I0309 18:29:13.983504 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 09 18:29:14 crc kubenswrapper[4821]: 
I0309 18:29:14.009781 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 09 18:29:14 crc kubenswrapper[4821]: I0309 18:29:14.011125 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 09 18:29:14 crc kubenswrapper[4821]: I0309 18:29:14.011306 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 09 18:29:14 crc kubenswrapper[4821]: I0309 18:29:14.059226 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 09 18:29:14 crc kubenswrapper[4821]: I0309 18:29:14.134734 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 09 18:29:14 crc kubenswrapper[4821]: I0309 18:29:14.158013 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 09 18:29:14 crc kubenswrapper[4821]: I0309 18:29:14.163359 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 09 18:29:14 crc kubenswrapper[4821]: I0309 18:29:14.219481 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 09 18:29:14 crc kubenswrapper[4821]: I0309 18:29:14.277650 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 09 18:29:14 crc kubenswrapper[4821]: I0309 18:29:14.299795 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 09 18:29:14 crc kubenswrapper[4821]: I0309 18:29:14.315858 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 09 
18:29:14 crc kubenswrapper[4821]: I0309 18:29:14.353553 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 09 18:29:14 crc kubenswrapper[4821]: I0309 18:29:14.439566 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 09 18:29:14 crc kubenswrapper[4821]: I0309 18:29:14.463521 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 09 18:29:14 crc kubenswrapper[4821]: I0309 18:29:14.502732 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 09 18:29:14 crc kubenswrapper[4821]: I0309 18:29:14.559470 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 09 18:29:14 crc kubenswrapper[4821]: I0309 18:29:14.561511 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 09 18:29:14 crc kubenswrapper[4821]: I0309 18:29:14.612851 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 09 18:29:14 crc kubenswrapper[4821]: I0309 18:29:14.738807 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 09 18:29:14 crc kubenswrapper[4821]: I0309 18:29:14.826013 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 09 18:29:14 crc kubenswrapper[4821]: I0309 18:29:14.831540 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 09 18:29:14 crc kubenswrapper[4821]: I0309 18:29:14.892996 4821 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Mar 09 18:29:14 crc kubenswrapper[4821]: I0309 18:29:14.976065 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Mar 09 18:29:15 crc kubenswrapper[4821]: I0309 18:29:15.069385 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 09 18:29:15 crc kubenswrapper[4821]: I0309 18:29:15.117247 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Mar 09 18:29:15 crc kubenswrapper[4821]: I0309 18:29:15.143823 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 09 18:29:15 crc kubenswrapper[4821]: I0309 18:29:15.150962 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Mar 09 18:29:15 crc kubenswrapper[4821]: I0309 18:29:15.173934 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 09 18:29:15 crc kubenswrapper[4821]: I0309 18:29:15.180002 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Mar 09 18:29:15 crc kubenswrapper[4821]: I0309 18:29:15.291739 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Mar 09 18:29:15 crc kubenswrapper[4821]: I0309 18:29:15.359849 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Mar 09 18:29:15 crc kubenswrapper[4821]: I0309 18:29:15.441529 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Mar 09 18:29:15 crc kubenswrapper[4821]: I0309 18:29:15.482586 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Mar 09 18:29:15 crc kubenswrapper[4821]: I0309 18:29:15.523932 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 09 18:29:15 crc kubenswrapper[4821]: I0309 18:29:15.540871 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 09 18:29:15 crc kubenswrapper[4821]: I0309 18:29:15.696145 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 09 18:29:15 crc kubenswrapper[4821]: I0309 18:29:15.778987 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 09 18:29:15 crc kubenswrapper[4821]: I0309 18:29:15.824991 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 09 18:29:15 crc kubenswrapper[4821]: I0309 18:29:15.832668 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Mar 09 18:29:15 crc kubenswrapper[4821]: I0309 18:29:15.915790 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Mar 09 18:29:15 crc kubenswrapper[4821]: I0309 18:29:15.960596 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 09 18:29:16 crc kubenswrapper[4821]: I0309 18:29:16.080628 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 09 18:29:16 crc kubenswrapper[4821]: I0309 18:29:16.119842 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 09 18:29:16 crc kubenswrapper[4821]: I0309 18:29:16.209526 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 09 18:29:16 crc kubenswrapper[4821]: I0309 18:29:16.210786 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 09 18:29:16 crc kubenswrapper[4821]: I0309 18:29:16.210793 4821 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 09 18:29:16 crc kubenswrapper[4821]: I0309 18:29:16.210812 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 09 18:29:16 crc kubenswrapper[4821]: I0309 18:29:16.334883 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 09 18:29:16 crc kubenswrapper[4821]: I0309 18:29:16.380104 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Mar 09 18:29:16 crc kubenswrapper[4821]: I0309 18:29:16.383179 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 09 18:29:16 crc kubenswrapper[4821]: I0309 18:29:16.422218 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 09 18:29:16 crc kubenswrapper[4821]: I0309 18:29:16.495453 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 09 18:29:16 crc kubenswrapper[4821]: I0309 18:29:16.557486 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 09 18:29:16 crc kubenswrapper[4821]: I0309 18:29:16.644849 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 09 18:29:16 crc kubenswrapper[4821]: I0309 18:29:16.687167 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 09 18:29:16 crc kubenswrapper[4821]: I0309 18:29:16.733023 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Mar 09 18:29:16 crc kubenswrapper[4821]: I0309 18:29:16.832108 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Mar 09 18:29:16 crc kubenswrapper[4821]: I0309 18:29:16.844881 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Mar 09 18:29:16 crc kubenswrapper[4821]: I0309 18:29:16.933705 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 09 18:29:16 crc kubenswrapper[4821]: I0309 18:29:16.983754 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 09 18:29:16 crc kubenswrapper[4821]: I0309 18:29:16.990785 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 09 18:29:17 crc kubenswrapper[4821]: I0309 18:29:17.034845 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Mar 09 18:29:17 crc kubenswrapper[4821]: I0309 18:29:17.039612 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 09 18:29:17 crc kubenswrapper[4821]: I0309 18:29:17.047779 4821 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 09 18:29:17 crc kubenswrapper[4821]: I0309 18:29:17.154515 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 09 18:29:17 crc kubenswrapper[4821]: I0309 18:29:17.164718 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 09 18:29:17 crc kubenswrapper[4821]: I0309 18:29:17.167946 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 09 18:29:17 crc kubenswrapper[4821]: I0309 18:29:17.203294 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 09 18:29:17 crc kubenswrapper[4821]: I0309 18:29:17.328659 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Mar 09 18:29:17 crc kubenswrapper[4821]: I0309 18:29:17.339563 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 09 18:29:17 crc kubenswrapper[4821]: I0309 18:29:17.346522 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 09 18:29:17 crc kubenswrapper[4821]: I0309 18:29:17.406177 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 09 18:29:17 crc kubenswrapper[4821]: I0309 18:29:17.416314 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 09 18:29:17 crc kubenswrapper[4821]: I0309 18:29:17.417021 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 09 18:29:17 crc kubenswrapper[4821]: I0309 18:29:17.426926 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Mar 09 18:29:17 crc kubenswrapper[4821]: I0309 18:29:17.486550 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Mar 09 18:29:17 crc kubenswrapper[4821]: I0309 18:29:17.661828 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 09 18:29:17 crc kubenswrapper[4821]: I0309 18:29:17.753385 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Mar 09 18:29:17 crc kubenswrapper[4821]: I0309 18:29:17.792943 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Mar 09 18:29:17 crc kubenswrapper[4821]: I0309 18:29:17.804860 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 09 18:29:17 crc kubenswrapper[4821]: I0309 18:29:17.845720 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 09 18:29:17 crc kubenswrapper[4821]: I0309 18:29:17.929810 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Mar 09 18:29:18 crc kubenswrapper[4821]: I0309 18:29:18.062039 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 09 18:29:18 crc kubenswrapper[4821]: I0309 18:29:18.093718 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 09 18:29:18 crc kubenswrapper[4821]: I0309 18:29:18.243012 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 09 18:29:18 crc kubenswrapper[4821]: I0309 18:29:18.247277 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 09 18:29:18 crc kubenswrapper[4821]: I0309 18:29:18.257902 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Mar 09 18:29:18 crc kubenswrapper[4821]: I0309 18:29:18.280538 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 09 18:29:18 crc kubenswrapper[4821]: I0309 18:29:18.287143 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Mar 09 18:29:18 crc kubenswrapper[4821]: I0309 18:29:18.302486 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 09 18:29:18 crc kubenswrapper[4821]: I0309 18:29:18.332144 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Mar 09 18:29:18 crc kubenswrapper[4821]: I0309 18:29:18.427627 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Mar 09 18:29:18 crc kubenswrapper[4821]: I0309 18:29:18.486285 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 09 18:29:18 crc kubenswrapper[4821]: I0309 18:29:18.565856 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Mar 09 18:29:18 crc kubenswrapper[4821]: I0309 18:29:18.689988 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Mar 09 18:29:18 crc kubenswrapper[4821]: I0309 18:29:18.744651 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Mar 09 18:29:18 crc kubenswrapper[4821]: I0309 18:29:18.770127 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Mar 09 18:29:18 crc kubenswrapper[4821]: I0309 18:29:18.793802 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 09 18:29:18 crc kubenswrapper[4821]: I0309 18:29:18.833174 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 09 18:29:18 crc kubenswrapper[4821]: I0309 18:29:18.910181 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 09 18:29:18 crc kubenswrapper[4821]: I0309 18:29:18.983567 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Mar 09 18:29:19 crc kubenswrapper[4821]: I0309 18:29:19.015961 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Mar 09 18:29:19 crc kubenswrapper[4821]: I0309 18:29:19.038006 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 09 18:29:19 crc kubenswrapper[4821]: I0309 18:29:19.229872 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 09 18:29:19 crc kubenswrapper[4821]: I0309 18:29:19.263644 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 09 18:29:19 crc kubenswrapper[4821]: I0309 18:29:19.311376 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Mar 09 18:29:19 crc kubenswrapper[4821]: I0309 18:29:19.346659 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 09 18:29:19 crc kubenswrapper[4821]: I0309 18:29:19.489898 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 09 18:29:19 crc kubenswrapper[4821]: I0309 18:29:19.586232 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 09 18:29:19 crc kubenswrapper[4821]: I0309 18:29:19.646307 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 09 18:29:19 crc kubenswrapper[4821]: I0309 18:29:19.663027 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 09 18:29:19 crc kubenswrapper[4821]: I0309 18:29:19.860218 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Mar 09 18:29:19 crc kubenswrapper[4821]: I0309 18:29:19.864314 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 09 18:29:19 crc kubenswrapper[4821]: I0309 18:29:19.934089 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 09 18:29:20 crc kubenswrapper[4821]: I0309 18:29:20.063388 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 09 18:29:20 crc kubenswrapper[4821]: I0309 18:29:20.063515 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Mar 09 18:29:20 crc kubenswrapper[4821]: I0309 18:29:20.149149 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Mar 09 18:29:20 crc kubenswrapper[4821]: I0309 18:29:20.193853 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Mar 09 18:29:20 crc kubenswrapper[4821]: I0309 18:29:20.236404 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Mar 09 18:29:20 crc kubenswrapper[4821]: I0309 18:29:20.254334 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 09 18:29:20 crc kubenswrapper[4821]: I0309 18:29:20.333709 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 09 18:29:20 crc kubenswrapper[4821]: I0309 18:29:20.346153 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 09 18:29:20 crc kubenswrapper[4821]: I0309 18:29:20.417128 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 09 18:29:20 crc kubenswrapper[4821]: I0309 18:29:20.452036 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 09 18:29:20 crc kubenswrapper[4821]: I0309 18:29:20.570620 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Mar 09 18:29:20 crc kubenswrapper[4821]: I0309 18:29:20.693897 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 09 18:29:20 crc kubenswrapper[4821]: I0309 18:29:20.815524 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 09 18:29:20 crc kubenswrapper[4821]: I0309 18:29:20.883187 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Mar 09 18:29:20 crc kubenswrapper[4821]: I0309 18:29:20.989920 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 09 18:29:21 crc kubenswrapper[4821]: I0309 18:29:21.040017 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 09 18:29:21 crc kubenswrapper[4821]: I0309 18:29:21.066238 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 09 18:29:21 crc kubenswrapper[4821]: I0309 18:29:21.129850 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 09 18:29:21 crc kubenswrapper[4821]: I0309 18:29:21.331708 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Mar 09 18:29:21 crc kubenswrapper[4821]: I0309 18:29:21.334030 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 09 18:29:21 crc kubenswrapper[4821]: I0309 18:29:21.382918 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 09 18:29:21 crc kubenswrapper[4821]: I0309 18:29:21.413626 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 09 18:29:21 crc kubenswrapper[4821]: I0309 18:29:21.414286 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 09 18:29:21 crc kubenswrapper[4821]: I0309 18:29:21.422877 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Mar 09 18:29:21 crc kubenswrapper[4821]: I0309 18:29:21.472572 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 09 18:29:21 crc kubenswrapper[4821]: I0309 18:29:21.494650 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 09 18:29:21 crc kubenswrapper[4821]: I0309 18:29:21.531217 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Mar 09 18:29:21 crc kubenswrapper[4821]: I0309 18:29:21.558288 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 09 18:29:21 crc kubenswrapper[4821]: I0309 18:29:21.636482 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Mar 09 18:29:21 crc kubenswrapper[4821]: I0309 18:29:21.725634 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Mar 09 18:29:21 crc kubenswrapper[4821]: I0309 18:29:21.909844 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 09 18:29:21 crc kubenswrapper[4821]: I0309 18:29:21.954830 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 09 18:29:21 crc kubenswrapper[4821]: I0309 18:29:21.955408 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Mar 09 18:29:22 crc kubenswrapper[4821]: I0309 18:29:22.035147 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 09 18:29:22 crc kubenswrapper[4821]: I0309 18:29:22.085982 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 09 18:29:22 crc kubenswrapper[4821]: I0309 18:29:22.271457 4821 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 09 18:29:22 crc kubenswrapper[4821]: I0309 18:29:22.326211 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Mar 09 18:29:22 crc kubenswrapper[4821]: I0309 18:29:22.515166 4821 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 09 18:29:22 crc kubenswrapper[4821]: I0309 18:29:22.521817 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Mar 09 18:29:22 crc kubenswrapper[4821]: I0309 18:29:22.521897 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Mar 09 18:29:22 crc kubenswrapper[4821]: I0309 18:29:22.527125 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 09 18:29:22 crc kubenswrapper[4821]: I0309 18:29:22.549937 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=26.549908203 podStartE2EDuration="26.549908203s" podCreationTimestamp="2026-03-09 18:28:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:29:22.541353997 +0000 UTC m=+299.702729893" watchObservedRunningTime="2026-03-09 18:29:22.549908203 +0000 UTC m=+299.711284099"
Mar 09 18:29:22 crc kubenswrapper[4821]: I0309 18:29:22.797188 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Mar 09 18:29:22 crc kubenswrapper[4821]: I0309 18:29:22.912737 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 09 18:29:23 crc kubenswrapper[4821]: I0309 18:29:23.102189 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 09 18:29:23 crc kubenswrapper[4821]: I0309 18:29:23.160839 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 09 18:29:23 crc kubenswrapper[4821]: I0309 18:29:23.214672 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Mar 09 18:29:23 crc kubenswrapper[4821]: I0309 18:29:23.270602 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 09 18:29:23 crc kubenswrapper[4821]: I0309 18:29:23.385081 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Mar 09 18:29:23 crc kubenswrapper[4821]: I0309 18:29:23.598772 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Mar 09 18:29:23 crc kubenswrapper[4821]: I0309 18:29:23.638334 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 09 18:29:23 crc kubenswrapper[4821]: I0309 18:29:23.663594 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 09 18:29:24 crc kubenswrapper[4821]: I0309 18:29:24.259188 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 09 18:29:24 crc kubenswrapper[4821]: I0309 18:29:24.288422 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Mar 09 18:29:29 crc kubenswrapper[4821]: I0309 18:29:29.913562 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 09 18:29:29 crc kubenswrapper[4821]: I0309 18:29:29.914133 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 09 18:29:30 crc kubenswrapper[4821]: I0309 18:29:30.741370 4821 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Mar 09 18:29:30 crc kubenswrapper[4821]: I0309 18:29:30.742194 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://7c81ea312ce1a396b6aabfd8967fc75f7cf75fba41b32cbc232d3bae8c28df51" gracePeriod=5
Mar 09 18:29:35 crc kubenswrapper[4821]: I0309 18:29:35.615144 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sn8zk"]
Mar 09 18:29:35 crc kubenswrapper[4821]: I0309 18:29:35.621716 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sn8zk" podUID="70ed8562-ec3e-49a0-8ccd-885eea90e9c1" containerName="registry-server" containerID="cri-o://d4679044f8495b36e6b6667a3a6958878fb44b68f17fcefb12eaaf574ef27150" gracePeriod=30
Mar 09 18:29:35 crc kubenswrapper[4821]: I0309 18:29:35.626933 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nk4bg"]
Mar 09 18:29:35 crc kubenswrapper[4821]: I0309 18:29:35.627196 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nk4bg" podUID="07a1db8f-6912-4ff8-9943-24c334031dfb" containerName="registry-server" containerID="cri-o://1f8a2551adf9a2ac55ee995478e47d05b3f65844e7b0819cb317bce8bb52574a" gracePeriod=30
Mar 09 18:29:35 crc kubenswrapper[4821]: I0309 18:29:35.648184 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7nw2x"]
Mar 09 18:29:35 crc kubenswrapper[4821]: I0309 18:29:35.648587 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-7nw2x" podUID="d3699d56-8c7d-4ccf-9ebf-469d84dc6a56" containerName="marketplace-operator" containerID="cri-o://5267639d1b40b8d0a47829649ed4cc773eed9710e4dca98c1041946c1f8334ae" gracePeriod=30
Mar 09 18:29:35 crc kubenswrapper[4821]: I0309 18:29:35.652149 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsq8f"]
Mar 09 18:29:35 crc kubenswrapper[4821]: I0309 18:29:35.653102 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nsq8f" podUID="132d5224-2c4a-4b22-9e2f-b50b98e3b693" containerName="registry-server" containerID="cri-o://a959fd964c95d575bc8de56dfa58e33cec163f220afab1d11923747c61ac1025" gracePeriod=30
Mar 09 18:29:35 crc kubenswrapper[4821]: I0309 18:29:35.657730 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2h8qw"]
Mar 09 18:29:35 crc kubenswrapper[4821]: I0309 18:29:35.692647 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-556c4"]
Mar 09 18:29:35 crc kubenswrapper[4821]: E0309 18:29:35.692849 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Mar 09 18:29:35 crc kubenswrapper[4821]: I0309 18:29:35.692863 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Mar 09 18:29:35 crc kubenswrapper[4821]: E0309 18:29:35.692874 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcdc187f-6e3b-442c-80a1-e404ee5ebb9e" containerName="installer"
Mar 09 18:29:35 crc kubenswrapper[4821]: I0309 18:29:35.692881 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcdc187f-6e3b-442c-80a1-e404ee5ebb9e" containerName="installer"
Mar 09 18:29:35 crc kubenswrapper[4821]: I0309 18:29:35.692995 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcdc187f-6e3b-442c-80a1-e404ee5ebb9e" containerName="installer"
Mar 09 18:29:35 crc kubenswrapper[4821]: I0309 18:29:35.693006 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Mar 09 18:29:35 crc kubenswrapper[4821]: I0309 18:29:35.693408 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-556c4"
Mar 09 18:29:35 crc kubenswrapper[4821]: I0309 18:29:35.704255 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-556c4"]
Mar 09 18:29:35 crc kubenswrapper[4821]: I0309 18:29:35.771585 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/872fb4be-c421-4274-8646-56e708f8c698-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-556c4\" (UID: \"872fb4be-c421-4274-8646-56e708f8c698\") " pod="openshift-marketplace/marketplace-operator-79b997595-556c4"
Mar 09 18:29:35 crc kubenswrapper[4821]: I0309 18:29:35.771703 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh5f8\" (UniqueName: \"kubernetes.io/projected/872fb4be-c421-4274-8646-56e708f8c698-kube-api-access-kh5f8\") pod \"marketplace-operator-79b997595-556c4\" (UID: \"872fb4be-c421-4274-8646-56e708f8c698\") " pod="openshift-marketplace/marketplace-operator-79b997595-556c4"
Mar 09 18:29:35 crc kubenswrapper[4821]: I0309 18:29:35.771794 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/872fb4be-c421-4274-8646-56e708f8c698-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-556c4\" (UID: \"872fb4be-c421-4274-8646-56e708f8c698\") " pod="openshift-marketplace/marketplace-operator-79b997595-556c4"
Mar 09 18:29:35 crc kubenswrapper[4821]: I0309 18:29:35.873508 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/872fb4be-c421-4274-8646-56e708f8c698-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-556c4\" (UID: \"872fb4be-c421-4274-8646-56e708f8c698\") " pod="openshift-marketplace/marketplace-operator-79b997595-556c4"
Mar 09 18:29:35 crc kubenswrapper[4821]: I0309 18:29:35.873547 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh5f8\" (UniqueName: \"kubernetes.io/projected/872fb4be-c421-4274-8646-56e708f8c698-kube-api-access-kh5f8\") pod \"marketplace-operator-79b997595-556c4\" (UID: \"872fb4be-c421-4274-8646-56e708f8c698\") " pod="openshift-marketplace/marketplace-operator-79b997595-556c4"
Mar 09 18:29:35 crc kubenswrapper[4821]: I0309 18:29:35.873586 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/872fb4be-c421-4274-8646-56e708f8c698-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-556c4\" (UID: \"872fb4be-c421-4274-8646-56e708f8c698\") " pod="openshift-marketplace/marketplace-operator-79b997595-556c4"
Mar 09 18:29:35 crc kubenswrapper[4821]: I0309 18:29:35.874861 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/872fb4be-c421-4274-8646-56e708f8c698-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-556c4\" (UID: \"872fb4be-c421-4274-8646-56e708f8c698\") " pod="openshift-marketplace/marketplace-operator-79b997595-556c4"
Mar 09 18:29:35 crc kubenswrapper[4821]: I0309 18:29:35.880990 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/872fb4be-c421-4274-8646-56e708f8c698-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-556c4\" (UID: \"872fb4be-c421-4274-8646-56e708f8c698\") " pod="openshift-marketplace/marketplace-operator-79b997595-556c4"
Mar 09 18:29:35 crc kubenswrapper[4821]: I0309 18:29:35.890700 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh5f8\" (UniqueName: \"kubernetes.io/projected/872fb4be-c421-4274-8646-56e708f8c698-kube-api-access-kh5f8\") pod \"marketplace-operator-79b997595-556c4\" (UID: \"872fb4be-c421-4274-8646-56e708f8c698\") " pod="openshift-marketplace/marketplace-operator-79b997595-556c4"
Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.172146 4821 generic.go:334] "Generic (PLEG): container finished" podID="07a1db8f-6912-4ff8-9943-24c334031dfb" containerID="1f8a2551adf9a2ac55ee995478e47d05b3f65844e7b0819cb317bce8bb52574a" exitCode=0
Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.172223 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nk4bg" event={"ID":"07a1db8f-6912-4ff8-9943-24c334031dfb","Type":"ContainerDied","Data":"1f8a2551adf9a2ac55ee995478e47d05b3f65844e7b0819cb317bce8bb52574a"}
Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.175408 4821 generic.go:334] "Generic (PLEG): container finished" podID="70ed8562-ec3e-49a0-8ccd-885eea90e9c1" containerID="d4679044f8495b36e6b6667a3a6958878fb44b68f17fcefb12eaaf574ef27150" exitCode=0
Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.175456 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sn8zk" event={"ID":"70ed8562-ec3e-49a0-8ccd-885eea90e9c1","Type":"ContainerDied","Data":"d4679044f8495b36e6b6667a3a6958878fb44b68f17fcefb12eaaf574ef27150"}
Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.175517 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sn8zk" event={"ID":"70ed8562-ec3e-49a0-8ccd-885eea90e9c1","Type":"ContainerDied","Data":"2753c51f03bef59bd9f722e0306c0931942762d524f95d8b94ad6cfa70ba0ef1"}
Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.175532 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2753c51f03bef59bd9f722e0306c0931942762d524f95d8b94ad6cfa70ba0ef1"
Mar 09 18:29:36 crc
kubenswrapper[4821]: I0309 18:29:36.181097 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.181234 4821 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="7c81ea312ce1a396b6aabfd8967fc75f7cf75fba41b32cbc232d3bae8c28df51" exitCode=137 Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.181497 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="672efbd68e388184077d17d4ad2009a47439bbcd0b46ef61c72cb3b2bfb555a0" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.184874 4821 generic.go:334] "Generic (PLEG): container finished" podID="d3699d56-8c7d-4ccf-9ebf-469d84dc6a56" containerID="5267639d1b40b8d0a47829649ed4cc773eed9710e4dca98c1041946c1f8334ae" exitCode=0 Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.184922 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7nw2x" event={"ID":"d3699d56-8c7d-4ccf-9ebf-469d84dc6a56","Type":"ContainerDied","Data":"5267639d1b40b8d0a47829649ed4cc773eed9710e4dca98c1041946c1f8334ae"} Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.184955 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7nw2x" event={"ID":"d3699d56-8c7d-4ccf-9ebf-469d84dc6a56","Type":"ContainerDied","Data":"9ed594c2523b0f25f0a4932798e2253c50820b99548e48d7c16a96c49b959fd0"} Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.184970 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ed594c2523b0f25f0a4932798e2253c50820b99548e48d7c16a96c49b959fd0" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.189131 4821 generic.go:334] "Generic (PLEG): container finished" podID="132d5224-2c4a-4b22-9e2f-b50b98e3b693" 
containerID="a959fd964c95d575bc8de56dfa58e33cec163f220afab1d11923747c61ac1025" exitCode=0 Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.189432 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2h8qw" podUID="abf94109-7b6a-4e4f-a178-42e7d6fc45e0" containerName="registry-server" containerID="cri-o://6238c63ab3036cbee2bbb436181131defd84aa4cf4b6bce1a94d9f4b914b35ec" gracePeriod=30 Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.189814 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsq8f" event={"ID":"132d5224-2c4a-4b22-9e2f-b50b98e3b693","Type":"ContainerDied","Data":"a959fd964c95d575bc8de56dfa58e33cec163f220afab1d11923747c61ac1025"} Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.189876 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nsq8f" event={"ID":"132d5224-2c4a-4b22-9e2f-b50b98e3b693","Type":"ContainerDied","Data":"44707abfc8043739daab71658c10299246b9981b6f781130e441157fead4c083"} Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.189896 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44707abfc8043739daab71658c10299246b9981b6f781130e441157fead4c083" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.222584 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-556c4" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.223955 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.224028 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.232999 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sn8zk" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.238758 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7nw2x" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.245425 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nsq8f" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.291425 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nk4bg" Mar 09 18:29:36 crc kubenswrapper[4821]: E0309 18:29:36.315489 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6238c63ab3036cbee2bbb436181131defd84aa4cf4b6bce1a94d9f4b914b35ec is running failed: container process not found" containerID="6238c63ab3036cbee2bbb436181131defd84aa4cf4b6bce1a94d9f4b914b35ec" cmd=["grpc_health_probe","-addr=:50051"] Mar 09 18:29:36 crc kubenswrapper[4821]: E0309 18:29:36.315974 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6238c63ab3036cbee2bbb436181131defd84aa4cf4b6bce1a94d9f4b914b35ec is running failed: container process not found" containerID="6238c63ab3036cbee2bbb436181131defd84aa4cf4b6bce1a94d9f4b914b35ec" cmd=["grpc_health_probe","-addr=:50051"] Mar 09 18:29:36 crc kubenswrapper[4821]: E0309 18:29:36.317723 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: 
checking if PID of 6238c63ab3036cbee2bbb436181131defd84aa4cf4b6bce1a94d9f4b914b35ec is running failed: container process not found" containerID="6238c63ab3036cbee2bbb436181131defd84aa4cf4b6bce1a94d9f4b914b35ec" cmd=["grpc_health_probe","-addr=:50051"] Mar 09 18:29:36 crc kubenswrapper[4821]: E0309 18:29:36.317768 4821 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6238c63ab3036cbee2bbb436181131defd84aa4cf4b6bce1a94d9f4b914b35ec is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-2h8qw" podUID="abf94109-7b6a-4e4f-a178-42e7d6fc45e0" containerName="registry-server" Mar 09 18:29:36 crc kubenswrapper[4821]: E0309 18:29:36.376808 4821 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabf94109_7b6a_4e4f_a178_42e7d6fc45e0.slice/crio-6238c63ab3036cbee2bbb436181131defd84aa4cf4b6bce1a94d9f4b914b35ec.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabf94109_7b6a_4e4f_a178_42e7d6fc45e0.slice/crio-conmon-6238c63ab3036cbee2bbb436181131defd84aa4cf4b6bce1a94d9f4b914b35ec.scope\": RecentStats: unable to find data in memory cache]" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.380608 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ed8562-ec3e-49a0-8ccd-885eea90e9c1-catalog-content\") pod \"70ed8562-ec3e-49a0-8ccd-885eea90e9c1\" (UID: \"70ed8562-ec3e-49a0-8ccd-885eea90e9c1\") " Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.380718 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7w5g\" (UniqueName: \"kubernetes.io/projected/70ed8562-ec3e-49a0-8ccd-885eea90e9c1-kube-api-access-h7w5g\") 
pod \"70ed8562-ec3e-49a0-8ccd-885eea90e9c1\" (UID: \"70ed8562-ec3e-49a0-8ccd-885eea90e9c1\") " Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.380742 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/132d5224-2c4a-4b22-9e2f-b50b98e3b693-utilities\") pod \"132d5224-2c4a-4b22-9e2f-b50b98e3b693\" (UID: \"132d5224-2c4a-4b22-9e2f-b50b98e3b693\") " Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.380765 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.381010 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3699d56-8c7d-4ccf-9ebf-469d84dc6a56-marketplace-trusted-ca\") pod \"d3699d56-8c7d-4ccf-9ebf-469d84dc6a56\" (UID: \"d3699d56-8c7d-4ccf-9ebf-469d84dc6a56\") " Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.381058 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.381169 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.381203 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/132d5224-2c4a-4b22-9e2f-b50b98e3b693-catalog-content\") pod \"132d5224-2c4a-4b22-9e2f-b50b98e3b693\" (UID: \"132d5224-2c4a-4b22-9e2f-b50b98e3b693\") " Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.381337 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rqw7\" (UniqueName: \"kubernetes.io/projected/132d5224-2c4a-4b22-9e2f-b50b98e3b693-kube-api-access-6rqw7\") pod \"132d5224-2c4a-4b22-9e2f-b50b98e3b693\" (UID: \"132d5224-2c4a-4b22-9e2f-b50b98e3b693\") " Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.381369 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ed8562-ec3e-49a0-8ccd-885eea90e9c1-utilities\") pod \"70ed8562-ec3e-49a0-8ccd-885eea90e9c1\" (UID: \"70ed8562-ec3e-49a0-8ccd-885eea90e9c1\") " Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.381424 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.381449 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.381469 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.381501 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d3699d56-8c7d-4ccf-9ebf-469d84dc6a56-marketplace-operator-metrics\") pod \"d3699d56-8c7d-4ccf-9ebf-469d84dc6a56\" (UID: \"d3699d56-8c7d-4ccf-9ebf-469d84dc6a56\") " Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.381494 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.381528 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ttwm\" (UniqueName: \"kubernetes.io/projected/d3699d56-8c7d-4ccf-9ebf-469d84dc6a56-kube-api-access-4ttwm\") pod \"d3699d56-8c7d-4ccf-9ebf-469d84dc6a56\" (UID: \"d3699d56-8c7d-4ccf-9ebf-469d84dc6a56\") " Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.382306 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.382349 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3699d56-8c7d-4ccf-9ebf-469d84dc6a56-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "d3699d56-8c7d-4ccf-9ebf-469d84dc6a56" (UID: "d3699d56-8c7d-4ccf-9ebf-469d84dc6a56"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.382741 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.383005 4821 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.383032 4821 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3699d56-8c7d-4ccf-9ebf-469d84dc6a56-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.383050 4821 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.383064 4821 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 
09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.383082 4821 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.383080 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70ed8562-ec3e-49a0-8ccd-885eea90e9c1-utilities" (OuterVolumeSpecName: "utilities") pod "70ed8562-ec3e-49a0-8ccd-885eea90e9c1" (UID: "70ed8562-ec3e-49a0-8ccd-885eea90e9c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.384420 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/132d5224-2c4a-4b22-9e2f-b50b98e3b693-utilities" (OuterVolumeSpecName: "utilities") pod "132d5224-2c4a-4b22-9e2f-b50b98e3b693" (UID: "132d5224-2c4a-4b22-9e2f-b50b98e3b693"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.394437 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3699d56-8c7d-4ccf-9ebf-469d84dc6a56-kube-api-access-4ttwm" (OuterVolumeSpecName: "kube-api-access-4ttwm") pod "d3699d56-8c7d-4ccf-9ebf-469d84dc6a56" (UID: "d3699d56-8c7d-4ccf-9ebf-469d84dc6a56"). InnerVolumeSpecName "kube-api-access-4ttwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.396920 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/132d5224-2c4a-4b22-9e2f-b50b98e3b693-kube-api-access-6rqw7" (OuterVolumeSpecName: "kube-api-access-6rqw7") pod "132d5224-2c4a-4b22-9e2f-b50b98e3b693" (UID: "132d5224-2c4a-4b22-9e2f-b50b98e3b693"). InnerVolumeSpecName "kube-api-access-6rqw7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.401656 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.402236 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70ed8562-ec3e-49a0-8ccd-885eea90e9c1-kube-api-access-h7w5g" (OuterVolumeSpecName: "kube-api-access-h7w5g") pod "70ed8562-ec3e-49a0-8ccd-885eea90e9c1" (UID: "70ed8562-ec3e-49a0-8ccd-885eea90e9c1"). InnerVolumeSpecName "kube-api-access-h7w5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.402374 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3699d56-8c7d-4ccf-9ebf-469d84dc6a56-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "d3699d56-8c7d-4ccf-9ebf-469d84dc6a56" (UID: "d3699d56-8c7d-4ccf-9ebf-469d84dc6a56"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.412141 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/132d5224-2c4a-4b22-9e2f-b50b98e3b693-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "132d5224-2c4a-4b22-9e2f-b50b98e3b693" (UID: "132d5224-2c4a-4b22-9e2f-b50b98e3b693"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.463242 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70ed8562-ec3e-49a0-8ccd-885eea90e9c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70ed8562-ec3e-49a0-8ccd-885eea90e9c1" (UID: "70ed8562-ec3e-49a0-8ccd-885eea90e9c1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.483935 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07a1db8f-6912-4ff8-9943-24c334031dfb-utilities\") pod \"07a1db8f-6912-4ff8-9943-24c334031dfb\" (UID: \"07a1db8f-6912-4ff8-9943-24c334031dfb\") " Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.483990 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07a1db8f-6912-4ff8-9943-24c334031dfb-catalog-content\") pod \"07a1db8f-6912-4ff8-9943-24c334031dfb\" (UID: \"07a1db8f-6912-4ff8-9943-24c334031dfb\") " Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.484030 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jw8d\" (UniqueName: \"kubernetes.io/projected/07a1db8f-6912-4ff8-9943-24c334031dfb-kube-api-access-5jw8d\") pod \"07a1db8f-6912-4ff8-9943-24c334031dfb\" (UID: \"07a1db8f-6912-4ff8-9943-24c334031dfb\") " Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.484391 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/132d5224-2c4a-4b22-9e2f-b50b98e3b693-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.484412 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rqw7\" (UniqueName: 
\"kubernetes.io/projected/132d5224-2c4a-4b22-9e2f-b50b98e3b693-kube-api-access-6rqw7\") on node \"crc\" DevicePath \"\"" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.484428 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70ed8562-ec3e-49a0-8ccd-885eea90e9c1-utilities\") on node \"crc\" DevicePath \"\"" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.484443 4821 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.484456 4821 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d3699d56-8c7d-4ccf-9ebf-469d84dc6a56-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.484469 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ttwm\" (UniqueName: \"kubernetes.io/projected/d3699d56-8c7d-4ccf-9ebf-469d84dc6a56-kube-api-access-4ttwm\") on node \"crc\" DevicePath \"\"" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.484478 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70ed8562-ec3e-49a0-8ccd-885eea90e9c1-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.484488 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7w5g\" (UniqueName: \"kubernetes.io/projected/70ed8562-ec3e-49a0-8ccd-885eea90e9c1-kube-api-access-h7w5g\") on node \"crc\" DevicePath \"\"" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.484498 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/132d5224-2c4a-4b22-9e2f-b50b98e3b693-utilities\") 
on node \"crc\" DevicePath \"\"" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.484770 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07a1db8f-6912-4ff8-9943-24c334031dfb-utilities" (OuterVolumeSpecName: "utilities") pod "07a1db8f-6912-4ff8-9943-24c334031dfb" (UID: "07a1db8f-6912-4ff8-9943-24c334031dfb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.488020 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07a1db8f-6912-4ff8-9943-24c334031dfb-kube-api-access-5jw8d" (OuterVolumeSpecName: "kube-api-access-5jw8d") pod "07a1db8f-6912-4ff8-9943-24c334031dfb" (UID: "07a1db8f-6912-4ff8-9943-24c334031dfb"). InnerVolumeSpecName "kube-api-access-5jw8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.543540 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07a1db8f-6912-4ff8-9943-24c334031dfb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07a1db8f-6912-4ff8-9943-24c334031dfb" (UID: "07a1db8f-6912-4ff8-9943-24c334031dfb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.586377 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jw8d\" (UniqueName: \"kubernetes.io/projected/07a1db8f-6912-4ff8-9943-24c334031dfb-kube-api-access-5jw8d\") on node \"crc\" DevicePath \"\"" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.586430 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07a1db8f-6912-4ff8-9943-24c334031dfb-utilities\") on node \"crc\" DevicePath \"\"" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.586443 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07a1db8f-6912-4ff8-9943-24c334031dfb-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.613702 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2h8qw" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.713478 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-556c4"] Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.788043 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abf94109-7b6a-4e4f-a178-42e7d6fc45e0-utilities\") pod \"abf94109-7b6a-4e4f-a178-42e7d6fc45e0\" (UID: \"abf94109-7b6a-4e4f-a178-42e7d6fc45e0\") " Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.788111 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abf94109-7b6a-4e4f-a178-42e7d6fc45e0-catalog-content\") pod \"abf94109-7b6a-4e4f-a178-42e7d6fc45e0\" (UID: \"abf94109-7b6a-4e4f-a178-42e7d6fc45e0\") " Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 
18:29:36.788176 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz6f6\" (UniqueName: \"kubernetes.io/projected/abf94109-7b6a-4e4f-a178-42e7d6fc45e0-kube-api-access-qz6f6\") pod \"abf94109-7b6a-4e4f-a178-42e7d6fc45e0\" (UID: \"abf94109-7b6a-4e4f-a178-42e7d6fc45e0\") " Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.789762 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abf94109-7b6a-4e4f-a178-42e7d6fc45e0-utilities" (OuterVolumeSpecName: "utilities") pod "abf94109-7b6a-4e4f-a178-42e7d6fc45e0" (UID: "abf94109-7b6a-4e4f-a178-42e7d6fc45e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.794253 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abf94109-7b6a-4e4f-a178-42e7d6fc45e0-kube-api-access-qz6f6" (OuterVolumeSpecName: "kube-api-access-qz6f6") pod "abf94109-7b6a-4e4f-a178-42e7d6fc45e0" (UID: "abf94109-7b6a-4e4f-a178-42e7d6fc45e0"). InnerVolumeSpecName "kube-api-access-qz6f6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.889490 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abf94109-7b6a-4e4f-a178-42e7d6fc45e0-utilities\") on node \"crc\" DevicePath \"\"" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.889726 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz6f6\" (UniqueName: \"kubernetes.io/projected/abf94109-7b6a-4e4f-a178-42e7d6fc45e0-kube-api-access-qz6f6\") on node \"crc\" DevicePath \"\"" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.936806 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abf94109-7b6a-4e4f-a178-42e7d6fc45e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "abf94109-7b6a-4e4f-a178-42e7d6fc45e0" (UID: "abf94109-7b6a-4e4f-a178-42e7d6fc45e0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:29:36 crc kubenswrapper[4821]: I0309 18:29:36.991126 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abf94109-7b6a-4e4f-a178-42e7d6fc45e0-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.196829 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-556c4" event={"ID":"872fb4be-c421-4274-8646-56e708f8c698","Type":"ContainerStarted","Data":"26c4a6aab4a735c1d1b0e9af5bbc28f52c548fe1003a5460aaa82a74e1475664"} Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.196879 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-556c4" event={"ID":"872fb4be-c421-4274-8646-56e708f8c698","Type":"ContainerStarted","Data":"31fef98b345155e6236512a2465036b1cc4bbabf8fa4105f7c600449879a76bd"} Mar 09 18:29:37 
crc kubenswrapper[4821]: I0309 18:29:37.197053 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-556c4" Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.199057 4821 generic.go:334] "Generic (PLEG): container finished" podID="abf94109-7b6a-4e4f-a178-42e7d6fc45e0" containerID="6238c63ab3036cbee2bbb436181131defd84aa4cf4b6bce1a94d9f4b914b35ec" exitCode=0 Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.199086 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2h8qw" event={"ID":"abf94109-7b6a-4e4f-a178-42e7d6fc45e0","Type":"ContainerDied","Data":"6238c63ab3036cbee2bbb436181131defd84aa4cf4b6bce1a94d9f4b914b35ec"} Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.199137 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2h8qw" event={"ID":"abf94109-7b6a-4e4f-a178-42e7d6fc45e0","Type":"ContainerDied","Data":"43be8128b41711a1b74e065d7322472332257449ab4a307ea698e7039e2243ab"} Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.199158 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2h8qw" Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.199190 4821 scope.go:117] "RemoveContainer" containerID="6238c63ab3036cbee2bbb436181131defd84aa4cf4b6bce1a94d9f4b914b35ec" Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.200997 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-556c4" Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.201802 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sn8zk" Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.202443 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nk4bg" event={"ID":"07a1db8f-6912-4ff8-9943-24c334031dfb","Type":"ContainerDied","Data":"85a8bca09db8732d24af0b309e1b26277bb38db2b96719905012c11a1f9088e9"} Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.202464 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.202483 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nsq8f" Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.202521 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7nw2x" Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.205442 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nk4bg" Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.218666 4821 scope.go:117] "RemoveContainer" containerID="7d8d0264b15f150e64278b8ff2dcf6f5312c3c93592f008ead630fff9b028490" Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.222336 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-556c4" podStartSLOduration=2.222295685 podStartE2EDuration="2.222295685s" podCreationTimestamp="2026-03-09 18:29:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:29:37.215228891 +0000 UTC m=+314.376604757" watchObservedRunningTime="2026-03-09 18:29:37.222295685 +0000 UTC m=+314.383671541" Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.259587 4821 scope.go:117] "RemoveContainer" containerID="9b5b794d39f3fde070d372659dc15ce96b84e44673966398ad89c65e0287e269" Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.284135 4821 scope.go:117] "RemoveContainer" containerID="6238c63ab3036cbee2bbb436181131defd84aa4cf4b6bce1a94d9f4b914b35ec" Mar 09 18:29:37 crc kubenswrapper[4821]: E0309 18:29:37.284775 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6238c63ab3036cbee2bbb436181131defd84aa4cf4b6bce1a94d9f4b914b35ec\": container with ID starting with 6238c63ab3036cbee2bbb436181131defd84aa4cf4b6bce1a94d9f4b914b35ec not found: ID does not exist" containerID="6238c63ab3036cbee2bbb436181131defd84aa4cf4b6bce1a94d9f4b914b35ec" Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.284853 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6238c63ab3036cbee2bbb436181131defd84aa4cf4b6bce1a94d9f4b914b35ec"} err="failed to get container status 
\"6238c63ab3036cbee2bbb436181131defd84aa4cf4b6bce1a94d9f4b914b35ec\": rpc error: code = NotFound desc = could not find container \"6238c63ab3036cbee2bbb436181131defd84aa4cf4b6bce1a94d9f4b914b35ec\": container with ID starting with 6238c63ab3036cbee2bbb436181131defd84aa4cf4b6bce1a94d9f4b914b35ec not found: ID does not exist" Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.285764 4821 scope.go:117] "RemoveContainer" containerID="7d8d0264b15f150e64278b8ff2dcf6f5312c3c93592f008ead630fff9b028490" Mar 09 18:29:37 crc kubenswrapper[4821]: E0309 18:29:37.286185 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d8d0264b15f150e64278b8ff2dcf6f5312c3c93592f008ead630fff9b028490\": container with ID starting with 7d8d0264b15f150e64278b8ff2dcf6f5312c3c93592f008ead630fff9b028490 not found: ID does not exist" containerID="7d8d0264b15f150e64278b8ff2dcf6f5312c3c93592f008ead630fff9b028490" Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.286267 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d8d0264b15f150e64278b8ff2dcf6f5312c3c93592f008ead630fff9b028490"} err="failed to get container status \"7d8d0264b15f150e64278b8ff2dcf6f5312c3c93592f008ead630fff9b028490\": rpc error: code = NotFound desc = could not find container \"7d8d0264b15f150e64278b8ff2dcf6f5312c3c93592f008ead630fff9b028490\": container with ID starting with 7d8d0264b15f150e64278b8ff2dcf6f5312c3c93592f008ead630fff9b028490 not found: ID does not exist" Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.286336 4821 scope.go:117] "RemoveContainer" containerID="9b5b794d39f3fde070d372659dc15ce96b84e44673966398ad89c65e0287e269" Mar 09 18:29:37 crc kubenswrapper[4821]: E0309 18:29:37.286743 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9b5b794d39f3fde070d372659dc15ce96b84e44673966398ad89c65e0287e269\": container with ID starting with 9b5b794d39f3fde070d372659dc15ce96b84e44673966398ad89c65e0287e269 not found: ID does not exist" containerID="9b5b794d39f3fde070d372659dc15ce96b84e44673966398ad89c65e0287e269" Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.286784 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b5b794d39f3fde070d372659dc15ce96b84e44673966398ad89c65e0287e269"} err="failed to get container status \"9b5b794d39f3fde070d372659dc15ce96b84e44673966398ad89c65e0287e269\": rpc error: code = NotFound desc = could not find container \"9b5b794d39f3fde070d372659dc15ce96b84e44673966398ad89c65e0287e269\": container with ID starting with 9b5b794d39f3fde070d372659dc15ce96b84e44673966398ad89c65e0287e269 not found: ID does not exist" Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.286806 4821 scope.go:117] "RemoveContainer" containerID="1f8a2551adf9a2ac55ee995478e47d05b3f65844e7b0819cb317bce8bb52574a" Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.306931 4821 scope.go:117] "RemoveContainer" containerID="25d0aba4e52a42db77d899b2e7643b0a5d1273a72079b5d03e364bf9e3db4813" Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.306931 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sn8zk"] Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.317493 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sn8zk"] Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.323043 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsq8f"] Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.328476 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nsq8f"] Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.329716 4821 scope.go:117] 
"RemoveContainer" containerID="13fef06871fa0a4e0871aaad1057236ae03da0e7577c7ae35df29a4adf7b9028" Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.342485 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2h8qw"] Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.358216 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2h8qw"] Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.363263 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7nw2x"] Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.370071 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7nw2x"] Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.374688 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nk4bg"] Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.379148 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nk4bg"] Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.559722 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07a1db8f-6912-4ff8-9943-24c334031dfb" path="/var/lib/kubelet/pods/07a1db8f-6912-4ff8-9943-24c334031dfb/volumes" Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.560629 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="132d5224-2c4a-4b22-9e2f-b50b98e3b693" path="/var/lib/kubelet/pods/132d5224-2c4a-4b22-9e2f-b50b98e3b693/volumes" Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.561472 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70ed8562-ec3e-49a0-8ccd-885eea90e9c1" path="/var/lib/kubelet/pods/70ed8562-ec3e-49a0-8ccd-885eea90e9c1/volumes" Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.562879 4821 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="abf94109-7b6a-4e4f-a178-42e7d6fc45e0" path="/var/lib/kubelet/pods/abf94109-7b6a-4e4f-a178-42e7d6fc45e0/volumes" Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.564131 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3699d56-8c7d-4ccf-9ebf-469d84dc6a56" path="/var/lib/kubelet/pods/d3699d56-8c7d-4ccf-9ebf-469d84dc6a56/volumes" Mar 09 18:29:37 crc kubenswrapper[4821]: I0309 18:29:37.565286 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Mar 09 18:29:44 crc kubenswrapper[4821]: I0309 18:29:44.249409 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Mar 09 18:29:44 crc kubenswrapper[4821]: I0309 18:29:44.252952 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 09 18:29:44 crc kubenswrapper[4821]: I0309 18:29:44.254420 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 09 18:29:44 crc kubenswrapper[4821]: I0309 18:29:44.254504 4821 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="23464a2c40cf9e64f2e6eab0c203cc62685ab91ab392a0eda388e7658dcb4f24" exitCode=137 Mar 09 18:29:44 crc kubenswrapper[4821]: I0309 18:29:44.254551 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"23464a2c40cf9e64f2e6eab0c203cc62685ab91ab392a0eda388e7658dcb4f24"} Mar 
09 18:29:44 crc kubenswrapper[4821]: I0309 18:29:44.254600 4821 scope.go:117] "RemoveContainer" containerID="36e1fee3fa3c896bb4fd7fd76b19cbc801f2d24b463344591a55ed9c940f0d8c" Mar 09 18:29:45 crc kubenswrapper[4821]: I0309 18:29:45.289833 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Mar 09 18:29:45 crc kubenswrapper[4821]: I0309 18:29:45.298884 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 09 18:29:45 crc kubenswrapper[4821]: I0309 18:29:45.299096 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4b4f387cfa67ab7eb1cceb46ac05630c635d9042d071f78817fa9a96b3bf5b99"} Mar 09 18:29:49 crc kubenswrapper[4821]: I0309 18:29:49.982636 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 09 18:29:53 crc kubenswrapper[4821]: I0309 18:29:53.430381 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 09 18:29:53 crc kubenswrapper[4821]: I0309 18:29:53.434368 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 09 18:29:54 crc kubenswrapper[4821]: I0309 18:29:54.365573 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 09 18:29:59 crc kubenswrapper[4821]: I0309 18:29:59.913573 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 09 18:29:59 crc kubenswrapper[4821]: I0309 18:29:59.915312 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 09 18:29:59 crc kubenswrapper[4821]: I0309 18:29:59.915689 4821 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" Mar 09 18:29:59 crc kubenswrapper[4821]: I0309 18:29:59.916764 4821 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d0a04c20f17e06f03335ee69aaf048806a74c7b9a2ff5530ba49284e7a12d777"} pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 09 18:29:59 crc kubenswrapper[4821]: I0309 18:29:59.917098 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" containerID="cri-o://d0a04c20f17e06f03335ee69aaf048806a74c7b9a2ff5530ba49284e7a12d777" gracePeriod=600 Mar 09 18:30:00 crc kubenswrapper[4821]: I0309 18:30:00.405448 4821 generic.go:334] "Generic (PLEG): container finished" podID="3270571a-a484-4e66-8035-f43509b58add" containerID="d0a04c20f17e06f03335ee69aaf048806a74c7b9a2ff5530ba49284e7a12d777" exitCode=0 Mar 09 18:30:00 crc kubenswrapper[4821]: I0309 18:30:00.405585 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" event={"ID":"3270571a-a484-4e66-8035-f43509b58add","Type":"ContainerDied","Data":"d0a04c20f17e06f03335ee69aaf048806a74c7b9a2ff5530ba49284e7a12d777"} Mar 09 18:30:00 crc kubenswrapper[4821]: I0309 18:30:00.407218 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" event={"ID":"3270571a-a484-4e66-8035-f43509b58add","Type":"ContainerStarted","Data":"23cc64d2d10a8b69113d207c0a3d0a0de2d2f613ac820eaa318a413143f856a4"} Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.292139 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29551350-l7kv7"] Mar 09 18:30:03 crc kubenswrapper[4821]: E0309 18:30:03.292805 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="132d5224-2c4a-4b22-9e2f-b50b98e3b693" containerName="extract-content" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.292818 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="132d5224-2c4a-4b22-9e2f-b50b98e3b693" containerName="extract-content" Mar 09 18:30:03 crc kubenswrapper[4821]: E0309 18:30:03.292832 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70ed8562-ec3e-49a0-8ccd-885eea90e9c1" containerName="extract-content" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.292837 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="70ed8562-ec3e-49a0-8ccd-885eea90e9c1" containerName="extract-content" Mar 09 18:30:03 crc kubenswrapper[4821]: E0309 18:30:03.292846 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07a1db8f-6912-4ff8-9943-24c334031dfb" containerName="extract-utilities" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.292852 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="07a1db8f-6912-4ff8-9943-24c334031dfb" containerName="extract-utilities" Mar 09 18:30:03 crc kubenswrapper[4821]: E0309 
18:30:03.292860 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="132d5224-2c4a-4b22-9e2f-b50b98e3b693" containerName="extract-utilities" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.292866 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="132d5224-2c4a-4b22-9e2f-b50b98e3b693" containerName="extract-utilities" Mar 09 18:30:03 crc kubenswrapper[4821]: E0309 18:30:03.292877 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07a1db8f-6912-4ff8-9943-24c334031dfb" containerName="registry-server" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.292882 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="07a1db8f-6912-4ff8-9943-24c334031dfb" containerName="registry-server" Mar 09 18:30:03 crc kubenswrapper[4821]: E0309 18:30:03.292891 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abf94109-7b6a-4e4f-a178-42e7d6fc45e0" containerName="extract-utilities" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.292896 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="abf94109-7b6a-4e4f-a178-42e7d6fc45e0" containerName="extract-utilities" Mar 09 18:30:03 crc kubenswrapper[4821]: E0309 18:30:03.292905 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07a1db8f-6912-4ff8-9943-24c334031dfb" containerName="extract-content" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.292910 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="07a1db8f-6912-4ff8-9943-24c334031dfb" containerName="extract-content" Mar 09 18:30:03 crc kubenswrapper[4821]: E0309 18:30:03.292919 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abf94109-7b6a-4e4f-a178-42e7d6fc45e0" containerName="extract-content" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.292924 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="abf94109-7b6a-4e4f-a178-42e7d6fc45e0" containerName="extract-content" Mar 09 18:30:03 crc kubenswrapper[4821]: E0309 
18:30:03.292933 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3699d56-8c7d-4ccf-9ebf-469d84dc6a56" containerName="marketplace-operator" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.292939 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3699d56-8c7d-4ccf-9ebf-469d84dc6a56" containerName="marketplace-operator" Mar 09 18:30:03 crc kubenswrapper[4821]: E0309 18:30:03.292948 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abf94109-7b6a-4e4f-a178-42e7d6fc45e0" containerName="registry-server" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.292953 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="abf94109-7b6a-4e4f-a178-42e7d6fc45e0" containerName="registry-server" Mar 09 18:30:03 crc kubenswrapper[4821]: E0309 18:30:03.292961 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="132d5224-2c4a-4b22-9e2f-b50b98e3b693" containerName="registry-server" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.292966 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="132d5224-2c4a-4b22-9e2f-b50b98e3b693" containerName="registry-server" Mar 09 18:30:03 crc kubenswrapper[4821]: E0309 18:30:03.292974 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70ed8562-ec3e-49a0-8ccd-885eea90e9c1" containerName="extract-utilities" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.292979 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="70ed8562-ec3e-49a0-8ccd-885eea90e9c1" containerName="extract-utilities" Mar 09 18:30:03 crc kubenswrapper[4821]: E0309 18:30:03.292986 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70ed8562-ec3e-49a0-8ccd-885eea90e9c1" containerName="registry-server" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.292993 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="70ed8562-ec3e-49a0-8ccd-885eea90e9c1" containerName="registry-server" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 
18:30:03.293067 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="abf94109-7b6a-4e4f-a178-42e7d6fc45e0" containerName="registry-server" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.293075 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3699d56-8c7d-4ccf-9ebf-469d84dc6a56" containerName="marketplace-operator" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.293082 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="70ed8562-ec3e-49a0-8ccd-885eea90e9c1" containerName="registry-server" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.293090 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="132d5224-2c4a-4b22-9e2f-b50b98e3b693" containerName="registry-server" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.293098 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="07a1db8f-6912-4ff8-9943-24c334031dfb" containerName="registry-server" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.293444 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29551350-l7kv7" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.295028 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.295131 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.300688 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29551350-l7kv7"] Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.375701 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xtxx\" (UniqueName: \"kubernetes.io/projected/9b2d4a49-67a2-4a60-98ac-a10446691d92-kube-api-access-4xtxx\") pod \"collect-profiles-29551350-l7kv7\" (UID: \"9b2d4a49-67a2-4a60-98ac-a10446691d92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551350-l7kv7" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.375753 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b2d4a49-67a2-4a60-98ac-a10446691d92-config-volume\") pod \"collect-profiles-29551350-l7kv7\" (UID: \"9b2d4a49-67a2-4a60-98ac-a10446691d92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551350-l7kv7" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.375773 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b2d4a49-67a2-4a60-98ac-a10446691d92-secret-volume\") pod \"collect-profiles-29551350-l7kv7\" (UID: \"9b2d4a49-67a2-4a60-98ac-a10446691d92\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29551350-l7kv7" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.397029 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29551350-s928p"] Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.397844 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551350-s928p" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.401217 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-mxq7c" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.401217 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.402726 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.408057 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551350-s928p"] Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.477263 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xtxx\" (UniqueName: \"kubernetes.io/projected/9b2d4a49-67a2-4a60-98ac-a10446691d92-kube-api-access-4xtxx\") pod \"collect-profiles-29551350-l7kv7\" (UID: \"9b2d4a49-67a2-4a60-98ac-a10446691d92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551350-l7kv7" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.477363 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j6v7\" (UniqueName: \"kubernetes.io/projected/a1e1c786-8f5d-4b94-b547-73982770d24a-kube-api-access-5j6v7\") pod \"auto-csr-approver-29551350-s928p\" (UID: \"a1e1c786-8f5d-4b94-b547-73982770d24a\") " 
pod="openshift-infra/auto-csr-approver-29551350-s928p" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.477392 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b2d4a49-67a2-4a60-98ac-a10446691d92-config-volume\") pod \"collect-profiles-29551350-l7kv7\" (UID: \"9b2d4a49-67a2-4a60-98ac-a10446691d92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551350-l7kv7" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.477413 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b2d4a49-67a2-4a60-98ac-a10446691d92-secret-volume\") pod \"collect-profiles-29551350-l7kv7\" (UID: \"9b2d4a49-67a2-4a60-98ac-a10446691d92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551350-l7kv7" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.478270 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b2d4a49-67a2-4a60-98ac-a10446691d92-config-volume\") pod \"collect-profiles-29551350-l7kv7\" (UID: \"9b2d4a49-67a2-4a60-98ac-a10446691d92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551350-l7kv7" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.482573 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b2d4a49-67a2-4a60-98ac-a10446691d92-secret-volume\") pod \"collect-profiles-29551350-l7kv7\" (UID: \"9b2d4a49-67a2-4a60-98ac-a10446691d92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551350-l7kv7" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.491763 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xtxx\" (UniqueName: \"kubernetes.io/projected/9b2d4a49-67a2-4a60-98ac-a10446691d92-kube-api-access-4xtxx\") pod 
\"collect-profiles-29551350-l7kv7\" (UID: \"9b2d4a49-67a2-4a60-98ac-a10446691d92\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551350-l7kv7" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.577709 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j6v7\" (UniqueName: \"kubernetes.io/projected/a1e1c786-8f5d-4b94-b547-73982770d24a-kube-api-access-5j6v7\") pod \"auto-csr-approver-29551350-s928p\" (UID: \"a1e1c786-8f5d-4b94-b547-73982770d24a\") " pod="openshift-infra/auto-csr-approver-29551350-s928p" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.598066 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j6v7\" (UniqueName: \"kubernetes.io/projected/a1e1c786-8f5d-4b94-b547-73982770d24a-kube-api-access-5j6v7\") pod \"auto-csr-approver-29551350-s928p\" (UID: \"a1e1c786-8f5d-4b94-b547-73982770d24a\") " pod="openshift-infra/auto-csr-approver-29551350-s928p" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.613817 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29551350-l7kv7" Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.712099 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29551350-s928p"
Mar 09 18:30:03 crc kubenswrapper[4821]: I0309 18:30:03.909992 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551350-s928p"]
Mar 09 18:30:04 crc kubenswrapper[4821]: I0309 18:30:04.045434 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29551350-l7kv7"]
Mar 09 18:30:04 crc kubenswrapper[4821]: W0309 18:30:04.052171 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b2d4a49_67a2_4a60_98ac_a10446691d92.slice/crio-87180abcd334058413a570e627ea0357049fd574469f1fd0c11e2079d1a976e7 WatchSource:0}: Error finding container 87180abcd334058413a570e627ea0357049fd574469f1fd0c11e2079d1a976e7: Status 404 returned error can't find the container with id 87180abcd334058413a570e627ea0357049fd574469f1fd0c11e2079d1a976e7
Mar 09 18:30:04 crc kubenswrapper[4821]: I0309 18:30:04.432017 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551350-s928p" event={"ID":"a1e1c786-8f5d-4b94-b547-73982770d24a","Type":"ContainerStarted","Data":"4a3fc544c225f1a6b8b0855e0f1f08ecc8de2c06c4f54bf82e20d04380fa65eb"}
Mar 09 18:30:04 crc kubenswrapper[4821]: I0309 18:30:04.433369 4821 generic.go:334] "Generic (PLEG): container finished" podID="9b2d4a49-67a2-4a60-98ac-a10446691d92" containerID="1506c69808faa18de9959794c4113dfd395ca295870c5e0012b7c89297d8dca6" exitCode=0
Mar 09 18:30:04 crc kubenswrapper[4821]: I0309 18:30:04.433407 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29551350-l7kv7" event={"ID":"9b2d4a49-67a2-4a60-98ac-a10446691d92","Type":"ContainerDied","Data":"1506c69808faa18de9959794c4113dfd395ca295870c5e0012b7c89297d8dca6"}
Mar 09 18:30:04 crc kubenswrapper[4821]: I0309 18:30:04.433439 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29551350-l7kv7" event={"ID":"9b2d4a49-67a2-4a60-98ac-a10446691d92","Type":"ContainerStarted","Data":"87180abcd334058413a570e627ea0357049fd574469f1fd0c11e2079d1a976e7"}
Mar 09 18:30:06 crc kubenswrapper[4821]: I0309 18:30:06.274602 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29551350-l7kv7"
Mar 09 18:30:06 crc kubenswrapper[4821]: I0309 18:30:06.414990 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b2d4a49-67a2-4a60-98ac-a10446691d92-config-volume\") pod \"9b2d4a49-67a2-4a60-98ac-a10446691d92\" (UID: \"9b2d4a49-67a2-4a60-98ac-a10446691d92\") "
Mar 09 18:30:06 crc kubenswrapper[4821]: I0309 18:30:06.415089 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b2d4a49-67a2-4a60-98ac-a10446691d92-secret-volume\") pod \"9b2d4a49-67a2-4a60-98ac-a10446691d92\" (UID: \"9b2d4a49-67a2-4a60-98ac-a10446691d92\") "
Mar 09 18:30:06 crc kubenswrapper[4821]: I0309 18:30:06.415127 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xtxx\" (UniqueName: \"kubernetes.io/projected/9b2d4a49-67a2-4a60-98ac-a10446691d92-kube-api-access-4xtxx\") pod \"9b2d4a49-67a2-4a60-98ac-a10446691d92\" (UID: \"9b2d4a49-67a2-4a60-98ac-a10446691d92\") "
Mar 09 18:30:06 crc kubenswrapper[4821]: I0309 18:30:06.416135 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b2d4a49-67a2-4a60-98ac-a10446691d92-config-volume" (OuterVolumeSpecName: "config-volume") pod "9b2d4a49-67a2-4a60-98ac-a10446691d92" (UID: "9b2d4a49-67a2-4a60-98ac-a10446691d92"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 18:30:06 crc kubenswrapper[4821]: I0309 18:30:06.420921 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b2d4a49-67a2-4a60-98ac-a10446691d92-kube-api-access-4xtxx" (OuterVolumeSpecName: "kube-api-access-4xtxx") pod "9b2d4a49-67a2-4a60-98ac-a10446691d92" (UID: "9b2d4a49-67a2-4a60-98ac-a10446691d92"). InnerVolumeSpecName "kube-api-access-4xtxx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:30:06 crc kubenswrapper[4821]: I0309 18:30:06.421085 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b2d4a49-67a2-4a60-98ac-a10446691d92-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9b2d4a49-67a2-4a60-98ac-a10446691d92" (UID: "9b2d4a49-67a2-4a60-98ac-a10446691d92"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 18:30:06 crc kubenswrapper[4821]: I0309 18:30:06.476017 4821 generic.go:334] "Generic (PLEG): container finished" podID="a1e1c786-8f5d-4b94-b547-73982770d24a" containerID="e98ef0f424c86fe19c85cc1e186363df3a218636518e22396d35b5192f9ebd14" exitCode=0
Mar 09 18:30:06 crc kubenswrapper[4821]: I0309 18:30:06.476101 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551350-s928p" event={"ID":"a1e1c786-8f5d-4b94-b547-73982770d24a","Type":"ContainerDied","Data":"e98ef0f424c86fe19c85cc1e186363df3a218636518e22396d35b5192f9ebd14"}
Mar 09 18:30:06 crc kubenswrapper[4821]: I0309 18:30:06.477259 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29551350-l7kv7" event={"ID":"9b2d4a49-67a2-4a60-98ac-a10446691d92","Type":"ContainerDied","Data":"87180abcd334058413a570e627ea0357049fd574469f1fd0c11e2079d1a976e7"}
Mar 09 18:30:06 crc kubenswrapper[4821]: I0309 18:30:06.477296 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29551350-l7kv7"
Mar 09 18:30:06 crc kubenswrapper[4821]: I0309 18:30:06.477302 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87180abcd334058413a570e627ea0357049fd574469f1fd0c11e2079d1a976e7"
Mar 09 18:30:06 crc kubenswrapper[4821]: I0309 18:30:06.516239 4821 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b2d4a49-67a2-4a60-98ac-a10446691d92-secret-volume\") on node \"crc\" DevicePath \"\""
Mar 09 18:30:06 crc kubenswrapper[4821]: I0309 18:30:06.516271 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xtxx\" (UniqueName: \"kubernetes.io/projected/9b2d4a49-67a2-4a60-98ac-a10446691d92-kube-api-access-4xtxx\") on node \"crc\" DevicePath \"\""
Mar 09 18:30:06 crc kubenswrapper[4821]: I0309 18:30:06.516279 4821 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b2d4a49-67a2-4a60-98ac-a10446691d92-config-volume\") on node \"crc\" DevicePath \"\""
Mar 09 18:30:07 crc kubenswrapper[4821]: I0309 18:30:07.750540 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551350-s928p"
Mar 09 18:30:07 crc kubenswrapper[4821]: I0309 18:30:07.839630 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j6v7\" (UniqueName: \"kubernetes.io/projected/a1e1c786-8f5d-4b94-b547-73982770d24a-kube-api-access-5j6v7\") pod \"a1e1c786-8f5d-4b94-b547-73982770d24a\" (UID: \"a1e1c786-8f5d-4b94-b547-73982770d24a\") "
Mar 09 18:30:07 crc kubenswrapper[4821]: I0309 18:30:07.845279 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1e1c786-8f5d-4b94-b547-73982770d24a-kube-api-access-5j6v7" (OuterVolumeSpecName: "kube-api-access-5j6v7") pod "a1e1c786-8f5d-4b94-b547-73982770d24a" (UID: "a1e1c786-8f5d-4b94-b547-73982770d24a"). InnerVolumeSpecName "kube-api-access-5j6v7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:30:07 crc kubenswrapper[4821]: I0309 18:30:07.940985 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j6v7\" (UniqueName: \"kubernetes.io/projected/a1e1c786-8f5d-4b94-b547-73982770d24a-kube-api-access-5j6v7\") on node \"crc\" DevicePath \"\""
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.059817 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tc2f5"]
Mar 09 18:30:08 crc kubenswrapper[4821]: E0309 18:30:08.060528 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b2d4a49-67a2-4a60-98ac-a10446691d92" containerName="collect-profiles"
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.060547 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b2d4a49-67a2-4a60-98ac-a10446691d92" containerName="collect-profiles"
Mar 09 18:30:08 crc kubenswrapper[4821]: E0309 18:30:08.060559 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1e1c786-8f5d-4b94-b547-73982770d24a" containerName="oc"
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.060566 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1e1c786-8f5d-4b94-b547-73982770d24a" containerName="oc"
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.060656 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b2d4a49-67a2-4a60-98ac-a10446691d92" containerName="collect-profiles"
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.060669 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1e1c786-8f5d-4b94-b547-73982770d24a" containerName="oc"
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.061482 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tc2f5"
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.063109 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.070962 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tc2f5"]
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.143677 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-626ph\" (UniqueName: \"kubernetes.io/projected/f077b409-1e21-4fb0-a973-8c57822d2b94-kube-api-access-626ph\") pod \"redhat-operators-tc2f5\" (UID: \"f077b409-1e21-4fb0-a973-8c57822d2b94\") " pod="openshift-marketplace/redhat-operators-tc2f5"
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.143982 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f077b409-1e21-4fb0-a973-8c57822d2b94-utilities\") pod \"redhat-operators-tc2f5\" (UID: \"f077b409-1e21-4fb0-a973-8c57822d2b94\") " pod="openshift-marketplace/redhat-operators-tc2f5"
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.144110 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f077b409-1e21-4fb0-a973-8c57822d2b94-catalog-content\") pod \"redhat-operators-tc2f5\" (UID: \"f077b409-1e21-4fb0-a973-8c57822d2b94\") " pod="openshift-marketplace/redhat-operators-tc2f5"
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.245199 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-626ph\" (UniqueName: \"kubernetes.io/projected/f077b409-1e21-4fb0-a973-8c57822d2b94-kube-api-access-626ph\") pod \"redhat-operators-tc2f5\" (UID: \"f077b409-1e21-4fb0-a973-8c57822d2b94\") " pod="openshift-marketplace/redhat-operators-tc2f5"
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.245630 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f077b409-1e21-4fb0-a973-8c57822d2b94-utilities\") pod \"redhat-operators-tc2f5\" (UID: \"f077b409-1e21-4fb0-a973-8c57822d2b94\") " pod="openshift-marketplace/redhat-operators-tc2f5"
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.245811 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f077b409-1e21-4fb0-a973-8c57822d2b94-catalog-content\") pod \"redhat-operators-tc2f5\" (UID: \"f077b409-1e21-4fb0-a973-8c57822d2b94\") " pod="openshift-marketplace/redhat-operators-tc2f5"
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.246624 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f077b409-1e21-4fb0-a973-8c57822d2b94-utilities\") pod \"redhat-operators-tc2f5\" (UID: \"f077b409-1e21-4fb0-a973-8c57822d2b94\") " pod="openshift-marketplace/redhat-operators-tc2f5"
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.246673 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f077b409-1e21-4fb0-a973-8c57822d2b94-catalog-content\") pod \"redhat-operators-tc2f5\" (UID: \"f077b409-1e21-4fb0-a973-8c57822d2b94\") " pod="openshift-marketplace/redhat-operators-tc2f5"
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.262904 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-626ph\" (UniqueName: \"kubernetes.io/projected/f077b409-1e21-4fb0-a973-8c57822d2b94-kube-api-access-626ph\") pod \"redhat-operators-tc2f5\" (UID: \"f077b409-1e21-4fb0-a973-8c57822d2b94\") " pod="openshift-marketplace/redhat-operators-tc2f5"
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.378610 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tc2f5"
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.494858 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551350-s928p" event={"ID":"a1e1c786-8f5d-4b94-b547-73982770d24a","Type":"ContainerDied","Data":"4a3fc544c225f1a6b8b0855e0f1f08ecc8de2c06c4f54bf82e20d04380fa65eb"}
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.495145 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a3fc544c225f1a6b8b0855e0f1f08ecc8de2c06c4f54bf82e20d04380fa65eb"
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.495208 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551350-s928p"
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.645102 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tc2f5"]
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.653542 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-29tmk"]
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.657078 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-29tmk"
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.659508 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.672723 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-29tmk"]
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.751293 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/011ab61a-9a65-4112-8ab5-149d78479cc4-utilities\") pod \"community-operators-29tmk\" (UID: \"011ab61a-9a65-4112-8ab5-149d78479cc4\") " pod="openshift-marketplace/community-operators-29tmk"
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.751381 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/011ab61a-9a65-4112-8ab5-149d78479cc4-catalog-content\") pod \"community-operators-29tmk\" (UID: \"011ab61a-9a65-4112-8ab5-149d78479cc4\") " pod="openshift-marketplace/community-operators-29tmk"
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.751455 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvqnw\" (UniqueName: \"kubernetes.io/projected/011ab61a-9a65-4112-8ab5-149d78479cc4-kube-api-access-nvqnw\") pod \"community-operators-29tmk\" (UID: \"011ab61a-9a65-4112-8ab5-149d78479cc4\") " pod="openshift-marketplace/community-operators-29tmk"
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.853197 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/011ab61a-9a65-4112-8ab5-149d78479cc4-utilities\") pod \"community-operators-29tmk\" (UID: \"011ab61a-9a65-4112-8ab5-149d78479cc4\") " pod="openshift-marketplace/community-operators-29tmk"
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.853751 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/011ab61a-9a65-4112-8ab5-149d78479cc4-catalog-content\") pod \"community-operators-29tmk\" (UID: \"011ab61a-9a65-4112-8ab5-149d78479cc4\") " pod="openshift-marketplace/community-operators-29tmk"
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.853911 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvqnw\" (UniqueName: \"kubernetes.io/projected/011ab61a-9a65-4112-8ab5-149d78479cc4-kube-api-access-nvqnw\") pod \"community-operators-29tmk\" (UID: \"011ab61a-9a65-4112-8ab5-149d78479cc4\") " pod="openshift-marketplace/community-operators-29tmk"
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.853917 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/011ab61a-9a65-4112-8ab5-149d78479cc4-utilities\") pod \"community-operators-29tmk\" (UID: \"011ab61a-9a65-4112-8ab5-149d78479cc4\") " pod="openshift-marketplace/community-operators-29tmk"
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.854585 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/011ab61a-9a65-4112-8ab5-149d78479cc4-catalog-content\") pod \"community-operators-29tmk\" (UID: \"011ab61a-9a65-4112-8ab5-149d78479cc4\") " pod="openshift-marketplace/community-operators-29tmk"
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.878848 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvqnw\" (UniqueName: \"kubernetes.io/projected/011ab61a-9a65-4112-8ab5-149d78479cc4-kube-api-access-nvqnw\") pod \"community-operators-29tmk\" (UID: \"011ab61a-9a65-4112-8ab5-149d78479cc4\") " pod="openshift-marketplace/community-operators-29tmk"
Mar 09 18:30:08 crc kubenswrapper[4821]: I0309 18:30:08.981484 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-29tmk"
Mar 09 18:30:09 crc kubenswrapper[4821]: W0309 18:30:09.391950 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod011ab61a_9a65_4112_8ab5_149d78479cc4.slice/crio-762e06527a32c905c5556caec30929ca21de8a9547e663816ba334b0111532e8 WatchSource:0}: Error finding container 762e06527a32c905c5556caec30929ca21de8a9547e663816ba334b0111532e8: Status 404 returned error can't find the container with id 762e06527a32c905c5556caec30929ca21de8a9547e663816ba334b0111532e8
Mar 09 18:30:09 crc kubenswrapper[4821]: I0309 18:30:09.394687 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-29tmk"]
Mar 09 18:30:09 crc kubenswrapper[4821]: I0309 18:30:09.504219 4821 generic.go:334] "Generic (PLEG): container finished" podID="f077b409-1e21-4fb0-a973-8c57822d2b94" containerID="35f31fb7e3eb528c5adc15061d02a6a348a3623068c0f6f38a8d686e917d95ec" exitCode=0
Mar 09 18:30:09 crc kubenswrapper[4821]: I0309 18:30:09.504358 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tc2f5" event={"ID":"f077b409-1e21-4fb0-a973-8c57822d2b94","Type":"ContainerDied","Data":"35f31fb7e3eb528c5adc15061d02a6a348a3623068c0f6f38a8d686e917d95ec"}
Mar 09 18:30:09 crc kubenswrapper[4821]: I0309 18:30:09.504398 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tc2f5" event={"ID":"f077b409-1e21-4fb0-a973-8c57822d2b94","Type":"ContainerStarted","Data":"f04ea22f3e57830f3e2a22be9dac8a610d37090a1c4a9e81b9929fd00a016235"}
Mar 09 18:30:09 crc kubenswrapper[4821]: I0309 18:30:09.507541 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-29tmk" event={"ID":"011ab61a-9a65-4112-8ab5-149d78479cc4","Type":"ContainerStarted","Data":"762e06527a32c905c5556caec30929ca21de8a9547e663816ba334b0111532e8"}
Mar 09 18:30:10 crc kubenswrapper[4821]: I0309 18:30:10.453088 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pchlh"]
Mar 09 18:30:10 crc kubenswrapper[4821]: I0309 18:30:10.454543 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pchlh"
Mar 09 18:30:10 crc kubenswrapper[4821]: I0309 18:30:10.456840 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Mar 09 18:30:10 crc kubenswrapper[4821]: I0309 18:30:10.464945 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pchlh"]
Mar 09 18:30:10 crc kubenswrapper[4821]: I0309 18:30:10.476051 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf279\" (UniqueName: \"kubernetes.io/projected/6a1328a9-ebc5-4976-8ed0-45de86204b20-kube-api-access-hf279\") pod \"redhat-marketplace-pchlh\" (UID: \"6a1328a9-ebc5-4976-8ed0-45de86204b20\") " pod="openshift-marketplace/redhat-marketplace-pchlh"
Mar 09 18:30:10 crc kubenswrapper[4821]: I0309 18:30:10.476093 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a1328a9-ebc5-4976-8ed0-45de86204b20-catalog-content\") pod \"redhat-marketplace-pchlh\" (UID: \"6a1328a9-ebc5-4976-8ed0-45de86204b20\") " pod="openshift-marketplace/redhat-marketplace-pchlh"
Mar 09 18:30:10 crc kubenswrapper[4821]: I0309 18:30:10.476119 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a1328a9-ebc5-4976-8ed0-45de86204b20-utilities\") pod \"redhat-marketplace-pchlh\" (UID: \"6a1328a9-ebc5-4976-8ed0-45de86204b20\") " pod="openshift-marketplace/redhat-marketplace-pchlh"
Mar 09 18:30:10 crc kubenswrapper[4821]: I0309 18:30:10.517161 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tc2f5" event={"ID":"f077b409-1e21-4fb0-a973-8c57822d2b94","Type":"ContainerStarted","Data":"6541898aecda2ff480645cbef7b0f9b8c8a6f2924721e2dbb1304da2cbdf93eb"}
Mar 09 18:30:10 crc kubenswrapper[4821]: I0309 18:30:10.523066 4821 generic.go:334] "Generic (PLEG): container finished" podID="011ab61a-9a65-4112-8ab5-149d78479cc4" containerID="07694e8f7ed996cb9a0ec6a532c1f4bbd2d5d69c7addfacdf885d91a0b681bc5" exitCode=0
Mar 09 18:30:10 crc kubenswrapper[4821]: I0309 18:30:10.523148 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-29tmk" event={"ID":"011ab61a-9a65-4112-8ab5-149d78479cc4","Type":"ContainerDied","Data":"07694e8f7ed996cb9a0ec6a532c1f4bbd2d5d69c7addfacdf885d91a0b681bc5"}
Mar 09 18:30:10 crc kubenswrapper[4821]: I0309 18:30:10.578572 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf279\" (UniqueName: \"kubernetes.io/projected/6a1328a9-ebc5-4976-8ed0-45de86204b20-kube-api-access-hf279\") pod \"redhat-marketplace-pchlh\" (UID: \"6a1328a9-ebc5-4976-8ed0-45de86204b20\") " pod="openshift-marketplace/redhat-marketplace-pchlh"
Mar 09 18:30:10 crc kubenswrapper[4821]: I0309 18:30:10.578632 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a1328a9-ebc5-4976-8ed0-45de86204b20-catalog-content\") pod \"redhat-marketplace-pchlh\" (UID: \"6a1328a9-ebc5-4976-8ed0-45de86204b20\") " pod="openshift-marketplace/redhat-marketplace-pchlh"
Mar 09 18:30:10 crc kubenswrapper[4821]: I0309 18:30:10.578720 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a1328a9-ebc5-4976-8ed0-45de86204b20-utilities\") pod \"redhat-marketplace-pchlh\" (UID: \"6a1328a9-ebc5-4976-8ed0-45de86204b20\") " pod="openshift-marketplace/redhat-marketplace-pchlh"
Mar 09 18:30:10 crc kubenswrapper[4821]: I0309 18:30:10.579083 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a1328a9-ebc5-4976-8ed0-45de86204b20-catalog-content\") pod \"redhat-marketplace-pchlh\" (UID: \"6a1328a9-ebc5-4976-8ed0-45de86204b20\") " pod="openshift-marketplace/redhat-marketplace-pchlh"
Mar 09 18:30:10 crc kubenswrapper[4821]: I0309 18:30:10.579377 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a1328a9-ebc5-4976-8ed0-45de86204b20-utilities\") pod \"redhat-marketplace-pchlh\" (UID: \"6a1328a9-ebc5-4976-8ed0-45de86204b20\") " pod="openshift-marketplace/redhat-marketplace-pchlh"
Mar 09 18:30:10 crc kubenswrapper[4821]: I0309 18:30:10.611563 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf279\" (UniqueName: \"kubernetes.io/projected/6a1328a9-ebc5-4976-8ed0-45de86204b20-kube-api-access-hf279\") pod \"redhat-marketplace-pchlh\" (UID: \"6a1328a9-ebc5-4976-8ed0-45de86204b20\") " pod="openshift-marketplace/redhat-marketplace-pchlh"
Mar 09 18:30:10 crc kubenswrapper[4821]: I0309 18:30:10.775930 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pchlh"
Mar 09 18:30:11 crc kubenswrapper[4821]: I0309 18:30:11.050429 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9cfmg"]
Mar 09 18:30:11 crc kubenswrapper[4821]: I0309 18:30:11.051903 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9cfmg"
Mar 09 18:30:11 crc kubenswrapper[4821]: I0309 18:30:11.060559 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Mar 09 18:30:11 crc kubenswrapper[4821]: I0309 18:30:11.066575 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9cfmg"]
Mar 09 18:30:11 crc kubenswrapper[4821]: I0309 18:30:11.086374 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2c0b89a-7aa2-44d9-93b3-87c4a29220d5-catalog-content\") pod \"certified-operators-9cfmg\" (UID: \"d2c0b89a-7aa2-44d9-93b3-87c4a29220d5\") " pod="openshift-marketplace/certified-operators-9cfmg"
Mar 09 18:30:11 crc kubenswrapper[4821]: I0309 18:30:11.086453 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2c0b89a-7aa2-44d9-93b3-87c4a29220d5-utilities\") pod \"certified-operators-9cfmg\" (UID: \"d2c0b89a-7aa2-44d9-93b3-87c4a29220d5\") " pod="openshift-marketplace/certified-operators-9cfmg"
Mar 09 18:30:11 crc kubenswrapper[4821]: I0309 18:30:11.086481 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff48p\" (UniqueName: \"kubernetes.io/projected/d2c0b89a-7aa2-44d9-93b3-87c4a29220d5-kube-api-access-ff48p\") pod \"certified-operators-9cfmg\" (UID: \"d2c0b89a-7aa2-44d9-93b3-87c4a29220d5\") " pod="openshift-marketplace/certified-operators-9cfmg"
Mar 09 18:30:11 crc kubenswrapper[4821]: I0309 18:30:11.187312 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2c0b89a-7aa2-44d9-93b3-87c4a29220d5-utilities\") pod \"certified-operators-9cfmg\" (UID: \"d2c0b89a-7aa2-44d9-93b3-87c4a29220d5\") " pod="openshift-marketplace/certified-operators-9cfmg"
Mar 09 18:30:11 crc kubenswrapper[4821]: I0309 18:30:11.187370 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff48p\" (UniqueName: \"kubernetes.io/projected/d2c0b89a-7aa2-44d9-93b3-87c4a29220d5-kube-api-access-ff48p\") pod \"certified-operators-9cfmg\" (UID: \"d2c0b89a-7aa2-44d9-93b3-87c4a29220d5\") " pod="openshift-marketplace/certified-operators-9cfmg"
Mar 09 18:30:11 crc kubenswrapper[4821]: I0309 18:30:11.187432 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2c0b89a-7aa2-44d9-93b3-87c4a29220d5-catalog-content\") pod \"certified-operators-9cfmg\" (UID: \"d2c0b89a-7aa2-44d9-93b3-87c4a29220d5\") " pod="openshift-marketplace/certified-operators-9cfmg"
Mar 09 18:30:11 crc kubenswrapper[4821]: I0309 18:30:11.187725 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2c0b89a-7aa2-44d9-93b3-87c4a29220d5-utilities\") pod \"certified-operators-9cfmg\" (UID: \"d2c0b89a-7aa2-44d9-93b3-87c4a29220d5\") " pod="openshift-marketplace/certified-operators-9cfmg"
Mar 09 18:30:11 crc kubenswrapper[4821]: I0309 18:30:11.187828 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2c0b89a-7aa2-44d9-93b3-87c4a29220d5-catalog-content\") pod \"certified-operators-9cfmg\" (UID: \"d2c0b89a-7aa2-44d9-93b3-87c4a29220d5\") " pod="openshift-marketplace/certified-operators-9cfmg"
Mar 09 18:30:11 crc kubenswrapper[4821]: I0309 18:30:11.209166 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff48p\" (UniqueName: \"kubernetes.io/projected/d2c0b89a-7aa2-44d9-93b3-87c4a29220d5-kube-api-access-ff48p\") pod \"certified-operators-9cfmg\" (UID: \"d2c0b89a-7aa2-44d9-93b3-87c4a29220d5\") " pod="openshift-marketplace/certified-operators-9cfmg"
Mar 09 18:30:11 crc kubenswrapper[4821]: I0309 18:30:11.233242 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pchlh"]
Mar 09 18:30:11 crc kubenswrapper[4821]: W0309 18:30:11.238473 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a1328a9_ebc5_4976_8ed0_45de86204b20.slice/crio-05be61f394d9258339ee5aae2b6f3758c69b0ba5f667a38d0d8db0faf176b1f5 WatchSource:0}: Error finding container 05be61f394d9258339ee5aae2b6f3758c69b0ba5f667a38d0d8db0faf176b1f5: Status 404 returned error can't find the container with id 05be61f394d9258339ee5aae2b6f3758c69b0ba5f667a38d0d8db0faf176b1f5
Mar 09 18:30:11 crc kubenswrapper[4821]: I0309 18:30:11.369953 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9cfmg"
Mar 09 18:30:11 crc kubenswrapper[4821]: I0309 18:30:11.530118 4821 generic.go:334] "Generic (PLEG): container finished" podID="f077b409-1e21-4fb0-a973-8c57822d2b94" containerID="6541898aecda2ff480645cbef7b0f9b8c8a6f2924721e2dbb1304da2cbdf93eb" exitCode=0
Mar 09 18:30:11 crc kubenswrapper[4821]: I0309 18:30:11.530225 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tc2f5" event={"ID":"f077b409-1e21-4fb0-a973-8c57822d2b94","Type":"ContainerDied","Data":"6541898aecda2ff480645cbef7b0f9b8c8a6f2924721e2dbb1304da2cbdf93eb"}
Mar 09 18:30:11 crc kubenswrapper[4821]: I0309 18:30:11.538659 4821 generic.go:334] "Generic (PLEG): container finished" podID="6a1328a9-ebc5-4976-8ed0-45de86204b20" containerID="0366ceaab88eceb19c444f3d30a5f1fe29e18261647dd36701b403184c0d4085" exitCode=0
Mar 09 18:30:11 crc kubenswrapper[4821]: I0309 18:30:11.538702 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pchlh" event={"ID":"6a1328a9-ebc5-4976-8ed0-45de86204b20","Type":"ContainerDied","Data":"0366ceaab88eceb19c444f3d30a5f1fe29e18261647dd36701b403184c0d4085"}
Mar 09 18:30:11 crc kubenswrapper[4821]: I0309 18:30:11.538731 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pchlh" event={"ID":"6a1328a9-ebc5-4976-8ed0-45de86204b20","Type":"ContainerStarted","Data":"05be61f394d9258339ee5aae2b6f3758c69b0ba5f667a38d0d8db0faf176b1f5"}
Mar 09 18:30:11 crc kubenswrapper[4821]: I0309 18:30:11.776625 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9cfmg"]
Mar 09 18:30:11 crc kubenswrapper[4821]: W0309 18:30:11.786875 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2c0b89a_7aa2_44d9_93b3_87c4a29220d5.slice/crio-907164eb1f90e046f6d5ff6ea066a235984f6864f3ec3e706838a1194bbb617b WatchSource:0}: Error finding container 907164eb1f90e046f6d5ff6ea066a235984f6864f3ec3e706838a1194bbb617b: Status 404 returned error can't find the container with id 907164eb1f90e046f6d5ff6ea066a235984f6864f3ec3e706838a1194bbb617b
Mar 09 18:30:12 crc kubenswrapper[4821]: I0309 18:30:12.550293 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tc2f5" event={"ID":"f077b409-1e21-4fb0-a973-8c57822d2b94","Type":"ContainerStarted","Data":"c600c22b868d0557f561efd009d11804afa2493a9bd6987d8ef84fd08b7313a5"}
Mar 09 18:30:12 crc kubenswrapper[4821]: I0309 18:30:12.552546 4821 generic.go:334] "Generic (PLEG): container finished" podID="d2c0b89a-7aa2-44d9-93b3-87c4a29220d5" containerID="b86197b12d885778545e545a7a4a1e6f89f28d29755afef2608aaaf24bd8cf4e" exitCode=0
Mar 09 18:30:12 crc kubenswrapper[4821]: I0309 18:30:12.552621 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9cfmg" event={"ID":"d2c0b89a-7aa2-44d9-93b3-87c4a29220d5","Type":"ContainerDied","Data":"b86197b12d885778545e545a7a4a1e6f89f28d29755afef2608aaaf24bd8cf4e"}
Mar 09 18:30:12 crc kubenswrapper[4821]: I0309 18:30:12.552649 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9cfmg" event={"ID":"d2c0b89a-7aa2-44d9-93b3-87c4a29220d5","Type":"ContainerStarted","Data":"907164eb1f90e046f6d5ff6ea066a235984f6864f3ec3e706838a1194bbb617b"}
Mar 09 18:30:12 crc kubenswrapper[4821]: I0309 18:30:12.554544 4821 generic.go:334] "Generic (PLEG): container finished" podID="011ab61a-9a65-4112-8ab5-149d78479cc4" containerID="ce4363af7aad60953bbc696a59d8109de7157e09882c20d80774941c63d7556c" exitCode=0
Mar 09 18:30:12 crc kubenswrapper[4821]: I0309 18:30:12.554570 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-29tmk" event={"ID":"011ab61a-9a65-4112-8ab5-149d78479cc4","Type":"ContainerDied","Data":"ce4363af7aad60953bbc696a59d8109de7157e09882c20d80774941c63d7556c"}
Mar 09 18:30:12 crc kubenswrapper[4821]: I0309 18:30:12.581478 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tc2f5" podStartSLOduration=2.100729143 podStartE2EDuration="4.581455263s" podCreationTimestamp="2026-03-09 18:30:08 +0000 UTC" firstStartedPulling="2026-03-09 18:30:09.507484454 +0000 UTC m=+346.668860320" lastFinishedPulling="2026-03-09 18:30:11.988210584 +0000 UTC m=+349.149586440" observedRunningTime="2026-03-09 18:30:12.568528147 +0000 UTC m=+349.729904013" watchObservedRunningTime="2026-03-09 18:30:12.581455263 +0000 UTC m=+349.742831119"
Mar 09 18:30:13 crc kubenswrapper[4821]: I0309 18:30:13.564862 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-29tmk" event={"ID":"011ab61a-9a65-4112-8ab5-149d78479cc4","Type":"ContainerStarted","Data":"cabd10ba21c5583b2defdc9cd45e658714940054294eaa02903860277812617b"}
Mar 09 18:30:13 crc kubenswrapper[4821]: I0309 18:30:13.567632 4821 generic.go:334] "Generic (PLEG): container finished" podID="6a1328a9-ebc5-4976-8ed0-45de86204b20" containerID="614b3df88dfc176872dd31451bb4a0c98bcd6f37df0fb63ecb19f7335ae01d1f" exitCode=0
Mar 09 18:30:13 crc kubenswrapper[4821]: I0309 18:30:13.567696 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pchlh" event={"ID":"6a1328a9-ebc5-4976-8ed0-45de86204b20","Type":"ContainerDied","Data":"614b3df88dfc176872dd31451bb4a0c98bcd6f37df0fb63ecb19f7335ae01d1f"}
Mar 09 18:30:13 crc kubenswrapper[4821]: I0309 18:30:13.593236 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-29tmk" podStartSLOduration=3.129349682 podStartE2EDuration="5.593216577s" podCreationTimestamp="2026-03-09 18:30:08 +0000 UTC" firstStartedPulling="2026-03-09 18:30:10.525387637 +0000 UTC m=+347.686763493" lastFinishedPulling="2026-03-09 18:30:12.989254532 +0000 UTC m=+350.150630388" observedRunningTime="2026-03-09 18:30:13.591437298 +0000 UTC m=+350.752813184" watchObservedRunningTime="2026-03-09 18:30:13.593216577 +0000 UTC m=+350.754592443"
Mar 09 18:30:14 crc kubenswrapper[4821]: I0309 18:30:14.574213 4821 generic.go:334] "Generic (PLEG): container finished" podID="d2c0b89a-7aa2-44d9-93b3-87c4a29220d5" containerID="e05218d399182bfe218d1d3439e4fee34992f38c580315f53cbc40c547a85c94" exitCode=0
Mar 09 18:30:14 crc kubenswrapper[4821]: I0309 18:30:14.574309 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9cfmg" event={"ID":"d2c0b89a-7aa2-44d9-93b3-87c4a29220d5","Type":"ContainerDied","Data":"e05218d399182bfe218d1d3439e4fee34992f38c580315f53cbc40c547a85c94"}
Mar 09 18:30:15 crc kubenswrapper[4821]: I0309 18:30:15.581104
4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pchlh" event={"ID":"6a1328a9-ebc5-4976-8ed0-45de86204b20","Type":"ContainerStarted","Data":"57a143afd667cc0b0ef1af3521afbb597c6b2171212a138e153598afd4286ebd"} Mar 09 18:30:15 crc kubenswrapper[4821]: I0309 18:30:15.597288 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9cfmg" event={"ID":"d2c0b89a-7aa2-44d9-93b3-87c4a29220d5","Type":"ContainerStarted","Data":"8ece91230ceff6800a46592ae0746cc8e6d9d1c02a56ec3a1732061a8870a57d"} Mar 09 18:30:15 crc kubenswrapper[4821]: I0309 18:30:15.612919 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pchlh" podStartSLOduration=2.38648263 podStartE2EDuration="5.61290146s" podCreationTimestamp="2026-03-09 18:30:10 +0000 UTC" firstStartedPulling="2026-03-09 18:30:11.547458146 +0000 UTC m=+348.708834002" lastFinishedPulling="2026-03-09 18:30:14.773876966 +0000 UTC m=+351.935252832" observedRunningTime="2026-03-09 18:30:15.610062141 +0000 UTC m=+352.771438017" watchObservedRunningTime="2026-03-09 18:30:15.61290146 +0000 UTC m=+352.774277306" Mar 09 18:30:15 crc kubenswrapper[4821]: I0309 18:30:15.629037 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9cfmg" podStartSLOduration=1.961503607 podStartE2EDuration="4.629021234s" podCreationTimestamp="2026-03-09 18:30:11 +0000 UTC" firstStartedPulling="2026-03-09 18:30:12.55411473 +0000 UTC m=+349.715490586" lastFinishedPulling="2026-03-09 18:30:15.221632357 +0000 UTC m=+352.383008213" observedRunningTime="2026-03-09 18:30:15.627287307 +0000 UTC m=+352.788663173" watchObservedRunningTime="2026-03-09 18:30:15.629021234 +0000 UTC m=+352.790397090" Mar 09 18:30:18 crc kubenswrapper[4821]: I0309 18:30:18.379807 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-tc2f5" Mar 09 18:30:18 crc kubenswrapper[4821]: I0309 18:30:18.381504 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tc2f5" Mar 09 18:30:18 crc kubenswrapper[4821]: I0309 18:30:18.982022 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-29tmk" Mar 09 18:30:18 crc kubenswrapper[4821]: I0309 18:30:18.982134 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-29tmk" Mar 09 18:30:19 crc kubenswrapper[4821]: I0309 18:30:19.047840 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-29tmk" Mar 09 18:30:19 crc kubenswrapper[4821]: I0309 18:30:19.456420 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tc2f5" podUID="f077b409-1e21-4fb0-a973-8c57822d2b94" containerName="registry-server" probeResult="failure" output=< Mar 09 18:30:19 crc kubenswrapper[4821]: timeout: failed to connect service ":50051" within 1s Mar 09 18:30:19 crc kubenswrapper[4821]: > Mar 09 18:30:19 crc kubenswrapper[4821]: I0309 18:30:19.666670 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-29tmk" Mar 09 18:30:20 crc kubenswrapper[4821]: I0309 18:30:20.776815 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pchlh" Mar 09 18:30:20 crc kubenswrapper[4821]: I0309 18:30:20.777206 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pchlh" Mar 09 18:30:20 crc kubenswrapper[4821]: I0309 18:30:20.830587 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pchlh" Mar 09 
18:30:21 crc kubenswrapper[4821]: I0309 18:30:21.370972 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9cfmg" Mar 09 18:30:21 crc kubenswrapper[4821]: I0309 18:30:21.371074 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9cfmg" Mar 09 18:30:21 crc kubenswrapper[4821]: I0309 18:30:21.429919 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9cfmg" Mar 09 18:30:21 crc kubenswrapper[4821]: I0309 18:30:21.666009 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9cfmg" Mar 09 18:30:21 crc kubenswrapper[4821]: I0309 18:30:21.679473 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pchlh" Mar 09 18:30:28 crc kubenswrapper[4821]: I0309 18:30:28.445996 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tc2f5" Mar 09 18:30:28 crc kubenswrapper[4821]: I0309 18:30:28.492722 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tc2f5" Mar 09 18:30:31 crc kubenswrapper[4821]: I0309 18:30:31.374148 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-sr524"] Mar 09 18:30:31 crc kubenswrapper[4821]: I0309 18:30:31.375741 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-sr524" Mar 09 18:30:31 crc kubenswrapper[4821]: I0309 18:30:31.400874 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-sr524"] Mar 09 18:30:31 crc kubenswrapper[4821]: I0309 18:30:31.526663 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-sr524\" (UID: \"8a08d45c-ab67-496b-80e7-9f630d75e6cf\") " pod="openshift-image-registry/image-registry-66df7c8f76-sr524" Mar 09 18:30:31 crc kubenswrapper[4821]: I0309 18:30:31.526715 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8a08d45c-ab67-496b-80e7-9f630d75e6cf-ca-trust-extracted\") pod \"image-registry-66df7c8f76-sr524\" (UID: \"8a08d45c-ab67-496b-80e7-9f630d75e6cf\") " pod="openshift-image-registry/image-registry-66df7c8f76-sr524" Mar 09 18:30:31 crc kubenswrapper[4821]: I0309 18:30:31.526749 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8a08d45c-ab67-496b-80e7-9f630d75e6cf-bound-sa-token\") pod \"image-registry-66df7c8f76-sr524\" (UID: \"8a08d45c-ab67-496b-80e7-9f630d75e6cf\") " pod="openshift-image-registry/image-registry-66df7c8f76-sr524" Mar 09 18:30:31 crc kubenswrapper[4821]: I0309 18:30:31.526763 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a08d45c-ab67-496b-80e7-9f630d75e6cf-trusted-ca\") pod \"image-registry-66df7c8f76-sr524\" (UID: \"8a08d45c-ab67-496b-80e7-9f630d75e6cf\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-sr524" Mar 09 18:30:31 crc kubenswrapper[4821]: I0309 18:30:31.526792 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8a08d45c-ab67-496b-80e7-9f630d75e6cf-registry-tls\") pod \"image-registry-66df7c8f76-sr524\" (UID: \"8a08d45c-ab67-496b-80e7-9f630d75e6cf\") " pod="openshift-image-registry/image-registry-66df7c8f76-sr524" Mar 09 18:30:31 crc kubenswrapper[4821]: I0309 18:30:31.526806 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbfrg\" (UniqueName: \"kubernetes.io/projected/8a08d45c-ab67-496b-80e7-9f630d75e6cf-kube-api-access-jbfrg\") pod \"image-registry-66df7c8f76-sr524\" (UID: \"8a08d45c-ab67-496b-80e7-9f630d75e6cf\") " pod="openshift-image-registry/image-registry-66df7c8f76-sr524" Mar 09 18:30:31 crc kubenswrapper[4821]: I0309 18:30:31.526825 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8a08d45c-ab67-496b-80e7-9f630d75e6cf-installation-pull-secrets\") pod \"image-registry-66df7c8f76-sr524\" (UID: \"8a08d45c-ab67-496b-80e7-9f630d75e6cf\") " pod="openshift-image-registry/image-registry-66df7c8f76-sr524" Mar 09 18:30:31 crc kubenswrapper[4821]: I0309 18:30:31.527147 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8a08d45c-ab67-496b-80e7-9f630d75e6cf-registry-certificates\") pod \"image-registry-66df7c8f76-sr524\" (UID: \"8a08d45c-ab67-496b-80e7-9f630d75e6cf\") " pod="openshift-image-registry/image-registry-66df7c8f76-sr524" Mar 09 18:30:31 crc kubenswrapper[4821]: I0309 18:30:31.556099 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-sr524\" (UID: \"8a08d45c-ab67-496b-80e7-9f630d75e6cf\") " pod="openshift-image-registry/image-registry-66df7c8f76-sr524" Mar 09 18:30:31 crc kubenswrapper[4821]: I0309 18:30:31.628025 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8a08d45c-ab67-496b-80e7-9f630d75e6cf-registry-tls\") pod \"image-registry-66df7c8f76-sr524\" (UID: \"8a08d45c-ab67-496b-80e7-9f630d75e6cf\") " pod="openshift-image-registry/image-registry-66df7c8f76-sr524" Mar 09 18:30:31 crc kubenswrapper[4821]: I0309 18:30:31.629778 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbfrg\" (UniqueName: \"kubernetes.io/projected/8a08d45c-ab67-496b-80e7-9f630d75e6cf-kube-api-access-jbfrg\") pod \"image-registry-66df7c8f76-sr524\" (UID: \"8a08d45c-ab67-496b-80e7-9f630d75e6cf\") " pod="openshift-image-registry/image-registry-66df7c8f76-sr524" Mar 09 18:30:31 crc kubenswrapper[4821]: I0309 18:30:31.629822 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8a08d45c-ab67-496b-80e7-9f630d75e6cf-installation-pull-secrets\") pod \"image-registry-66df7c8f76-sr524\" (UID: \"8a08d45c-ab67-496b-80e7-9f630d75e6cf\") " pod="openshift-image-registry/image-registry-66df7c8f76-sr524" Mar 09 18:30:31 crc kubenswrapper[4821]: I0309 18:30:31.629912 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8a08d45c-ab67-496b-80e7-9f630d75e6cf-registry-certificates\") pod \"image-registry-66df7c8f76-sr524\" (UID: \"8a08d45c-ab67-496b-80e7-9f630d75e6cf\") " pod="openshift-image-registry/image-registry-66df7c8f76-sr524" Mar 09 
18:30:31 crc kubenswrapper[4821]: I0309 18:30:31.629963 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8a08d45c-ab67-496b-80e7-9f630d75e6cf-ca-trust-extracted\") pod \"image-registry-66df7c8f76-sr524\" (UID: \"8a08d45c-ab67-496b-80e7-9f630d75e6cf\") " pod="openshift-image-registry/image-registry-66df7c8f76-sr524" Mar 09 18:30:31 crc kubenswrapper[4821]: I0309 18:30:31.630019 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8a08d45c-ab67-496b-80e7-9f630d75e6cf-bound-sa-token\") pod \"image-registry-66df7c8f76-sr524\" (UID: \"8a08d45c-ab67-496b-80e7-9f630d75e6cf\") " pod="openshift-image-registry/image-registry-66df7c8f76-sr524" Mar 09 18:30:31 crc kubenswrapper[4821]: I0309 18:30:31.630044 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a08d45c-ab67-496b-80e7-9f630d75e6cf-trusted-ca\") pod \"image-registry-66df7c8f76-sr524\" (UID: \"8a08d45c-ab67-496b-80e7-9f630d75e6cf\") " pod="openshift-image-registry/image-registry-66df7c8f76-sr524" Mar 09 18:30:31 crc kubenswrapper[4821]: I0309 18:30:31.631080 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8a08d45c-ab67-496b-80e7-9f630d75e6cf-ca-trust-extracted\") pod \"image-registry-66df7c8f76-sr524\" (UID: \"8a08d45c-ab67-496b-80e7-9f630d75e6cf\") " pod="openshift-image-registry/image-registry-66df7c8f76-sr524" Mar 09 18:30:31 crc kubenswrapper[4821]: I0309 18:30:31.631593 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8a08d45c-ab67-496b-80e7-9f630d75e6cf-registry-certificates\") pod \"image-registry-66df7c8f76-sr524\" (UID: \"8a08d45c-ab67-496b-80e7-9f630d75e6cf\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-sr524" Mar 09 18:30:31 crc kubenswrapper[4821]: I0309 18:30:31.632796 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a08d45c-ab67-496b-80e7-9f630d75e6cf-trusted-ca\") pod \"image-registry-66df7c8f76-sr524\" (UID: \"8a08d45c-ab67-496b-80e7-9f630d75e6cf\") " pod="openshift-image-registry/image-registry-66df7c8f76-sr524" Mar 09 18:30:31 crc kubenswrapper[4821]: I0309 18:30:31.635513 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8a08d45c-ab67-496b-80e7-9f630d75e6cf-registry-tls\") pod \"image-registry-66df7c8f76-sr524\" (UID: \"8a08d45c-ab67-496b-80e7-9f630d75e6cf\") " pod="openshift-image-registry/image-registry-66df7c8f76-sr524" Mar 09 18:30:31 crc kubenswrapper[4821]: I0309 18:30:31.638016 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8a08d45c-ab67-496b-80e7-9f630d75e6cf-installation-pull-secrets\") pod \"image-registry-66df7c8f76-sr524\" (UID: \"8a08d45c-ab67-496b-80e7-9f630d75e6cf\") " pod="openshift-image-registry/image-registry-66df7c8f76-sr524" Mar 09 18:30:31 crc kubenswrapper[4821]: I0309 18:30:31.652539 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbfrg\" (UniqueName: \"kubernetes.io/projected/8a08d45c-ab67-496b-80e7-9f630d75e6cf-kube-api-access-jbfrg\") pod \"image-registry-66df7c8f76-sr524\" (UID: \"8a08d45c-ab67-496b-80e7-9f630d75e6cf\") " pod="openshift-image-registry/image-registry-66df7c8f76-sr524" Mar 09 18:30:31 crc kubenswrapper[4821]: I0309 18:30:31.661152 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8a08d45c-ab67-496b-80e7-9f630d75e6cf-bound-sa-token\") pod \"image-registry-66df7c8f76-sr524\" (UID: 
\"8a08d45c-ab67-496b-80e7-9f630d75e6cf\") " pod="openshift-image-registry/image-registry-66df7c8f76-sr524" Mar 09 18:30:31 crc kubenswrapper[4821]: I0309 18:30:31.702565 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-sr524" Mar 09 18:30:31 crc kubenswrapper[4821]: I0309 18:30:31.925995 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-sr524"] Mar 09 18:30:32 crc kubenswrapper[4821]: I0309 18:30:32.717581 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-sr524" event={"ID":"8a08d45c-ab67-496b-80e7-9f630d75e6cf","Type":"ContainerStarted","Data":"1ac5c7993d5a522cc6815a41d774d1efba451f63dd056542575a3334b6a2ebd4"} Mar 09 18:30:32 crc kubenswrapper[4821]: I0309 18:30:32.717961 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-sr524" event={"ID":"8a08d45c-ab67-496b-80e7-9f630d75e6cf","Type":"ContainerStarted","Data":"93ea7c6cd3069679ade755bf7290b5aee2769dae761b71d2eab0a5902b30c377"} Mar 09 18:30:32 crc kubenswrapper[4821]: I0309 18:30:32.717986 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-sr524" Mar 09 18:30:32 crc kubenswrapper[4821]: I0309 18:30:32.738860 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-sr524" podStartSLOduration=1.738842586 podStartE2EDuration="1.738842586s" podCreationTimestamp="2026-03-09 18:30:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:30:32.735750791 +0000 UTC m=+369.897126657" watchObservedRunningTime="2026-03-09 18:30:32.738842586 +0000 UTC m=+369.900218442" Mar 09 18:30:51 crc kubenswrapper[4821]: I0309 
18:30:51.715849 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-sr524" Mar 09 18:30:51 crc kubenswrapper[4821]: I0309 18:30:51.771782 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xbxp5"] Mar 09 18:31:16 crc kubenswrapper[4821]: I0309 18:31:16.813585 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" podUID="e0a42c85-7fab-45fc-b0b0-df2ae5082cd8" containerName="registry" containerID="cri-o://c46f3f486c116c0b4c8b13755c275a70b7a2dc5214375a6103fea84fa1ac5d04" gracePeriod=30 Mar 09 18:31:17 crc kubenswrapper[4821]: I0309 18:31:17.031191 4821 generic.go:334] "Generic (PLEG): container finished" podID="e0a42c85-7fab-45fc-b0b0-df2ae5082cd8" containerID="c46f3f486c116c0b4c8b13755c275a70b7a2dc5214375a6103fea84fa1ac5d04" exitCode=0 Mar 09 18:31:17 crc kubenswrapper[4821]: I0309 18:31:17.031241 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" event={"ID":"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8","Type":"ContainerDied","Data":"c46f3f486c116c0b4c8b13755c275a70b7a2dc5214375a6103fea84fa1ac5d04"} Mar 09 18:31:17 crc kubenswrapper[4821]: I0309 18:31:17.121651 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" Mar 09 18:31:17 crc kubenswrapper[4821]: I0309 18:31:17.317879 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-registry-certificates\") pod \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " Mar 09 18:31:17 crc kubenswrapper[4821]: I0309 18:31:17.318257 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " Mar 09 18:31:17 crc kubenswrapper[4821]: I0309 18:31:17.318407 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd5ls\" (UniqueName: \"kubernetes.io/projected/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-kube-api-access-gd5ls\") pod \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " Mar 09 18:31:17 crc kubenswrapper[4821]: I0309 18:31:17.318462 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-registry-tls\") pod \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " Mar 09 18:31:17 crc kubenswrapper[4821]: I0309 18:31:17.318506 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-installation-pull-secrets\") pod \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " Mar 09 18:31:17 crc kubenswrapper[4821]: I0309 18:31:17.318549 4821 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-ca-trust-extracted\") pod \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " Mar 09 18:31:17 crc kubenswrapper[4821]: I0309 18:31:17.318596 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-bound-sa-token\") pod \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " Mar 09 18:31:17 crc kubenswrapper[4821]: I0309 18:31:17.318670 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-trusted-ca\") pod \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\" (UID: \"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8\") " Mar 09 18:31:17 crc kubenswrapper[4821]: I0309 18:31:17.320034 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:31:17 crc kubenswrapper[4821]: I0309 18:31:17.320132 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:31:17 crc kubenswrapper[4821]: I0309 18:31:17.328189 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:31:17 crc kubenswrapper[4821]: I0309 18:31:17.334503 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-kube-api-access-gd5ls" (OuterVolumeSpecName: "kube-api-access-gd5ls") pod "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8"). InnerVolumeSpecName "kube-api-access-gd5ls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:31:17 crc kubenswrapper[4821]: I0309 18:31:17.334574 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:31:17 crc kubenswrapper[4821]: I0309 18:31:17.334601 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:31:17 crc kubenswrapper[4821]: I0309 18:31:17.335571 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 09 18:31:17 crc kubenswrapper[4821]: I0309 18:31:17.357537 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8" (UID: "e0a42c85-7fab-45fc-b0b0-df2ae5082cd8"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:31:17 crc kubenswrapper[4821]: I0309 18:31:17.420900 4821 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 09 18:31:17 crc kubenswrapper[4821]: I0309 18:31:17.420983 4821 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 09 18:31:17 crc kubenswrapper[4821]: I0309 18:31:17.421015 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd5ls\" (UniqueName: \"kubernetes.io/projected/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-kube-api-access-gd5ls\") on node \"crc\" DevicePath \"\"" Mar 09 18:31:17 crc kubenswrapper[4821]: I0309 18:31:17.421042 4821 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 09 18:31:17 crc kubenswrapper[4821]: I0309 18:31:17.421070 4821 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 09 18:31:17 crc kubenswrapper[4821]: I0309 18:31:17.421093 4821 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 09 18:31:17 crc kubenswrapper[4821]: I0309 18:31:17.421116 4821 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 09 18:31:18 crc kubenswrapper[4821]: I0309 18:31:18.038235 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5" event={"ID":"e0a42c85-7fab-45fc-b0b0-df2ae5082cd8","Type":"ContainerDied","Data":"fb3579d4693ea2f74a975e02218e71df16cb1642b5f7b227f44c6549cb013536"} Mar 09 18:31:18 crc kubenswrapper[4821]: I0309 18:31:18.038341 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xbxp5"
Mar 09 18:31:18 crc kubenswrapper[4821]: I0309 18:31:18.038635 4821 scope.go:117] "RemoveContainer" containerID="c46f3f486c116c0b4c8b13755c275a70b7a2dc5214375a6103fea84fa1ac5d04"
Mar 09 18:31:18 crc kubenswrapper[4821]: I0309 18:31:18.073538 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xbxp5"]
Mar 09 18:31:18 crc kubenswrapper[4821]: I0309 18:31:18.080277 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xbxp5"]
Mar 09 18:31:19 crc kubenswrapper[4821]: I0309 18:31:19.566624 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0a42c85-7fab-45fc-b0b0-df2ae5082cd8" path="/var/lib/kubelet/pods/e0a42c85-7fab-45fc-b0b0-df2ae5082cd8/volumes"
Mar 09 18:32:00 crc kubenswrapper[4821]: I0309 18:32:00.143037 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29551352-njjrt"]
Mar 09 18:32:00 crc kubenswrapper[4821]: E0309 18:32:00.143883 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0a42c85-7fab-45fc-b0b0-df2ae5082cd8" containerName="registry"
Mar 09 18:32:00 crc kubenswrapper[4821]: I0309 18:32:00.143899 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0a42c85-7fab-45fc-b0b0-df2ae5082cd8" containerName="registry"
Mar 09 18:32:00 crc kubenswrapper[4821]: I0309 18:32:00.144014 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0a42c85-7fab-45fc-b0b0-df2ae5082cd8" containerName="registry"
Mar 09 18:32:00 crc kubenswrapper[4821]: I0309 18:32:00.144456 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551352-njjrt"
Mar 09 18:32:00 crc kubenswrapper[4821]: I0309 18:32:00.148006 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-mxq7c"
Mar 09 18:32:00 crc kubenswrapper[4821]: I0309 18:32:00.148021 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 09 18:32:00 crc kubenswrapper[4821]: I0309 18:32:00.148076 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 09 18:32:00 crc kubenswrapper[4821]: I0309 18:32:00.156313 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551352-njjrt"]
Mar 09 18:32:00 crc kubenswrapper[4821]: I0309 18:32:00.289079 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4kcw\" (UniqueName: \"kubernetes.io/projected/8a893b80-b63c-4639-ab4f-974bc226128a-kube-api-access-j4kcw\") pod \"auto-csr-approver-29551352-njjrt\" (UID: \"8a893b80-b63c-4639-ab4f-974bc226128a\") " pod="openshift-infra/auto-csr-approver-29551352-njjrt"
Mar 09 18:32:00 crc kubenswrapper[4821]: I0309 18:32:00.390384 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4kcw\" (UniqueName: \"kubernetes.io/projected/8a893b80-b63c-4639-ab4f-974bc226128a-kube-api-access-j4kcw\") pod \"auto-csr-approver-29551352-njjrt\" (UID: \"8a893b80-b63c-4639-ab4f-974bc226128a\") " pod="openshift-infra/auto-csr-approver-29551352-njjrt"
Mar 09 18:32:00 crc kubenswrapper[4821]: I0309 18:32:00.408962 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4kcw\" (UniqueName: \"kubernetes.io/projected/8a893b80-b63c-4639-ab4f-974bc226128a-kube-api-access-j4kcw\") pod \"auto-csr-approver-29551352-njjrt\" (UID: \"8a893b80-b63c-4639-ab4f-974bc226128a\") " pod="openshift-infra/auto-csr-approver-29551352-njjrt"
Mar 09 18:32:00 crc kubenswrapper[4821]: I0309 18:32:00.470823 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551352-njjrt"
Mar 09 18:32:00 crc kubenswrapper[4821]: I0309 18:32:00.684465 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551352-njjrt"]
Mar 09 18:32:00 crc kubenswrapper[4821]: W0309 18:32:00.695157 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a893b80_b63c_4639_ab4f_974bc226128a.slice/crio-3e68b5f026a17473c45a17af7d374c29b199f175523b7a5301f5e1412802768e WatchSource:0}: Error finding container 3e68b5f026a17473c45a17af7d374c29b199f175523b7a5301f5e1412802768e: Status 404 returned error can't find the container with id 3e68b5f026a17473c45a17af7d374c29b199f175523b7a5301f5e1412802768e
Mar 09 18:32:01 crc kubenswrapper[4821]: I0309 18:32:01.321438 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551352-njjrt" event={"ID":"8a893b80-b63c-4639-ab4f-974bc226128a","Type":"ContainerStarted","Data":"3e68b5f026a17473c45a17af7d374c29b199f175523b7a5301f5e1412802768e"}
Mar 09 18:32:02 crc kubenswrapper[4821]: I0309 18:32:02.331015 4821 generic.go:334] "Generic (PLEG): container finished" podID="8a893b80-b63c-4639-ab4f-974bc226128a" containerID="389cace8aa6f9581c4fea45a07227e794aed004e9bf5f478020daa28f9f29b78" exitCode=0
Mar 09 18:32:02 crc kubenswrapper[4821]: I0309 18:32:02.331112 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551352-njjrt" event={"ID":"8a893b80-b63c-4639-ab4f-974bc226128a","Type":"ContainerDied","Data":"389cace8aa6f9581c4fea45a07227e794aed004e9bf5f478020daa28f9f29b78"}
Mar 09 18:32:03 crc kubenswrapper[4821]: I0309 18:32:03.647846 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551352-njjrt"
Mar 09 18:32:03 crc kubenswrapper[4821]: I0309 18:32:03.835472 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4kcw\" (UniqueName: \"kubernetes.io/projected/8a893b80-b63c-4639-ab4f-974bc226128a-kube-api-access-j4kcw\") pod \"8a893b80-b63c-4639-ab4f-974bc226128a\" (UID: \"8a893b80-b63c-4639-ab4f-974bc226128a\") "
Mar 09 18:32:03 crc kubenswrapper[4821]: I0309 18:32:03.843191 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a893b80-b63c-4639-ab4f-974bc226128a-kube-api-access-j4kcw" (OuterVolumeSpecName: "kube-api-access-j4kcw") pod "8a893b80-b63c-4639-ab4f-974bc226128a" (UID: "8a893b80-b63c-4639-ab4f-974bc226128a"). InnerVolumeSpecName "kube-api-access-j4kcw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:32:03 crc kubenswrapper[4821]: I0309 18:32:03.937170 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4kcw\" (UniqueName: \"kubernetes.io/projected/8a893b80-b63c-4639-ab4f-974bc226128a-kube-api-access-j4kcw\") on node \"crc\" DevicePath \"\""
Mar 09 18:32:04 crc kubenswrapper[4821]: I0309 18:32:04.350204 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551352-njjrt" event={"ID":"8a893b80-b63c-4639-ab4f-974bc226128a","Type":"ContainerDied","Data":"3e68b5f026a17473c45a17af7d374c29b199f175523b7a5301f5e1412802768e"}
Mar 09 18:32:04 crc kubenswrapper[4821]: I0309 18:32:04.350265 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e68b5f026a17473c45a17af7d374c29b199f175523b7a5301f5e1412802768e"
Mar 09 18:32:04 crc kubenswrapper[4821]: I0309 18:32:04.350299 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551352-njjrt"
Mar 09 18:32:04 crc kubenswrapper[4821]: I0309 18:32:04.718265 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29551346-phdwt"]
Mar 09 18:32:04 crc kubenswrapper[4821]: I0309 18:32:04.723368 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29551346-phdwt"]
Mar 09 18:32:05 crc kubenswrapper[4821]: I0309 18:32:05.563386 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60628f60-1633-4b77-a457-762d204bab20" path="/var/lib/kubelet/pods/60628f60-1633-4b77-a457-762d204bab20/volumes"
Mar 09 18:32:29 crc kubenswrapper[4821]: I0309 18:32:29.913569 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 09 18:32:29 crc kubenswrapper[4821]: I0309 18:32:29.914174 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 09 18:32:59 crc kubenswrapper[4821]: I0309 18:32:59.914155 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 09 18:32:59 crc kubenswrapper[4821]: I0309 18:32:59.915555 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 09 18:33:29 crc kubenswrapper[4821]: I0309 18:33:29.913330 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 09 18:33:29 crc kubenswrapper[4821]: I0309 18:33:29.913836 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 09 18:33:29 crc kubenswrapper[4821]: I0309 18:33:29.913907 4821 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs"
Mar 09 18:33:29 crc kubenswrapper[4821]: I0309 18:33:29.914452 4821 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"23cc64d2d10a8b69113d207c0a3d0a0de2d2f613ac820eaa318a413143f856a4"} pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 09 18:33:29 crc kubenswrapper[4821]: I0309 18:33:29.914498 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" containerID="cri-o://23cc64d2d10a8b69113d207c0a3d0a0de2d2f613ac820eaa318a413143f856a4" gracePeriod=600
Mar 09 18:33:30 crc kubenswrapper[4821]: I0309 18:33:30.931855 4821 generic.go:334] "Generic (PLEG): container finished" podID="3270571a-a484-4e66-8035-f43509b58add" containerID="23cc64d2d10a8b69113d207c0a3d0a0de2d2f613ac820eaa318a413143f856a4" exitCode=0
Mar 09 18:33:30 crc kubenswrapper[4821]: I0309 18:33:30.931990 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" event={"ID":"3270571a-a484-4e66-8035-f43509b58add","Type":"ContainerDied","Data":"23cc64d2d10a8b69113d207c0a3d0a0de2d2f613ac820eaa318a413143f856a4"}
Mar 09 18:33:30 crc kubenswrapper[4821]: I0309 18:33:30.932608 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" event={"ID":"3270571a-a484-4e66-8035-f43509b58add","Type":"ContainerStarted","Data":"de40e97a09448ee0292ed23dff4aa5fe956489128d71db5d125451ab26a025aa"}
Mar 09 18:33:30 crc kubenswrapper[4821]: I0309 18:33:30.932630 4821 scope.go:117] "RemoveContainer" containerID="d0a04c20f17e06f03335ee69aaf048806a74c7b9a2ff5530ba49284e7a12d777"
Mar 09 18:33:39 crc kubenswrapper[4821]: I0309 18:33:39.776445 4821 scope.go:117] "RemoveContainer" containerID="5267639d1b40b8d0a47829649ed4cc773eed9710e4dca98c1041946c1f8334ae"
Mar 09 18:33:39 crc kubenswrapper[4821]: I0309 18:33:39.798795 4821 scope.go:117] "RemoveContainer" containerID="240788ce8a383b28c4bc5e8a7d15974180644722157d8bc64efe50a8238166af"
Mar 09 18:34:00 crc kubenswrapper[4821]: I0309 18:34:00.149585 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29551354-vvfrb"]
Mar 09 18:34:00 crc kubenswrapper[4821]: E0309 18:34:00.150447 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a893b80-b63c-4639-ab4f-974bc226128a" containerName="oc"
Mar 09 18:34:00 crc kubenswrapper[4821]: I0309 18:34:00.150482 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a893b80-b63c-4639-ab4f-974bc226128a" containerName="oc"
Mar 09 18:34:00 crc kubenswrapper[4821]: I0309 18:34:00.150595 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a893b80-b63c-4639-ab4f-974bc226128a" containerName="oc"
Mar 09 18:34:00 crc kubenswrapper[4821]: I0309 18:34:00.151035 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551354-vvfrb"
Mar 09 18:34:00 crc kubenswrapper[4821]: I0309 18:34:00.153055 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-mxq7c"
Mar 09 18:34:00 crc kubenswrapper[4821]: I0309 18:34:00.153444 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 09 18:34:00 crc kubenswrapper[4821]: I0309 18:34:00.154959 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 09 18:34:00 crc kubenswrapper[4821]: I0309 18:34:00.158029 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551354-vvfrb"]
Mar 09 18:34:00 crc kubenswrapper[4821]: I0309 18:34:00.322213 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47mm4\" (UniqueName: \"kubernetes.io/projected/1240e366-1e5e-4d5e-9a11-fc281f0fd93b-kube-api-access-47mm4\") pod \"auto-csr-approver-29551354-vvfrb\" (UID: \"1240e366-1e5e-4d5e-9a11-fc281f0fd93b\") " pod="openshift-infra/auto-csr-approver-29551354-vvfrb"
Mar 09 18:34:00 crc kubenswrapper[4821]: I0309 18:34:00.423762 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47mm4\" (UniqueName: \"kubernetes.io/projected/1240e366-1e5e-4d5e-9a11-fc281f0fd93b-kube-api-access-47mm4\") pod \"auto-csr-approver-29551354-vvfrb\" (UID: \"1240e366-1e5e-4d5e-9a11-fc281f0fd93b\") " pod="openshift-infra/auto-csr-approver-29551354-vvfrb"
Mar 09 18:34:00 crc kubenswrapper[4821]: I0309 18:34:00.452684 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47mm4\" (UniqueName: \"kubernetes.io/projected/1240e366-1e5e-4d5e-9a11-fc281f0fd93b-kube-api-access-47mm4\") pod \"auto-csr-approver-29551354-vvfrb\" (UID: \"1240e366-1e5e-4d5e-9a11-fc281f0fd93b\") " pod="openshift-infra/auto-csr-approver-29551354-vvfrb"
Mar 09 18:34:00 crc kubenswrapper[4821]: I0309 18:34:00.472543 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551354-vvfrb"
Mar 09 18:34:00 crc kubenswrapper[4821]: I0309 18:34:00.678743 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551354-vvfrb"]
Mar 09 18:34:00 crc kubenswrapper[4821]: I0309 18:34:00.686031 4821 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 09 18:34:01 crc kubenswrapper[4821]: I0309 18:34:01.141761 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551354-vvfrb" event={"ID":"1240e366-1e5e-4d5e-9a11-fc281f0fd93b","Type":"ContainerStarted","Data":"6879a78946846b3c72a4aeb8aaaba2d4bfe6034cdd78562095107d20b8944041"}
Mar 09 18:34:02 crc kubenswrapper[4821]: I0309 18:34:02.148618 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551354-vvfrb" event={"ID":"1240e366-1e5e-4d5e-9a11-fc281f0fd93b","Type":"ContainerStarted","Data":"52a88d7631b887c0b250aa189cc34d8b10ac13a902d0f37eab607b8efd014210"}
Mar 09 18:34:02 crc kubenswrapper[4821]: I0309 18:34:02.169370 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29551354-vvfrb" podStartSLOduration=1.021085451 podStartE2EDuration="2.169348293s" podCreationTimestamp="2026-03-09 18:34:00 +0000 UTC" firstStartedPulling="2026-03-09 18:34:00.685777713 +0000 UTC m=+577.847153569" lastFinishedPulling="2026-03-09 18:34:01.834040555 +0000 UTC m=+578.995416411" observedRunningTime="2026-03-09 18:34:02.166500516 +0000 UTC m=+579.327876382" watchObservedRunningTime="2026-03-09 18:34:02.169348293 +0000 UTC m=+579.330724179"
Mar 09 18:34:03 crc kubenswrapper[4821]: I0309 18:34:03.156264 4821 generic.go:334] "Generic (PLEG): container finished" podID="1240e366-1e5e-4d5e-9a11-fc281f0fd93b" containerID="52a88d7631b887c0b250aa189cc34d8b10ac13a902d0f37eab607b8efd014210" exitCode=0
Mar 09 18:34:03 crc kubenswrapper[4821]: I0309 18:34:03.156386 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551354-vvfrb" event={"ID":"1240e366-1e5e-4d5e-9a11-fc281f0fd93b","Type":"ContainerDied","Data":"52a88d7631b887c0b250aa189cc34d8b10ac13a902d0f37eab607b8efd014210"}
Mar 09 18:34:04 crc kubenswrapper[4821]: I0309 18:34:04.374719 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551354-vvfrb"
Mar 09 18:34:04 crc kubenswrapper[4821]: I0309 18:34:04.380542 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47mm4\" (UniqueName: \"kubernetes.io/projected/1240e366-1e5e-4d5e-9a11-fc281f0fd93b-kube-api-access-47mm4\") pod \"1240e366-1e5e-4d5e-9a11-fc281f0fd93b\" (UID: \"1240e366-1e5e-4d5e-9a11-fc281f0fd93b\") "
Mar 09 18:34:04 crc kubenswrapper[4821]: I0309 18:34:04.390548 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1240e366-1e5e-4d5e-9a11-fc281f0fd93b-kube-api-access-47mm4" (OuterVolumeSpecName: "kube-api-access-47mm4") pod "1240e366-1e5e-4d5e-9a11-fc281f0fd93b" (UID: "1240e366-1e5e-4d5e-9a11-fc281f0fd93b"). InnerVolumeSpecName "kube-api-access-47mm4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:34:04 crc kubenswrapper[4821]: I0309 18:34:04.481960 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47mm4\" (UniqueName: \"kubernetes.io/projected/1240e366-1e5e-4d5e-9a11-fc281f0fd93b-kube-api-access-47mm4\") on node \"crc\" DevicePath \"\""
Mar 09 18:34:05 crc kubenswrapper[4821]: I0309 18:34:05.172924 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551354-vvfrb" event={"ID":"1240e366-1e5e-4d5e-9a11-fc281f0fd93b","Type":"ContainerDied","Data":"6879a78946846b3c72a4aeb8aaaba2d4bfe6034cdd78562095107d20b8944041"}
Mar 09 18:34:05 crc kubenswrapper[4821]: I0309 18:34:05.172975 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551354-vvfrb"
Mar 09 18:34:05 crc kubenswrapper[4821]: I0309 18:34:05.173003 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6879a78946846b3c72a4aeb8aaaba2d4bfe6034cdd78562095107d20b8944041"
Mar 09 18:34:05 crc kubenswrapper[4821]: I0309 18:34:05.240410 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29551348-txx6v"]
Mar 09 18:34:05 crc kubenswrapper[4821]: I0309 18:34:05.247113 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29551348-txx6v"]
Mar 09 18:34:05 crc kubenswrapper[4821]: I0309 18:34:05.563295 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d058c9b7-152c-49b8-9bbb-0681920dd243" path="/var/lib/kubelet/pods/d058c9b7-152c-49b8-9bbb-0681920dd243/volumes"
Mar 09 18:34:39 crc kubenswrapper[4821]: I0309 18:34:39.840935 4821 scope.go:117] "RemoveContainer" containerID="899e6f76e7aa4815ed1827f6927d363d5354ccc7f85517dc20d197ef68ddf545"
Mar 09 18:34:39 crc kubenswrapper[4821]: I0309 18:34:39.882362 4821 scope.go:117] "RemoveContainer" containerID="d4679044f8495b36e6b6667a3a6958878fb44b68f17fcefb12eaaf574ef27150"
Mar 09 18:34:39 crc kubenswrapper[4821]: I0309 18:34:39.913361 4821 scope.go:117] "RemoveContainer" containerID="3d2d2cf01882e65b7138c1461d311da211a3bf653eef4fce4832d9727245273c"
Mar 09 18:34:39 crc kubenswrapper[4821]: I0309 18:34:39.932101 4821 scope.go:117] "RemoveContainer" containerID="fb2aceb9d0a4a7e4213e2a4ddee561b3254153c25e68804624d692a0232af6a4"
Mar 09 18:34:39 crc kubenswrapper[4821]: I0309 18:34:39.964744 4821 scope.go:117] "RemoveContainer" containerID="ab331e63d918c4fef53d485f4669a8304b76f49bf19c262bf753483e0089b2b5"
Mar 09 18:34:40 crc kubenswrapper[4821]: I0309 18:34:40.014618 4821 scope.go:117] "RemoveContainer" containerID="7c81ea312ce1a396b6aabfd8967fc75f7cf75fba41b32cbc232d3bae8c28df51"
Mar 09 18:34:40 crc kubenswrapper[4821]: I0309 18:34:40.030048 4821 scope.go:117] "RemoveContainer" containerID="5416fb9adace19d77676dd6d3d578f796a5ec056337d1863a5cbf731739e138b"
Mar 09 18:34:40 crc kubenswrapper[4821]: I0309 18:34:40.045502 4821 scope.go:117] "RemoveContainer" containerID="a959fd964c95d575bc8de56dfa58e33cec163f220afab1d11923747c61ac1025"
Mar 09 18:35:41 crc kubenswrapper[4821]: I0309 18:35:41.065929 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6"]
Mar 09 18:35:41 crc kubenswrapper[4821]: E0309 18:35:41.066682 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1240e366-1e5e-4d5e-9a11-fc281f0fd93b" containerName="oc"
Mar 09 18:35:41 crc kubenswrapper[4821]: I0309 18:35:41.066697 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="1240e366-1e5e-4d5e-9a11-fc281f0fd93b" containerName="oc"
Mar 09 18:35:41 crc kubenswrapper[4821]: I0309 18:35:41.066788 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="1240e366-1e5e-4d5e-9a11-fc281f0fd93b" containerName="oc"
Mar 09 18:35:41 crc kubenswrapper[4821]: I0309 18:35:41.067459 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6"
Mar 09 18:35:41 crc kubenswrapper[4821]: I0309 18:35:41.069722 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Mar 09 18:35:41 crc kubenswrapper[4821]: I0309 18:35:41.079467 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6"]
Mar 09 18:35:41 crc kubenswrapper[4821]: I0309 18:35:41.138087 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9a7665a2-307a-4f7f-939a-b93afc455415-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6\" (UID: \"9a7665a2-307a-4f7f-939a-b93afc455415\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6"
Mar 09 18:35:41 crc kubenswrapper[4821]: I0309 18:35:41.138128 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqr5v\" (UniqueName: \"kubernetes.io/projected/9a7665a2-307a-4f7f-939a-b93afc455415-kube-api-access-hqr5v\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6\" (UID: \"9a7665a2-307a-4f7f-939a-b93afc455415\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6"
Mar 09 18:35:41 crc kubenswrapper[4821]: I0309 18:35:41.138152 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9a7665a2-307a-4f7f-939a-b93afc455415-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6\" (UID: \"9a7665a2-307a-4f7f-939a-b93afc455415\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6"
Mar 09 18:35:41 crc kubenswrapper[4821]: I0309 18:35:41.238915 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9a7665a2-307a-4f7f-939a-b93afc455415-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6\" (UID: \"9a7665a2-307a-4f7f-939a-b93afc455415\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6"
Mar 09 18:35:41 crc kubenswrapper[4821]: I0309 18:35:41.239259 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqr5v\" (UniqueName: \"kubernetes.io/projected/9a7665a2-307a-4f7f-939a-b93afc455415-kube-api-access-hqr5v\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6\" (UID: \"9a7665a2-307a-4f7f-939a-b93afc455415\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6"
Mar 09 18:35:41 crc kubenswrapper[4821]: I0309 18:35:41.239293 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9a7665a2-307a-4f7f-939a-b93afc455415-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6\" (UID: \"9a7665a2-307a-4f7f-939a-b93afc455415\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6"
Mar 09 18:35:41 crc kubenswrapper[4821]: I0309 18:35:41.239384 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9a7665a2-307a-4f7f-939a-b93afc455415-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6\" (UID: \"9a7665a2-307a-4f7f-939a-b93afc455415\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6"
Mar 09 18:35:41 crc kubenswrapper[4821]: I0309 18:35:41.239714 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9a7665a2-307a-4f7f-939a-b93afc455415-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6\" (UID: \"9a7665a2-307a-4f7f-939a-b93afc455415\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6"
Mar 09 18:35:41 crc kubenswrapper[4821]: I0309 18:35:41.258274 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqr5v\" (UniqueName: \"kubernetes.io/projected/9a7665a2-307a-4f7f-939a-b93afc455415-kube-api-access-hqr5v\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6\" (UID: \"9a7665a2-307a-4f7f-939a-b93afc455415\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6"
Mar 09 18:35:41 crc kubenswrapper[4821]: I0309 18:35:41.423301 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6"
Mar 09 18:35:41 crc kubenswrapper[4821]: I0309 18:35:41.620219 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6"]
Mar 09 18:35:41 crc kubenswrapper[4821]: I0309 18:35:41.772110 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6" event={"ID":"9a7665a2-307a-4f7f-939a-b93afc455415","Type":"ContainerStarted","Data":"7954eb6ca5b05318c0fa317994b59b52552cbf973c2bfdeb84f9cf9faa129af4"}
Mar 09 18:35:41 crc kubenswrapper[4821]: I0309 18:35:41.772165 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6" event={"ID":"9a7665a2-307a-4f7f-939a-b93afc455415","Type":"ContainerStarted","Data":"663f72b6b55f374b2751788692a7271437f283cea42cdba5222e6b42a3c41f2e"}
Mar 09 18:35:42 crc kubenswrapper[4821]: I0309 18:35:42.778715 4821 generic.go:334] "Generic (PLEG): container finished" podID="9a7665a2-307a-4f7f-939a-b93afc455415" containerID="7954eb6ca5b05318c0fa317994b59b52552cbf973c2bfdeb84f9cf9faa129af4" exitCode=0
Mar 09 18:35:42 crc kubenswrapper[4821]: I0309 18:35:42.779076 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6" event={"ID":"9a7665a2-307a-4f7f-939a-b93afc455415","Type":"ContainerDied","Data":"7954eb6ca5b05318c0fa317994b59b52552cbf973c2bfdeb84f9cf9faa129af4"}
Mar 09 18:35:44 crc kubenswrapper[4821]: I0309 18:35:44.793360 4821 generic.go:334] "Generic (PLEG): container finished" podID="9a7665a2-307a-4f7f-939a-b93afc455415" containerID="4054f81ea8ebd04a5656c0858de9ba5adbdd26c42d69a9a2951e19bd839ba236" exitCode=0
Mar 09 18:35:44 crc kubenswrapper[4821]: I0309 18:35:44.793528 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6" event={"ID":"9a7665a2-307a-4f7f-939a-b93afc455415","Type":"ContainerDied","Data":"4054f81ea8ebd04a5656c0858de9ba5adbdd26c42d69a9a2951e19bd839ba236"}
Mar 09 18:35:45 crc kubenswrapper[4821]: I0309 18:35:45.801944 4821 generic.go:334] "Generic (PLEG): container finished" podID="9a7665a2-307a-4f7f-939a-b93afc455415" containerID="4bbd0ce433c10343c3f016681b367e95dc44dcf1975ad4ddd23574fd646927ca" exitCode=0
Mar 09 18:35:45 crc kubenswrapper[4821]: I0309 18:35:45.802003 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6" event={"ID":"9a7665a2-307a-4f7f-939a-b93afc455415","Type":"ContainerDied","Data":"4bbd0ce433c10343c3f016681b367e95dc44dcf1975ad4ddd23574fd646927ca"}
Mar 09 18:35:47 crc kubenswrapper[4821]: I0309 18:35:47.019117 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6"
Mar 09 18:35:47 crc kubenswrapper[4821]: I0309 18:35:47.212535 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9a7665a2-307a-4f7f-939a-b93afc455415-bundle\") pod \"9a7665a2-307a-4f7f-939a-b93afc455415\" (UID: \"9a7665a2-307a-4f7f-939a-b93afc455415\") "
Mar 09 18:35:47 crc kubenswrapper[4821]: I0309 18:35:47.212600 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqr5v\" (UniqueName: \"kubernetes.io/projected/9a7665a2-307a-4f7f-939a-b93afc455415-kube-api-access-hqr5v\") pod \"9a7665a2-307a-4f7f-939a-b93afc455415\" (UID: \"9a7665a2-307a-4f7f-939a-b93afc455415\") "
Mar 09 18:35:47 crc kubenswrapper[4821]: I0309 18:35:47.212740 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9a7665a2-307a-4f7f-939a-b93afc455415-util\") pod \"9a7665a2-307a-4f7f-939a-b93afc455415\" (UID: \"9a7665a2-307a-4f7f-939a-b93afc455415\") "
Mar 09 18:35:47 crc kubenswrapper[4821]: I0309 18:35:47.214939 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a7665a2-307a-4f7f-939a-b93afc455415-bundle" (OuterVolumeSpecName: "bundle") pod "9a7665a2-307a-4f7f-939a-b93afc455415" (UID: "9a7665a2-307a-4f7f-939a-b93afc455415"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 18:35:47 crc kubenswrapper[4821]: I0309 18:35:47.218572 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a7665a2-307a-4f7f-939a-b93afc455415-kube-api-access-hqr5v" (OuterVolumeSpecName: "kube-api-access-hqr5v") pod "9a7665a2-307a-4f7f-939a-b93afc455415" (UID: "9a7665a2-307a-4f7f-939a-b93afc455415"). InnerVolumeSpecName "kube-api-access-hqr5v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:35:47 crc kubenswrapper[4821]: I0309 18:35:47.314220 4821 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9a7665a2-307a-4f7f-939a-b93afc455415-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 18:35:47 crc kubenswrapper[4821]: I0309 18:35:47.314264 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqr5v\" (UniqueName: \"kubernetes.io/projected/9a7665a2-307a-4f7f-939a-b93afc455415-kube-api-access-hqr5v\") on node \"crc\" DevicePath \"\""
Mar 09 18:35:47 crc kubenswrapper[4821]: I0309 18:35:47.379634 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a7665a2-307a-4f7f-939a-b93afc455415-util" (OuterVolumeSpecName: "util") pod "9a7665a2-307a-4f7f-939a-b93afc455415" (UID: "9a7665a2-307a-4f7f-939a-b93afc455415"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 18:35:47 crc kubenswrapper[4821]: I0309 18:35:47.415720 4821 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9a7665a2-307a-4f7f-939a-b93afc455415-util\") on node \"crc\" DevicePath \"\""
Mar 09 18:35:47 crc kubenswrapper[4821]: I0309 18:35:47.817207 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6" event={"ID":"9a7665a2-307a-4f7f-939a-b93afc455415","Type":"ContainerDied","Data":"663f72b6b55f374b2751788692a7271437f283cea42cdba5222e6b42a3c41f2e"}
Mar 09 18:35:47 crc kubenswrapper[4821]: I0309 18:35:47.817262 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="663f72b6b55f374b2751788692a7271437f283cea42cdba5222e6b42a3c41f2e"
Mar 09 18:35:47 crc kubenswrapper[4821]: I0309 18:35:47.817301 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6"
Mar 09 18:35:51 crc kubenswrapper[4821]: I0309 18:35:51.901415 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bfdsp"]
Mar 09 18:35:51 crc kubenswrapper[4821]: I0309 18:35:51.902201 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="ovn-controller" containerID="cri-o://279cdd1d6e3c8f599973312feec821775972886507788699cdda7c93f42ffb56" gracePeriod=30
Mar 09 18:35:51 crc kubenswrapper[4821]: I0309 18:35:51.902256 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="nbdb" containerID="cri-o://e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3" gracePeriod=30
Mar 09 18:35:51 crc kubenswrapper[4821]: I0309 18:35:51.902330 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="kube-rbac-proxy-node" containerID="cri-o://ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd" gracePeriod=30
Mar 09 18:35:51 crc kubenswrapper[4821]: I0309 18:35:51.902339 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="sbdb" containerID="cri-o://5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b" gracePeriod=30
Mar 09 18:35:51 crc kubenswrapper[4821]: I0309 18:35:51.902367 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="northd" containerID="cri-o://30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0" gracePeriod=30
Mar 09 18:35:51 crc kubenswrapper[4821]: I0309 18:35:51.902386 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58" gracePeriod=30
Mar 09 18:35:51 crc kubenswrapper[4821]: I0309 18:35:51.902417 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="ovn-acl-logging" containerID="cri-o://ea4b545a78227c9c417277cfca602a8f3eb3cbaf07ecf5dc7e63a929002e3c60" gracePeriod=30
Mar 09 18:35:51 crc kubenswrapper[4821]: I0309 18:35:51.956452 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="ovnkube-controller" containerID="cri-o://71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b" gracePeriod=30
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.374943 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bfdsp_40e368ce-5f0d-4208-a1de-67d4ab591f82/ovnkube-controller/0.log"
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.383520 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bfdsp_40e368ce-5f0d-4208-a1de-67d4ab591f82/ovn-acl-logging/0.log"
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.384048 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bfdsp_40e368ce-5f0d-4208-a1de-67d4ab591f82/ovn-controller/0.log"
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.384401 4821 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.479329 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-var-lib-openvswitch\") pod \"40e368ce-5f0d-4208-a1de-67d4ab591f82\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.479598 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-systemd-units\") pod \"40e368ce-5f0d-4208-a1de-67d4ab591f82\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.479480 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "40e368ce-5f0d-4208-a1de-67d4ab591f82" (UID: "40e368ce-5f0d-4208-a1de-67d4ab591f82"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.479665 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-kubelet\") pod \"40e368ce-5f0d-4208-a1de-67d4ab591f82\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.479740 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-run-netns\") pod \"40e368ce-5f0d-4208-a1de-67d4ab591f82\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.479737 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "40e368ce-5f0d-4208-a1de-67d4ab591f82" (UID: "40e368ce-5f0d-4208-a1de-67d4ab591f82"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.479814 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-run-openvswitch\") pod \"40e368ce-5f0d-4208-a1de-67d4ab591f82\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.479850 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-cni-netd\") pod \"40e368ce-5f0d-4208-a1de-67d4ab591f82\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.479869 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "40e368ce-5f0d-4208-a1de-67d4ab591f82" (UID: "40e368ce-5f0d-4208-a1de-67d4ab591f82"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.479880 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-run-ovn-kubernetes\") pod \"40e368ce-5f0d-4208-a1de-67d4ab591f82\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.479868 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "40e368ce-5f0d-4208-a1de-67d4ab591f82" (UID: "40e368ce-5f0d-4208-a1de-67d4ab591f82"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.479894 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "40e368ce-5f0d-4208-a1de-67d4ab591f82" (UID: "40e368ce-5f0d-4208-a1de-67d4ab591f82"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.479917 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9kmf\" (UniqueName: \"kubernetes.io/projected/40e368ce-5f0d-4208-a1de-67d4ab591f82-kube-api-access-c9kmf\") pod \"40e368ce-5f0d-4208-a1de-67d4ab591f82\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.479931 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "40e368ce-5f0d-4208-a1de-67d4ab591f82" (UID: "40e368ce-5f0d-4208-a1de-67d4ab591f82"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.479945 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-node-log\") pod \"40e368ce-5f0d-4208-a1de-67d4ab591f82\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.479963 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/40e368ce-5f0d-4208-a1de-67d4ab591f82-ovnkube-script-lib\") pod \"40e368ce-5f0d-4208-a1de-67d4ab591f82\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.479979 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-cni-bin\") pod \"40e368ce-5f0d-4208-a1de-67d4ab591f82\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.479986 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-node-log" (OuterVolumeSpecName: "node-log") pod "40e368ce-5f0d-4208-a1de-67d4ab591f82" (UID: "40e368ce-5f0d-4208-a1de-67d4ab591f82"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.480003 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/40e368ce-5f0d-4208-a1de-67d4ab591f82-env-overrides\") pod \"40e368ce-5f0d-4208-a1de-67d4ab591f82\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.480018 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-var-lib-cni-networks-ovn-kubernetes\") pod \"40e368ce-5f0d-4208-a1de-67d4ab591f82\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.480036 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/40e368ce-5f0d-4208-a1de-67d4ab591f82-ovn-node-metrics-cert\") pod \"40e368ce-5f0d-4208-a1de-67d4ab591f82\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.480058 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-etc-openvswitch\") pod \"40e368ce-5f0d-4208-a1de-67d4ab591f82\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.480071 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-slash\") pod \"40e368ce-5f0d-4208-a1de-67d4ab591f82\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.480090 4821 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-run-systemd\") pod \"40e368ce-5f0d-4208-a1de-67d4ab591f82\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.480111 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-log-socket\") pod \"40e368ce-5f0d-4208-a1de-67d4ab591f82\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.480140 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-run-ovn\") pod \"40e368ce-5f0d-4208-a1de-67d4ab591f82\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.480160 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/40e368ce-5f0d-4208-a1de-67d4ab591f82-ovnkube-config\") pod \"40e368ce-5f0d-4208-a1de-67d4ab591f82\" (UID: \"40e368ce-5f0d-4208-a1de-67d4ab591f82\") " Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.480365 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "40e368ce-5f0d-4208-a1de-67d4ab591f82" (UID: "40e368ce-5f0d-4208-a1de-67d4ab591f82"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.480414 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40e368ce-5f0d-4208-a1de-67d4ab591f82-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "40e368ce-5f0d-4208-a1de-67d4ab591f82" (UID: "40e368ce-5f0d-4208-a1de-67d4ab591f82"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.480435 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40e368ce-5f0d-4208-a1de-67d4ab591f82-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "40e368ce-5f0d-4208-a1de-67d4ab591f82" (UID: "40e368ce-5f0d-4208-a1de-67d4ab591f82"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.480456 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "40e368ce-5f0d-4208-a1de-67d4ab591f82" (UID: "40e368ce-5f0d-4208-a1de-67d4ab591f82"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.480466 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-slash" (OuterVolumeSpecName: "host-slash") pod "40e368ce-5f0d-4208-a1de-67d4ab591f82" (UID: "40e368ce-5f0d-4208-a1de-67d4ab591f82"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.480486 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "40e368ce-5f0d-4208-a1de-67d4ab591f82" (UID: "40e368ce-5f0d-4208-a1de-67d4ab591f82"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.480510 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-log-socket" (OuterVolumeSpecName: "log-socket") pod "40e368ce-5f0d-4208-a1de-67d4ab591f82" (UID: "40e368ce-5f0d-4208-a1de-67d4ab591f82"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.480615 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "40e368ce-5f0d-4208-a1de-67d4ab591f82" (UID: "40e368ce-5f0d-4208-a1de-67d4ab591f82"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.480668 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "40e368ce-5f0d-4208-a1de-67d4ab591f82" (UID: "40e368ce-5f0d-4208-a1de-67d4ab591f82"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.480764 4821 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-node-log\") on node \"crc\" DevicePath \"\"" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.480819 4821 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/40e368ce-5f0d-4208-a1de-67d4ab591f82-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.480866 4821 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-cni-bin\") on node \"crc\" DevicePath \"\"" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.480912 4821 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/40e368ce-5f0d-4208-a1de-67d4ab591f82-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.480962 4821 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.481010 4821 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-slash\") on node \"crc\" DevicePath \"\"" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.481058 4821 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.481106 4821 
reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-systemd-units\") on node \"crc\" DevicePath \"\"" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.481152 4821 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-kubelet\") on node \"crc\" DevicePath \"\"" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.481199 4821 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-run-netns\") on node \"crc\" DevicePath \"\"" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.481250 4821 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-run-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.481299 4821 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-cni-netd\") on node \"crc\" DevicePath \"\"" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.481362 4821 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.480995 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40e368ce-5f0d-4208-a1de-67d4ab591f82-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "40e368ce-5f0d-4208-a1de-67d4ab591f82" (UID: "40e368ce-5f0d-4208-a1de-67d4ab591f82"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.488853 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40e368ce-5f0d-4208-a1de-67d4ab591f82-kube-api-access-c9kmf" (OuterVolumeSpecName: "kube-api-access-c9kmf") pod "40e368ce-5f0d-4208-a1de-67d4ab591f82" (UID: "40e368ce-5f0d-4208-a1de-67d4ab591f82"). InnerVolumeSpecName "kube-api-access-c9kmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.490130 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40e368ce-5f0d-4208-a1de-67d4ab591f82-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "40e368ce-5f0d-4208-a1de-67d4ab591f82" (UID: "40e368ce-5f0d-4208-a1de-67d4ab591f82"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.495640 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "40e368ce-5f0d-4208-a1de-67d4ab591f82" (UID: "40e368ce-5f0d-4208-a1de-67d4ab591f82"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.531700 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hl2fd"] Mar 09 18:35:52 crc kubenswrapper[4821]: E0309 18:35:52.531891 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="ovn-controller" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.531903 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="ovn-controller" Mar 09 18:35:52 crc kubenswrapper[4821]: E0309 18:35:52.531916 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="nbdb" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.531922 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="nbdb" Mar 09 18:35:52 crc kubenswrapper[4821]: E0309 18:35:52.531931 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="kubecfg-setup" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.531936 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="kubecfg-setup" Mar 09 18:35:52 crc kubenswrapper[4821]: E0309 18:35:52.531944 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="ovnkube-controller" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.531950 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="ovnkube-controller" Mar 09 18:35:52 crc kubenswrapper[4821]: E0309 18:35:52.531958 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" 
containerName="kube-rbac-proxy-ovn-metrics" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.531964 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="kube-rbac-proxy-ovn-metrics" Mar 09 18:35:52 crc kubenswrapper[4821]: E0309 18:35:52.531974 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="ovn-acl-logging" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.531980 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="ovn-acl-logging" Mar 09 18:35:52 crc kubenswrapper[4821]: E0309 18:35:52.531988 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="sbdb" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.531994 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="sbdb" Mar 09 18:35:52 crc kubenswrapper[4821]: E0309 18:35:52.532002 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="northd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.532008 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="northd" Mar 09 18:35:52 crc kubenswrapper[4821]: E0309 18:35:52.532014 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a7665a2-307a-4f7f-939a-b93afc455415" containerName="util" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.532019 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a7665a2-307a-4f7f-939a-b93afc455415" containerName="util" Mar 09 18:35:52 crc kubenswrapper[4821]: E0309 18:35:52.532028 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="kube-rbac-proxy-node" Mar 09 18:35:52 
crc kubenswrapper[4821]: I0309 18:35:52.532033 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="kube-rbac-proxy-node" Mar 09 18:35:52 crc kubenswrapper[4821]: E0309 18:35:52.532041 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a7665a2-307a-4f7f-939a-b93afc455415" containerName="extract" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.532046 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a7665a2-307a-4f7f-939a-b93afc455415" containerName="extract" Mar 09 18:35:52 crc kubenswrapper[4821]: E0309 18:35:52.532056 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a7665a2-307a-4f7f-939a-b93afc455415" containerName="pull" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.532061 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a7665a2-307a-4f7f-939a-b93afc455415" containerName="pull" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.532148 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a7665a2-307a-4f7f-939a-b93afc455415" containerName="extract" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.532156 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="ovn-controller" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.532165 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="ovn-acl-logging" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.532173 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="northd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.532182 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="nbdb" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.532190 4821 
memory_manager.go:354] "RemoveStaleState removing state" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="kube-rbac-proxy-node" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.532199 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="sbdb" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.532207 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="kube-rbac-proxy-ovn-metrics" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.532215 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerName="ovnkube-controller" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.533770 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.582076 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9kmf\" (UniqueName: \"kubernetes.io/projected/40e368ce-5f0d-4208-a1de-67d4ab591f82-kube-api-access-c9kmf\") on node \"crc\" DevicePath \"\"" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.582110 4821 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.582124 4821 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/40e368ce-5f0d-4208-a1de-67d4ab591f82-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.582137 4821 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-run-systemd\") on node \"crc\" DevicePath \"\"" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.582168 4821 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-log-socket\") on node \"crc\" DevicePath \"\"" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.582176 4821 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/40e368ce-5f0d-4208-a1de-67d4ab591f82-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.582185 4821 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/40e368ce-5f0d-4208-a1de-67d4ab591f82-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.682954 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-host-cni-netd\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.683251 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-log-socket\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.683270 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-run-openvswitch\") pod \"ovnkube-node-hl2fd\" (UID: 
\"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.683287 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-run-systemd\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.683307 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/20e65f74-ecab-4bad-b2ea-09c0fac9406d-env-overrides\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.683335 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/20e65f74-ecab-4bad-b2ea-09c0fac9406d-ovnkube-script-lib\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.683364 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-node-log\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.683384 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqjbx\" (UniqueName: \"kubernetes.io/projected/20e65f74-ecab-4bad-b2ea-09c0fac9406d-kube-api-access-mqjbx\") pod 
\"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.683408 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-etc-openvswitch\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.683430 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-host-run-netns\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.683453 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-host-run-ovn-kubernetes\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.683467 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-run-ovn\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.683482 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-host-slash\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.683497 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-host-cni-bin\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.683512 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/20e65f74-ecab-4bad-b2ea-09c0fac9406d-ovn-node-metrics-cert\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.683527 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-systemd-units\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.683542 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-host-kubelet\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.683559 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.683577 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-var-lib-openvswitch\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.683599 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/20e65f74-ecab-4bad-b2ea-09c0fac9406d-ovnkube-config\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.784598 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-host-run-ovn-kubernetes\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.784641 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-run-ovn\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.784661 4821 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-systemd-units\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.784676 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-host-slash\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.784688 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-host-cni-bin\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.784701 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/20e65f74-ecab-4bad-b2ea-09c0fac9406d-ovn-node-metrics-cert\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.784717 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-host-kubelet\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.784735 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.784756 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-var-lib-openvswitch\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.784778 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/20e65f74-ecab-4bad-b2ea-09c0fac9406d-ovnkube-config\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.784801 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-host-cni-netd\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.784820 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-log-socket\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.784835 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-run-openvswitch\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.784849 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-run-systemd\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.784864 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/20e65f74-ecab-4bad-b2ea-09c0fac9406d-env-overrides\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.784881 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/20e65f74-ecab-4bad-b2ea-09c0fac9406d-ovnkube-script-lib\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.784899 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-node-log\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.784914 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqjbx\" (UniqueName: 
\"kubernetes.io/projected/20e65f74-ecab-4bad-b2ea-09c0fac9406d-kube-api-access-mqjbx\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.784933 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-etc-openvswitch\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.784954 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-host-run-netns\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.785038 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-host-run-netns\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.785074 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-host-run-ovn-kubernetes\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.785096 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-run-ovn\") pod 
\"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.785116 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-systemd-units\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.785138 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-host-slash\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.785157 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-host-cni-bin\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.785706 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-run-openvswitch\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.785789 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-var-lib-openvswitch\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.785829 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-host-kubelet\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.785894 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-run-systemd\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.786247 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/20e65f74-ecab-4bad-b2ea-09c0fac9406d-ovnkube-config\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.786289 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-host-cni-netd\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.786313 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-log-socket\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.785856 4821 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.786487 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/20e65f74-ecab-4bad-b2ea-09c0fac9406d-env-overrides\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.786519 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/20e65f74-ecab-4bad-b2ea-09c0fac9406d-ovnkube-script-lib\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.786535 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-etc-openvswitch\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.786561 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/20e65f74-ecab-4bad-b2ea-09c0fac9406d-node-log\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.788778 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/20e65f74-ecab-4bad-b2ea-09c0fac9406d-ovn-node-metrics-cert\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.828903 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqjbx\" (UniqueName: \"kubernetes.io/projected/20e65f74-ecab-4bad-b2ea-09c0fac9406d-kube-api-access-mqjbx\") pod \"ovnkube-node-hl2fd\" (UID: \"20e65f74-ecab-4bad-b2ea-09c0fac9406d\") " pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.840237 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bfdsp_40e368ce-5f0d-4208-a1de-67d4ab591f82/ovnkube-controller/0.log" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.842073 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bfdsp_40e368ce-5f0d-4208-a1de-67d4ab591f82/ovn-acl-logging/0.log" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.842478 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bfdsp_40e368ce-5f0d-4208-a1de-67d4ab591f82/ovn-controller/0.log" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.842742 4821 generic.go:334] "Generic (PLEG): container finished" podID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerID="71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b" exitCode=2 Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.842765 4821 generic.go:334] "Generic (PLEG): container finished" podID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerID="5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b" exitCode=0 Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.842773 4821 generic.go:334] "Generic (PLEG): container finished" 
podID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerID="e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3" exitCode=0 Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.842781 4821 generic.go:334] "Generic (PLEG): container finished" podID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerID="30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0" exitCode=0 Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.842789 4821 generic.go:334] "Generic (PLEG): container finished" podID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerID="9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58" exitCode=0 Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.842795 4821 generic.go:334] "Generic (PLEG): container finished" podID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerID="ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd" exitCode=0 Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.842802 4821 generic.go:334] "Generic (PLEG): container finished" podID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerID="ea4b545a78227c9c417277cfca602a8f3eb3cbaf07ecf5dc7e63a929002e3c60" exitCode=143 Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.842810 4821 generic.go:334] "Generic (PLEG): container finished" podID="40e368ce-5f0d-4208-a1de-67d4ab591f82" containerID="279cdd1d6e3c8f599973312feec821775972886507788699cdda7c93f42ffb56" exitCode=143 Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.842856 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" event={"ID":"40e368ce-5f0d-4208-a1de-67d4ab591f82","Type":"ContainerDied","Data":"71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b"} Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.842927 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" 
event={"ID":"40e368ce-5f0d-4208-a1de-67d4ab591f82","Type":"ContainerDied","Data":"5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b"} Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.842942 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" event={"ID":"40e368ce-5f0d-4208-a1de-67d4ab591f82","Type":"ContainerDied","Data":"e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3"} Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.842953 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" event={"ID":"40e368ce-5f0d-4208-a1de-67d4ab591f82","Type":"ContainerDied","Data":"30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0"} Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.842967 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" event={"ID":"40e368ce-5f0d-4208-a1de-67d4ab591f82","Type":"ContainerDied","Data":"9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58"} Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.842978 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" event={"ID":"40e368ce-5f0d-4208-a1de-67d4ab591f82","Type":"ContainerDied","Data":"ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd"} Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.842993 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea4b545a78227c9c417277cfca602a8f3eb3cbaf07ecf5dc7e63a929002e3c60"} Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843006 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"279cdd1d6e3c8f599973312feec821775972886507788699cdda7c93f42ffb56"} Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843013 4821 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"20d8ff057988af84aec7c1a1ab5e994a2e4e24a6e41ec94dfe473e323330bf1e"} Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843023 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" event={"ID":"40e368ce-5f0d-4208-a1de-67d4ab591f82","Type":"ContainerDied","Data":"ea4b545a78227c9c417277cfca602a8f3eb3cbaf07ecf5dc7e63a929002e3c60"} Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843025 4821 scope.go:117] "RemoveContainer" containerID="71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b" Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843032 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b"} Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843115 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b"} Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843124 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3"} Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843131 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0"} Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843138 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58"} Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843144 4821 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd"}
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843151 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea4b545a78227c9c417277cfca602a8f3eb3cbaf07ecf5dc7e63a929002e3c60"}
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843157 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"279cdd1d6e3c8f599973312feec821775972886507788699cdda7c93f42ffb56"}
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843164 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"20d8ff057988af84aec7c1a1ab5e994a2e4e24a6e41ec94dfe473e323330bf1e"}
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843173 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" event={"ID":"40e368ce-5f0d-4208-a1de-67d4ab591f82","Type":"ContainerDied","Data":"279cdd1d6e3c8f599973312feec821775972886507788699cdda7c93f42ffb56"}
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843187 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b"}
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843195 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b"}
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843202 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3"}
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843209 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0"}
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843217 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58"}
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843223 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd"}
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843229 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea4b545a78227c9c417277cfca602a8f3eb3cbaf07ecf5dc7e63a929002e3c60"}
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843235 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"279cdd1d6e3c8f599973312feec821775972886507788699cdda7c93f42ffb56"}
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843242 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"20d8ff057988af84aec7c1a1ab5e994a2e4e24a6e41ec94dfe473e323330bf1e"}
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843250 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp" event={"ID":"40e368ce-5f0d-4208-a1de-67d4ab591f82","Type":"ContainerDied","Data":"35ad5bac11a67a673410b088c66b16d64e5b64b55a43387b1c7814843428250f"}
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843260 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b"}
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843271 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b"}
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843279 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3"}
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843289 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0"}
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843294 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58"}
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843301 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd"}
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843308 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea4b545a78227c9c417277cfca602a8f3eb3cbaf07ecf5dc7e63a929002e3c60"}
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843343 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"279cdd1d6e3c8f599973312feec821775972886507788699cdda7c93f42ffb56"}
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843350 4821 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"20d8ff057988af84aec7c1a1ab5e994a2e4e24a6e41ec94dfe473e323330bf1e"}
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.843632 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bfdsp"
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.844517 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lw2hk_1a255bc9-2034-4a34-8240-f1fd42e808bd/kube-multus/0.log"
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.844562 4821 generic.go:334] "Generic (PLEG): container finished" podID="1a255bc9-2034-4a34-8240-f1fd42e808bd" containerID="79f6723e2866800eed4c31077a7d2546d460878f6ddec4829d142929a98f03b5" exitCode=2
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.844588 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lw2hk" event={"ID":"1a255bc9-2034-4a34-8240-f1fd42e808bd","Type":"ContainerDied","Data":"79f6723e2866800eed4c31077a7d2546d460878f6ddec4829d142929a98f03b5"}
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.844956 4821 scope.go:117] "RemoveContainer" containerID="79f6723e2866800eed4c31077a7d2546d460878f6ddec4829d142929a98f03b5"
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.845515 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd"
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.874041 4821 scope.go:117] "RemoveContainer" containerID="5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b"
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.890824 4821 scope.go:117] "RemoveContainer" containerID="e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3"
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.918110 4821 scope.go:117] "RemoveContainer" containerID="30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0"
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.955244 4821 scope.go:117] "RemoveContainer" containerID="9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58"
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.978411 4821 scope.go:117] "RemoveContainer" containerID="ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd"
Mar 09 18:35:52 crc kubenswrapper[4821]: I0309 18:35:52.999644 4821 scope.go:117] "RemoveContainer" containerID="ea4b545a78227c9c417277cfca602a8f3eb3cbaf07ecf5dc7e63a929002e3c60"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.028790 4821 scope.go:117] "RemoveContainer" containerID="279cdd1d6e3c8f599973312feec821775972886507788699cdda7c93f42ffb56"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.038747 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bfdsp"]
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.040532 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bfdsp"]
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.074215 4821 scope.go:117] "RemoveContainer" containerID="20d8ff057988af84aec7c1a1ab5e994a2e4e24a6e41ec94dfe473e323330bf1e"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.108723 4821 scope.go:117] "RemoveContainer" containerID="71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b"
Mar 09 18:35:53 crc kubenswrapper[4821]: E0309 18:35:53.109173 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b\": container with ID starting with 71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b not found: ID does not exist" containerID="71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.109206 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b"} err="failed to get container status \"71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b\": rpc error: code = NotFound desc = could not find container \"71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b\": container with ID starting with 71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.109235 4821 scope.go:117] "RemoveContainer" containerID="5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b"
Mar 09 18:35:53 crc kubenswrapper[4821]: E0309 18:35:53.112805 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b\": container with ID starting with 5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b not found: ID does not exist" containerID="5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.112842 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b"} err="failed to get container status \"5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b\": rpc error: code = NotFound desc = could not find container \"5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b\": container with ID starting with 5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.112866 4821 scope.go:117] "RemoveContainer" containerID="e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3"
Mar 09 18:35:53 crc kubenswrapper[4821]: E0309 18:35:53.113708 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3\": container with ID starting with e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3 not found: ID does not exist" containerID="e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.113745 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3"} err="failed to get container status \"e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3\": rpc error: code = NotFound desc = could not find container \"e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3\": container with ID starting with e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3 not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.113770 4821 scope.go:117] "RemoveContainer" containerID="30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0"
Mar 09 18:35:53 crc kubenswrapper[4821]: E0309 18:35:53.117892 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0\": container with ID starting with 30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0 not found: ID does not exist" containerID="30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.117926 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0"} err="failed to get container status \"30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0\": rpc error: code = NotFound desc = could not find container \"30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0\": container with ID starting with 30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0 not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.117948 4821 scope.go:117] "RemoveContainer" containerID="9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58"
Mar 09 18:35:53 crc kubenswrapper[4821]: E0309 18:35:53.121686 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58\": container with ID starting with 9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58 not found: ID does not exist" containerID="9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.121727 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58"} err="failed to get container status \"9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58\": rpc error: code = NotFound desc = could not find container \"9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58\": container with ID starting with 9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58 not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.121752 4821 scope.go:117] "RemoveContainer" containerID="ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd"
Mar 09 18:35:53 crc kubenswrapper[4821]: E0309 18:35:53.125720 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd\": container with ID starting with ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd not found: ID does not exist" containerID="ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.125762 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd"} err="failed to get container status \"ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd\": rpc error: code = NotFound desc = could not find container \"ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd\": container with ID starting with ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.125789 4821 scope.go:117] "RemoveContainer" containerID="ea4b545a78227c9c417277cfca602a8f3eb3cbaf07ecf5dc7e63a929002e3c60"
Mar 09 18:35:53 crc kubenswrapper[4821]: E0309 18:35:53.130709 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea4b545a78227c9c417277cfca602a8f3eb3cbaf07ecf5dc7e63a929002e3c60\": container with ID starting with ea4b545a78227c9c417277cfca602a8f3eb3cbaf07ecf5dc7e63a929002e3c60 not found: ID does not exist" containerID="ea4b545a78227c9c417277cfca602a8f3eb3cbaf07ecf5dc7e63a929002e3c60"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.130751 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea4b545a78227c9c417277cfca602a8f3eb3cbaf07ecf5dc7e63a929002e3c60"} err="failed to get container status \"ea4b545a78227c9c417277cfca602a8f3eb3cbaf07ecf5dc7e63a929002e3c60\": rpc error: code = NotFound desc = could not find container \"ea4b545a78227c9c417277cfca602a8f3eb3cbaf07ecf5dc7e63a929002e3c60\": container with ID starting with ea4b545a78227c9c417277cfca602a8f3eb3cbaf07ecf5dc7e63a929002e3c60 not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.130777 4821 scope.go:117] "RemoveContainer" containerID="279cdd1d6e3c8f599973312feec821775972886507788699cdda7c93f42ffb56"
Mar 09 18:35:53 crc kubenswrapper[4821]: E0309 18:35:53.134689 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"279cdd1d6e3c8f599973312feec821775972886507788699cdda7c93f42ffb56\": container with ID starting with 279cdd1d6e3c8f599973312feec821775972886507788699cdda7c93f42ffb56 not found: ID does not exist" containerID="279cdd1d6e3c8f599973312feec821775972886507788699cdda7c93f42ffb56"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.134730 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"279cdd1d6e3c8f599973312feec821775972886507788699cdda7c93f42ffb56"} err="failed to get container status \"279cdd1d6e3c8f599973312feec821775972886507788699cdda7c93f42ffb56\": rpc error: code = NotFound desc = could not find container \"279cdd1d6e3c8f599973312feec821775972886507788699cdda7c93f42ffb56\": container with ID starting with 279cdd1d6e3c8f599973312feec821775972886507788699cdda7c93f42ffb56 not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.134754 4821 scope.go:117] "RemoveContainer" containerID="20d8ff057988af84aec7c1a1ab5e994a2e4e24a6e41ec94dfe473e323330bf1e"
Mar 09 18:35:53 crc kubenswrapper[4821]: E0309 18:35:53.138901 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20d8ff057988af84aec7c1a1ab5e994a2e4e24a6e41ec94dfe473e323330bf1e\": container with ID starting with 20d8ff057988af84aec7c1a1ab5e994a2e4e24a6e41ec94dfe473e323330bf1e not found: ID does not exist" containerID="20d8ff057988af84aec7c1a1ab5e994a2e4e24a6e41ec94dfe473e323330bf1e"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.138941 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20d8ff057988af84aec7c1a1ab5e994a2e4e24a6e41ec94dfe473e323330bf1e"} err="failed to get container status \"20d8ff057988af84aec7c1a1ab5e994a2e4e24a6e41ec94dfe473e323330bf1e\": rpc error: code = NotFound desc = could not find container \"20d8ff057988af84aec7c1a1ab5e994a2e4e24a6e41ec94dfe473e323330bf1e\": container with ID starting with 20d8ff057988af84aec7c1a1ab5e994a2e4e24a6e41ec94dfe473e323330bf1e not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.138966 4821 scope.go:117] "RemoveContainer" containerID="71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.142263 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b"} err="failed to get container status \"71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b\": rpc error: code = NotFound desc = could not find container \"71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b\": container with ID starting with 71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.142303 4821 scope.go:117] "RemoveContainer" containerID="5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.146666 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b"} err="failed to get container status \"5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b\": rpc error: code = NotFound desc = could not find container \"5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b\": container with ID starting with 5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.146706 4821 scope.go:117] "RemoveContainer" containerID="e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.156546 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3"} err="failed to get container status \"e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3\": rpc error: code = NotFound desc = could not find container \"e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3\": container with ID starting with e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3 not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.156597 4821 scope.go:117] "RemoveContainer" containerID="30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.157047 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0"} err="failed to get container status \"30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0\": rpc error: code = NotFound desc = could not find container \"30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0\": container with ID starting with 30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0 not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.157067 4821 scope.go:117] "RemoveContainer" containerID="9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.159869 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58"} err="failed to get container status \"9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58\": rpc error: code = NotFound desc = could not find container \"9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58\": container with ID starting with 9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58 not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.159911 4821 scope.go:117] "RemoveContainer" containerID="ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.167777 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd"} err="failed to get container status \"ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd\": rpc error: code = NotFound desc = could not find container \"ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd\": container with ID starting with ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.167819 4821 scope.go:117] "RemoveContainer" containerID="ea4b545a78227c9c417277cfca602a8f3eb3cbaf07ecf5dc7e63a929002e3c60"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.171651 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea4b545a78227c9c417277cfca602a8f3eb3cbaf07ecf5dc7e63a929002e3c60"} err="failed to get container status \"ea4b545a78227c9c417277cfca602a8f3eb3cbaf07ecf5dc7e63a929002e3c60\": rpc error: code = NotFound desc = could not find container \"ea4b545a78227c9c417277cfca602a8f3eb3cbaf07ecf5dc7e63a929002e3c60\": container with ID starting with ea4b545a78227c9c417277cfca602a8f3eb3cbaf07ecf5dc7e63a929002e3c60 not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.171688 4821 scope.go:117] "RemoveContainer" containerID="279cdd1d6e3c8f599973312feec821775972886507788699cdda7c93f42ffb56"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.175607 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"279cdd1d6e3c8f599973312feec821775972886507788699cdda7c93f42ffb56"} err="failed to get container status \"279cdd1d6e3c8f599973312feec821775972886507788699cdda7c93f42ffb56\": rpc error: code = NotFound desc = could not find container \"279cdd1d6e3c8f599973312feec821775972886507788699cdda7c93f42ffb56\": container with ID starting with 279cdd1d6e3c8f599973312feec821775972886507788699cdda7c93f42ffb56 not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.175638 4821 scope.go:117] "RemoveContainer" containerID="20d8ff057988af84aec7c1a1ab5e994a2e4e24a6e41ec94dfe473e323330bf1e"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.177186 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20d8ff057988af84aec7c1a1ab5e994a2e4e24a6e41ec94dfe473e323330bf1e"} err="failed to get container status \"20d8ff057988af84aec7c1a1ab5e994a2e4e24a6e41ec94dfe473e323330bf1e\": rpc error: code = NotFound desc = could not find container \"20d8ff057988af84aec7c1a1ab5e994a2e4e24a6e41ec94dfe473e323330bf1e\": container with ID starting with 20d8ff057988af84aec7c1a1ab5e994a2e4e24a6e41ec94dfe473e323330bf1e not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.177216 4821 scope.go:117] "RemoveContainer" containerID="71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.177564 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b"} err="failed to get container status \"71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b\": rpc error: code = NotFound desc = could not find container \"71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b\": container with ID starting with 71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.177598 4821 scope.go:117] "RemoveContainer" containerID="5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.177799 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b"} err="failed to get container status \"5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b\": rpc error: code = NotFound desc = could not find container \"5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b\": container with ID starting with 5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.177817 4821 scope.go:117] "RemoveContainer" containerID="e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.178038 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3"} err="failed to get container status \"e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3\": rpc error: code = NotFound desc = could not find container \"e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3\": container with ID starting with e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3 not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.178086 4821 scope.go:117] "RemoveContainer" containerID="30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.182016 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0"} err="failed to get container status \"30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0\": rpc error: code = NotFound desc = could not find container \"30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0\": container with ID starting with 30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0 not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.182076 4821 scope.go:117] "RemoveContainer" containerID="9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.182568 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58"} err="failed to get container status \"9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58\": rpc error: code = NotFound desc = could not find container \"9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58\": container with ID starting with 9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58 not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.182593 4821 scope.go:117] "RemoveContainer" containerID="ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.183582 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd"} err="failed to get container status \"ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd\": rpc error: code = NotFound desc = could not find container \"ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd\": container with ID starting with ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.183608 4821 scope.go:117] "RemoveContainer" containerID="ea4b545a78227c9c417277cfca602a8f3eb3cbaf07ecf5dc7e63a929002e3c60"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.183847 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea4b545a78227c9c417277cfca602a8f3eb3cbaf07ecf5dc7e63a929002e3c60"} err="failed to get container status \"ea4b545a78227c9c417277cfca602a8f3eb3cbaf07ecf5dc7e63a929002e3c60\": rpc error: code = NotFound desc = could not find container \"ea4b545a78227c9c417277cfca602a8f3eb3cbaf07ecf5dc7e63a929002e3c60\": container with ID starting with ea4b545a78227c9c417277cfca602a8f3eb3cbaf07ecf5dc7e63a929002e3c60 not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.183870 4821 scope.go:117] "RemoveContainer" containerID="279cdd1d6e3c8f599973312feec821775972886507788699cdda7c93f42ffb56"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.184075 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"279cdd1d6e3c8f599973312feec821775972886507788699cdda7c93f42ffb56"} err="failed to get container status \"279cdd1d6e3c8f599973312feec821775972886507788699cdda7c93f42ffb56\": rpc error: code = NotFound desc = could not find container \"279cdd1d6e3c8f599973312feec821775972886507788699cdda7c93f42ffb56\": container with ID starting with 279cdd1d6e3c8f599973312feec821775972886507788699cdda7c93f42ffb56 not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.184092 4821 scope.go:117] "RemoveContainer" containerID="20d8ff057988af84aec7c1a1ab5e994a2e4e24a6e41ec94dfe473e323330bf1e"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.184238 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20d8ff057988af84aec7c1a1ab5e994a2e4e24a6e41ec94dfe473e323330bf1e"} err="failed to get container status \"20d8ff057988af84aec7c1a1ab5e994a2e4e24a6e41ec94dfe473e323330bf1e\": rpc error: code = NotFound desc = could not find container \"20d8ff057988af84aec7c1a1ab5e994a2e4e24a6e41ec94dfe473e323330bf1e\": container with ID starting with 20d8ff057988af84aec7c1a1ab5e994a2e4e24a6e41ec94dfe473e323330bf1e not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.184255 4821 scope.go:117] "RemoveContainer" containerID="71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.184487 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b"} err="failed to get container status \"71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b\": rpc error: code = NotFound desc = could not find container \"71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b\": container with ID starting with 71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.184506 4821 scope.go:117] "RemoveContainer" containerID="5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.185569 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b"} err="failed to get container status \"5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b\": rpc error: code = NotFound desc = could not find container \"5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b\": container with ID starting with 5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.185591 4821 scope.go:117] "RemoveContainer" containerID="e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.186062 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3"} err="failed to get container status \"e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3\": rpc error: code = NotFound desc = could not find container \"e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3\": container with ID starting with e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3 not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.186083 4821 scope.go:117] "RemoveContainer" containerID="30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.186271 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0"} err="failed to get container status \"30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0\": rpc error: code = NotFound desc = could not find container \"30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0\": container with ID starting with 30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0 not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.186287 4821 scope.go:117] "RemoveContainer" containerID="9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.186546 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58"} err="failed to get container status \"9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58\": rpc error: code = NotFound desc = could not find container \"9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58\": container with ID starting with 9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58 not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.186568 4821 scope.go:117] "RemoveContainer" containerID="ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.186791 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd"} err="failed to get container status \"ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd\": rpc error: code = NotFound desc = could not find container \"ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd\": container with ID starting with ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.186813 4821 scope.go:117] "RemoveContainer" containerID="ea4b545a78227c9c417277cfca602a8f3eb3cbaf07ecf5dc7e63a929002e3c60"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.187032 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea4b545a78227c9c417277cfca602a8f3eb3cbaf07ecf5dc7e63a929002e3c60"} err="failed to get container status \"ea4b545a78227c9c417277cfca602a8f3eb3cbaf07ecf5dc7e63a929002e3c60\": rpc error: code = NotFound desc = could not find container \"ea4b545a78227c9c417277cfca602a8f3eb3cbaf07ecf5dc7e63a929002e3c60\": container with ID starting with ea4b545a78227c9c417277cfca602a8f3eb3cbaf07ecf5dc7e63a929002e3c60 not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.187049 4821 scope.go:117] "RemoveContainer" containerID="279cdd1d6e3c8f599973312feec821775972886507788699cdda7c93f42ffb56"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.187212 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"279cdd1d6e3c8f599973312feec821775972886507788699cdda7c93f42ffb56"} err="failed to get container status \"279cdd1d6e3c8f599973312feec821775972886507788699cdda7c93f42ffb56\": rpc error: code = NotFound desc = could not find container \"279cdd1d6e3c8f599973312feec821775972886507788699cdda7c93f42ffb56\": container with ID starting with 279cdd1d6e3c8f599973312feec821775972886507788699cdda7c93f42ffb56 not found: ID does not exist"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.187229 4821 scope.go:117] "RemoveContainer" containerID="20d8ff057988af84aec7c1a1ab5e994a2e4e24a6e41ec94dfe473e323330bf1e"
Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.187402 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20d8ff057988af84aec7c1a1ab5e994a2e4e24a6e41ec94dfe473e323330bf1e"} err="failed to get container status \"20d8ff057988af84aec7c1a1ab5e994a2e4e24a6e41ec94dfe473e323330bf1e\": rpc error: code = NotFound desc = could not find container \"20d8ff057988af84aec7c1a1ab5e994a2e4e24a6e41ec94dfe473e323330bf1e\": container with ID starting with
20d8ff057988af84aec7c1a1ab5e994a2e4e24a6e41ec94dfe473e323330bf1e not found: ID does not exist" Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.187418 4821 scope.go:117] "RemoveContainer" containerID="71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b" Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.187668 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b"} err="failed to get container status \"71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b\": rpc error: code = NotFound desc = could not find container \"71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b\": container with ID starting with 71d0daa4ccdc131ca3499749e7530e507fb5202e9a1fa73052faea4e96c3431b not found: ID does not exist" Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.187688 4821 scope.go:117] "RemoveContainer" containerID="5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b" Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.187849 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b"} err="failed to get container status \"5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b\": rpc error: code = NotFound desc = could not find container \"5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b\": container with ID starting with 5ad089c04613b85f0f80a60d1e1b6f5658b8e9ccebff599a6079b084aac79e7b not found: ID does not exist" Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.187866 4821 scope.go:117] "RemoveContainer" containerID="e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3" Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.188010 4821 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3"} err="failed to get container status \"e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3\": rpc error: code = NotFound desc = could not find container \"e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3\": container with ID starting with e0faddfcf11613bfe2ac7edabe36aacbe344a21d13f8a448ccb872ac875d05a3 not found: ID does not exist" Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.188025 4821 scope.go:117] "RemoveContainer" containerID="30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0" Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.188306 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0"} err="failed to get container status \"30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0\": rpc error: code = NotFound desc = could not find container \"30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0\": container with ID starting with 30e86b3ab6e8b736b05eb07aea7f3bf98d48cfebfd0d269b5ea1162f83ee9fe0 not found: ID does not exist" Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.188366 4821 scope.go:117] "RemoveContainer" containerID="9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58" Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.188618 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58"} err="failed to get container status \"9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58\": rpc error: code = NotFound desc = could not find container \"9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58\": container with ID starting with 9688777ec83c9651d5c3fbe189b19eb85eb81b1fd1de3fc60c72290310a06e58 not found: ID does not 
exist" Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.188666 4821 scope.go:117] "RemoveContainer" containerID="ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd" Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.188942 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd"} err="failed to get container status \"ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd\": rpc error: code = NotFound desc = could not find container \"ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd\": container with ID starting with ce2e2c15c050268f76ae0ec7eddc428e44439e88e315f3492efe37e398b885cd not found: ID does not exist" Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.558308 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40e368ce-5f0d-4208-a1de-67d4ab591f82" path="/var/lib/kubelet/pods/40e368ce-5f0d-4208-a1de-67d4ab591f82/volumes" Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.852108 4821 generic.go:334] "Generic (PLEG): container finished" podID="20e65f74-ecab-4bad-b2ea-09c0fac9406d" containerID="69d2423b4ffdf85a9190225f4d7891c3b0f70ac53a844c2e8ed293570aa9db2f" exitCode=0 Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.852165 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" event={"ID":"20e65f74-ecab-4bad-b2ea-09c0fac9406d","Type":"ContainerDied","Data":"69d2423b4ffdf85a9190225f4d7891c3b0f70ac53a844c2e8ed293570aa9db2f"} Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.852192 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" event={"ID":"20e65f74-ecab-4bad-b2ea-09c0fac9406d","Type":"ContainerStarted","Data":"cdd5109981487a639d41653cb3d38e3fd02afdfc64372dda0980783a2710213f"} Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.856079 4821 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lw2hk_1a255bc9-2034-4a34-8240-f1fd42e808bd/kube-multus/0.log" Mar 09 18:35:53 crc kubenswrapper[4821]: I0309 18:35:53.856143 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lw2hk" event={"ID":"1a255bc9-2034-4a34-8240-f1fd42e808bd","Type":"ContainerStarted","Data":"66da0f74cfc0c14f9691b9a0817371b334d6aea2757edeb1c409a3627d57b8b6"} Mar 09 18:35:54 crc kubenswrapper[4821]: I0309 18:35:54.862565 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" event={"ID":"20e65f74-ecab-4bad-b2ea-09c0fac9406d","Type":"ContainerStarted","Data":"6859bc06192656b1b4e69713bbc4a10aed772316105e157c0a89510850b23c94"} Mar 09 18:35:54 crc kubenswrapper[4821]: I0309 18:35:54.863823 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" event={"ID":"20e65f74-ecab-4bad-b2ea-09c0fac9406d","Type":"ContainerStarted","Data":"900de39abb906d8d6def89104d1e1073dcf6919ff91d9ad69442d86caa4c7d29"} Mar 09 18:35:54 crc kubenswrapper[4821]: I0309 18:35:54.863890 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" event={"ID":"20e65f74-ecab-4bad-b2ea-09c0fac9406d","Type":"ContainerStarted","Data":"2daed2df70b7755768396553f36e67875eaf5c8c3460e2a13aed879658c3a5e3"} Mar 09 18:35:54 crc kubenswrapper[4821]: I0309 18:35:54.863969 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" event={"ID":"20e65f74-ecab-4bad-b2ea-09c0fac9406d","Type":"ContainerStarted","Data":"a7b7477a8e1247e29b2ecac362c1e42201fc9d30204fcc154e9f1ff9aadcbc51"} Mar 09 18:35:54 crc kubenswrapper[4821]: I0309 18:35:54.864034 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" 
event={"ID":"20e65f74-ecab-4bad-b2ea-09c0fac9406d","Type":"ContainerStarted","Data":"d910a2cca5a3d0407d5b753fc0c10ca24bbe759fb59268c9f8a7399cb9dd5598"} Mar 09 18:35:54 crc kubenswrapper[4821]: I0309 18:35:54.864087 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" event={"ID":"20e65f74-ecab-4bad-b2ea-09c0fac9406d","Type":"ContainerStarted","Data":"7fb9df650605cc2b39aa1a2ad1fdd951b1ba35a9872f2ce12b5f73402a3fa2c1"} Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.424074 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-cqfq5"] Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.424747 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cqfq5" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.426623 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.427301 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-g4zdf" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.427934 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.524720 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzpsr\" (UniqueName: \"kubernetes.io/projected/85b873a4-96da-407a-b4af-30ba3aa97519-kube-api-access-rzpsr\") pod \"obo-prometheus-operator-68bc856cb9-cqfq5\" (UID: \"85b873a4-96da-407a-b4af-30ba3aa97519\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cqfq5" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.559407 4821 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-44lfv"] Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.560287 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-xfmsd"] Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.560490 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-44lfv" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.561082 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-xfmsd" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.564081 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-czpb2" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.564289 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.626309 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzpsr\" (UniqueName: \"kubernetes.io/projected/85b873a4-96da-407a-b4af-30ba3aa97519-kube-api-access-rzpsr\") pod \"obo-prometheus-operator-68bc856cb9-cqfq5\" (UID: \"85b873a4-96da-407a-b4af-30ba3aa97519\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cqfq5" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.659113 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzpsr\" (UniqueName: \"kubernetes.io/projected/85b873a4-96da-407a-b4af-30ba3aa97519-kube-api-access-rzpsr\") pod \"obo-prometheus-operator-68bc856cb9-cqfq5\" (UID: \"85b873a4-96da-407a-b4af-30ba3aa97519\") " 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cqfq5" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.660205 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-j8xx6"] Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.661298 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-j8xx6" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.665151 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-zcrgz" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.665411 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.727725 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c294d09f-af0a-400e-90ea-1097080fb096-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-559887c586-44lfv\" (UID: \"c294d09f-af0a-400e-90ea-1097080fb096\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-44lfv" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.727778 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c294d09f-af0a-400e-90ea-1097080fb096-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-559887c586-44lfv\" (UID: \"c294d09f-af0a-400e-90ea-1097080fb096\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-44lfv" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.727828 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/80d114e5-b1d1-496c-a0c1-3eeb8d2f67c8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-559887c586-xfmsd\" (UID: \"80d114e5-b1d1-496c-a0c1-3eeb8d2f67c8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-xfmsd" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.727854 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/80d114e5-b1d1-496c-a0c1-3eeb8d2f67c8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-559887c586-xfmsd\" (UID: \"80d114e5-b1d1-496c-a0c1-3eeb8d2f67c8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-xfmsd" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.741692 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cqfq5" Mar 09 18:35:55 crc kubenswrapper[4821]: E0309 18:35:55.765833 4821 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-cqfq5_openshift-operators_85b873a4-96da-407a-b4af-30ba3aa97519_0(1dd3bbcd016f7e305935400c5054ade2372d6b9f98928eb608042caf074b1b83): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 09 18:35:55 crc kubenswrapper[4821]: E0309 18:35:55.765922 4821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-cqfq5_openshift-operators_85b873a4-96da-407a-b4af-30ba3aa97519_0(1dd3bbcd016f7e305935400c5054ade2372d6b9f98928eb608042caf074b1b83): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cqfq5" Mar 09 18:35:55 crc kubenswrapper[4821]: E0309 18:35:55.765950 4821 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-cqfq5_openshift-operators_85b873a4-96da-407a-b4af-30ba3aa97519_0(1dd3bbcd016f7e305935400c5054ade2372d6b9f98928eb608042caf074b1b83): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cqfq5" Mar 09 18:35:55 crc kubenswrapper[4821]: E0309 18:35:55.766011 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-cqfq5_openshift-operators(85b873a4-96da-407a-b4af-30ba3aa97519)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-cqfq5_openshift-operators(85b873a4-96da-407a-b4af-30ba3aa97519)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-cqfq5_openshift-operators_85b873a4-96da-407a-b4af-30ba3aa97519_0(1dd3bbcd016f7e305935400c5054ade2372d6b9f98928eb608042caf074b1b83): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cqfq5" podUID="85b873a4-96da-407a-b4af-30ba3aa97519" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.776042 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-p679h"] Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.776728 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-p679h" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.778373 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-qbsct" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.828748 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c294d09f-af0a-400e-90ea-1097080fb096-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-559887c586-44lfv\" (UID: \"c294d09f-af0a-400e-90ea-1097080fb096\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-44lfv" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.828799 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c294d09f-af0a-400e-90ea-1097080fb096-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-559887c586-44lfv\" (UID: \"c294d09f-af0a-400e-90ea-1097080fb096\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-44lfv" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.828834 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/80d114e5-b1d1-496c-a0c1-3eeb8d2f67c8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-559887c586-xfmsd\" (UID: \"80d114e5-b1d1-496c-a0c1-3eeb8d2f67c8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-xfmsd" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.828853 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/80d114e5-b1d1-496c-a0c1-3eeb8d2f67c8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-559887c586-xfmsd\" (UID: 
\"80d114e5-b1d1-496c-a0c1-3eeb8d2f67c8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-xfmsd" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.828888 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/52b04c6b-da35-4f2a-a5f2-06370a59da78-observability-operator-tls\") pod \"observability-operator-59bdc8b94-j8xx6\" (UID: \"52b04c6b-da35-4f2a-a5f2-06370a59da78\") " pod="openshift-operators/observability-operator-59bdc8b94-j8xx6" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.828914 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8dqf\" (UniqueName: \"kubernetes.io/projected/52b04c6b-da35-4f2a-a5f2-06370a59da78-kube-api-access-w8dqf\") pod \"observability-operator-59bdc8b94-j8xx6\" (UID: \"52b04c6b-da35-4f2a-a5f2-06370a59da78\") " pod="openshift-operators/observability-operator-59bdc8b94-j8xx6" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.831829 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c294d09f-af0a-400e-90ea-1097080fb096-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-559887c586-44lfv\" (UID: \"c294d09f-af0a-400e-90ea-1097080fb096\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-44lfv" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.831930 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c294d09f-af0a-400e-90ea-1097080fb096-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-559887c586-44lfv\" (UID: \"c294d09f-af0a-400e-90ea-1097080fb096\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-44lfv" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.832499 
4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/80d114e5-b1d1-496c-a0c1-3eeb8d2f67c8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-559887c586-xfmsd\" (UID: \"80d114e5-b1d1-496c-a0c1-3eeb8d2f67c8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-xfmsd" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.832639 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/80d114e5-b1d1-496c-a0c1-3eeb8d2f67c8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-559887c586-xfmsd\" (UID: \"80d114e5-b1d1-496c-a0c1-3eeb8d2f67c8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-xfmsd" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.876558 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-44lfv" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.884805 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-xfmsd" Mar 09 18:35:55 crc kubenswrapper[4821]: E0309 18:35:55.900722 4821 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-559887c586-44lfv_openshift-operators_c294d09f-af0a-400e-90ea-1097080fb096_0(39d89625368d27ea1773c64425ccc1c64221163f322e58539b7f814e45d16515): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 09 18:35:55 crc kubenswrapper[4821]: E0309 18:35:55.900875 4821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-559887c586-44lfv_openshift-operators_c294d09f-af0a-400e-90ea-1097080fb096_0(39d89625368d27ea1773c64425ccc1c64221163f322e58539b7f814e45d16515): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-44lfv" Mar 09 18:35:55 crc kubenswrapper[4821]: E0309 18:35:55.900955 4821 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-559887c586-44lfv_openshift-operators_c294d09f-af0a-400e-90ea-1097080fb096_0(39d89625368d27ea1773c64425ccc1c64221163f322e58539b7f814e45d16515): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-44lfv" Mar 09 18:35:55 crc kubenswrapper[4821]: E0309 18:35:55.901049 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-559887c586-44lfv_openshift-operators(c294d09f-af0a-400e-90ea-1097080fb096)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-559887c586-44lfv_openshift-operators(c294d09f-af0a-400e-90ea-1097080fb096)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-559887c586-44lfv_openshift-operators_c294d09f-af0a-400e-90ea-1097080fb096_0(39d89625368d27ea1773c64425ccc1c64221163f322e58539b7f814e45d16515): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-44lfv" podUID="c294d09f-af0a-400e-90ea-1097080fb096" Mar 09 18:35:55 crc kubenswrapper[4821]: E0309 18:35:55.911701 4821 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-559887c586-xfmsd_openshift-operators_80d114e5-b1d1-496c-a0c1-3eeb8d2f67c8_0(324b4b8038fbe6bbf55da3a5abe27734a21682f47c4e90aceaf98cc14629fd3a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 09 18:35:55 crc kubenswrapper[4821]: E0309 18:35:55.911778 4821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-559887c586-xfmsd_openshift-operators_80d114e5-b1d1-496c-a0c1-3eeb8d2f67c8_0(324b4b8038fbe6bbf55da3a5abe27734a21682f47c4e90aceaf98cc14629fd3a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-xfmsd" Mar 09 18:35:55 crc kubenswrapper[4821]: E0309 18:35:55.911807 4821 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-559887c586-xfmsd_openshift-operators_80d114e5-b1d1-496c-a0c1-3eeb8d2f67c8_0(324b4b8038fbe6bbf55da3a5abe27734a21682f47c4e90aceaf98cc14629fd3a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-xfmsd" Mar 09 18:35:55 crc kubenswrapper[4821]: E0309 18:35:55.911865 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-559887c586-xfmsd_openshift-operators(80d114e5-b1d1-496c-a0c1-3eeb8d2f67c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-559887c586-xfmsd_openshift-operators(80d114e5-b1d1-496c-a0c1-3eeb8d2f67c8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-559887c586-xfmsd_openshift-operators_80d114e5-b1d1-496c-a0c1-3eeb8d2f67c8_0(324b4b8038fbe6bbf55da3a5abe27734a21682f47c4e90aceaf98cc14629fd3a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-xfmsd" podUID="80d114e5-b1d1-496c-a0c1-3eeb8d2f67c8" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.929975 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/94baa4ca-adf1-461f-a309-a1639aafd708-openshift-service-ca\") pod \"perses-operator-5bf474d74f-p679h\" (UID: \"94baa4ca-adf1-461f-a309-a1639aafd708\") " pod="openshift-operators/perses-operator-5bf474d74f-p679h" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.930101 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbvn9\" (UniqueName: \"kubernetes.io/projected/94baa4ca-adf1-461f-a309-a1639aafd708-kube-api-access-kbvn9\") pod \"perses-operator-5bf474d74f-p679h\" (UID: \"94baa4ca-adf1-461f-a309-a1639aafd708\") " pod="openshift-operators/perses-operator-5bf474d74f-p679h" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.930202 4821 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/52b04c6b-da35-4f2a-a5f2-06370a59da78-observability-operator-tls\") pod \"observability-operator-59bdc8b94-j8xx6\" (UID: \"52b04c6b-da35-4f2a-a5f2-06370a59da78\") " pod="openshift-operators/observability-operator-59bdc8b94-j8xx6" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.930279 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8dqf\" (UniqueName: \"kubernetes.io/projected/52b04c6b-da35-4f2a-a5f2-06370a59da78-kube-api-access-w8dqf\") pod \"observability-operator-59bdc8b94-j8xx6\" (UID: \"52b04c6b-da35-4f2a-a5f2-06370a59da78\") " pod="openshift-operators/observability-operator-59bdc8b94-j8xx6" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.933612 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/52b04c6b-da35-4f2a-a5f2-06370a59da78-observability-operator-tls\") pod \"observability-operator-59bdc8b94-j8xx6\" (UID: \"52b04c6b-da35-4f2a-a5f2-06370a59da78\") " pod="openshift-operators/observability-operator-59bdc8b94-j8xx6" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.948907 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8dqf\" (UniqueName: \"kubernetes.io/projected/52b04c6b-da35-4f2a-a5f2-06370a59da78-kube-api-access-w8dqf\") pod \"observability-operator-59bdc8b94-j8xx6\" (UID: \"52b04c6b-da35-4f2a-a5f2-06370a59da78\") " pod="openshift-operators/observability-operator-59bdc8b94-j8xx6" Mar 09 18:35:55 crc kubenswrapper[4821]: I0309 18:35:55.993582 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-j8xx6" Mar 09 18:35:56 crc kubenswrapper[4821]: E0309 18:35:56.012113 4821 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-j8xx6_openshift-operators_52b04c6b-da35-4f2a-a5f2-06370a59da78_0(494aba331ab4b2d5a27f8d0292af5378cb43131ea6b805056f9dfbf9a0d1ca0e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 09 18:35:56 crc kubenswrapper[4821]: E0309 18:35:56.012302 4821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-j8xx6_openshift-operators_52b04c6b-da35-4f2a-a5f2-06370a59da78_0(494aba331ab4b2d5a27f8d0292af5378cb43131ea6b805056f9dfbf9a0d1ca0e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-j8xx6" Mar 09 18:35:56 crc kubenswrapper[4821]: E0309 18:35:56.012490 4821 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-j8xx6_openshift-operators_52b04c6b-da35-4f2a-a5f2-06370a59da78_0(494aba331ab4b2d5a27f8d0292af5378cb43131ea6b805056f9dfbf9a0d1ca0e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-j8xx6" Mar 09 18:35:56 crc kubenswrapper[4821]: E0309 18:35:56.012698 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-j8xx6_openshift-operators(52b04c6b-da35-4f2a-a5f2-06370a59da78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-j8xx6_openshift-operators(52b04c6b-da35-4f2a-a5f2-06370a59da78)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-j8xx6_openshift-operators_52b04c6b-da35-4f2a-a5f2-06370a59da78_0(494aba331ab4b2d5a27f8d0292af5378cb43131ea6b805056f9dfbf9a0d1ca0e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-j8xx6" podUID="52b04c6b-da35-4f2a-a5f2-06370a59da78" Mar 09 18:35:56 crc kubenswrapper[4821]: I0309 18:35:56.032089 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbvn9\" (UniqueName: \"kubernetes.io/projected/94baa4ca-adf1-461f-a309-a1639aafd708-kube-api-access-kbvn9\") pod \"perses-operator-5bf474d74f-p679h\" (UID: \"94baa4ca-adf1-461f-a309-a1639aafd708\") " pod="openshift-operators/perses-operator-5bf474d74f-p679h" Mar 09 18:35:56 crc kubenswrapper[4821]: I0309 18:35:56.032803 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/94baa4ca-adf1-461f-a309-a1639aafd708-openshift-service-ca\") pod \"perses-operator-5bf474d74f-p679h\" (UID: \"94baa4ca-adf1-461f-a309-a1639aafd708\") " pod="openshift-operators/perses-operator-5bf474d74f-p679h" Mar 09 18:35:56 crc kubenswrapper[4821]: I0309 18:35:56.033859 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/94baa4ca-adf1-461f-a309-a1639aafd708-openshift-service-ca\") pod \"perses-operator-5bf474d74f-p679h\" (UID: \"94baa4ca-adf1-461f-a309-a1639aafd708\") " pod="openshift-operators/perses-operator-5bf474d74f-p679h" Mar 09 18:35:56 crc kubenswrapper[4821]: I0309 18:35:56.056937 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbvn9\" (UniqueName: \"kubernetes.io/projected/94baa4ca-adf1-461f-a309-a1639aafd708-kube-api-access-kbvn9\") pod \"perses-operator-5bf474d74f-p679h\" (UID: \"94baa4ca-adf1-461f-a309-a1639aafd708\") " pod="openshift-operators/perses-operator-5bf474d74f-p679h" Mar 09 18:35:56 crc kubenswrapper[4821]: I0309 18:35:56.096962 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-p679h" Mar 09 18:35:56 crc kubenswrapper[4821]: E0309 18:35:56.119983 4821 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-p679h_openshift-operators_94baa4ca-adf1-461f-a309-a1639aafd708_0(991443aa29d8b589a8e0b23483773b9d0d3d5795c45c23e6982a2d8cdb994d13): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 09 18:35:56 crc kubenswrapper[4821]: E0309 18:35:56.120121 4821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-p679h_openshift-operators_94baa4ca-adf1-461f-a309-a1639aafd708_0(991443aa29d8b589a8e0b23483773b9d0d3d5795c45c23e6982a2d8cdb994d13): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-p679h" Mar 09 18:35:56 crc kubenswrapper[4821]: E0309 18:35:56.120207 4821 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-p679h_openshift-operators_94baa4ca-adf1-461f-a309-a1639aafd708_0(991443aa29d8b589a8e0b23483773b9d0d3d5795c45c23e6982a2d8cdb994d13): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-p679h" Mar 09 18:35:56 crc kubenswrapper[4821]: E0309 18:35:56.120364 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-p679h_openshift-operators(94baa4ca-adf1-461f-a309-a1639aafd708)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-p679h_openshift-operators(94baa4ca-adf1-461f-a309-a1639aafd708)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-p679h_openshift-operators_94baa4ca-adf1-461f-a309-a1639aafd708_0(991443aa29d8b589a8e0b23483773b9d0d3d5795c45c23e6982a2d8cdb994d13): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-p679h" podUID="94baa4ca-adf1-461f-a309-a1639aafd708" Mar 09 18:35:56 crc kubenswrapper[4821]: I0309 18:35:56.880689 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" event={"ID":"20e65f74-ecab-4bad-b2ea-09c0fac9406d","Type":"ContainerStarted","Data":"9e3d3b7335795edcfae8aa92ee055070f1b1ab96d6220fe3aaf35a6254d9fdb8"} Mar 09 18:35:59 crc kubenswrapper[4821]: I0309 18:35:59.899900 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" event={"ID":"20e65f74-ecab-4bad-b2ea-09c0fac9406d","Type":"ContainerStarted","Data":"88edfb8019440e62f9c7d13ebd5f28915bdceefea0b5ca16fae0f9dd707b3b75"} Mar 09 18:35:59 crc kubenswrapper[4821]: I0309 18:35:59.900281 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:59 crc kubenswrapper[4821]: I0309 18:35:59.913712 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 09 18:35:59 crc kubenswrapper[4821]: I0309 18:35:59.913784 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 09 18:35:59 crc kubenswrapper[4821]: I0309 18:35:59.951259 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:35:59 crc kubenswrapper[4821]: I0309 18:35:59.980708 4821 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" podStartSLOduration=7.980685703 podStartE2EDuration="7.980685703s" podCreationTimestamp="2026-03-09 18:35:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:35:59.977503607 +0000 UTC m=+697.138879483" watchObservedRunningTime="2026-03-09 18:35:59.980685703 +0000 UTC m=+697.142061559" Mar 09 18:36:00 crc kubenswrapper[4821]: I0309 18:36:00.128433 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29551356-zfwvf"] Mar 09 18:36:00 crc kubenswrapper[4821]: I0309 18:36:00.129205 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551356-zfwvf" Mar 09 18:36:00 crc kubenswrapper[4821]: I0309 18:36:00.131381 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-mxq7c" Mar 09 18:36:00 crc kubenswrapper[4821]: I0309 18:36:00.131803 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 09 18:36:00 crc kubenswrapper[4821]: I0309 18:36:00.132535 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 09 18:36:00 crc kubenswrapper[4821]: I0309 18:36:00.189261 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wbqj\" (UniqueName: \"kubernetes.io/projected/df914183-d942-4bef-91f2-14579dc3290d-kube-api-access-9wbqj\") pod \"auto-csr-approver-29551356-zfwvf\" (UID: \"df914183-d942-4bef-91f2-14579dc3290d\") " pod="openshift-infra/auto-csr-approver-29551356-zfwvf" Mar 09 18:36:00 crc kubenswrapper[4821]: I0309 18:36:00.290673 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9wbqj\" (UniqueName: \"kubernetes.io/projected/df914183-d942-4bef-91f2-14579dc3290d-kube-api-access-9wbqj\") pod \"auto-csr-approver-29551356-zfwvf\" (UID: \"df914183-d942-4bef-91f2-14579dc3290d\") " pod="openshift-infra/auto-csr-approver-29551356-zfwvf" Mar 09 18:36:00 crc kubenswrapper[4821]: I0309 18:36:00.310133 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wbqj\" (UniqueName: \"kubernetes.io/projected/df914183-d942-4bef-91f2-14579dc3290d-kube-api-access-9wbqj\") pod \"auto-csr-approver-29551356-zfwvf\" (UID: \"df914183-d942-4bef-91f2-14579dc3290d\") " pod="openshift-infra/auto-csr-approver-29551356-zfwvf" Mar 09 18:36:00 crc kubenswrapper[4821]: I0309 18:36:00.443798 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551356-zfwvf" Mar 09 18:36:00 crc kubenswrapper[4821]: E0309 18:36:00.465269 4821 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29551356-zfwvf_openshift-infra_df914183-d942-4bef-91f2-14579dc3290d_0(16f19590e5a4c77710e956e1a92b430afa05680e80f211bebfc5354c075699e5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 09 18:36:00 crc kubenswrapper[4821]: E0309 18:36:00.465439 4821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29551356-zfwvf_openshift-infra_df914183-d942-4bef-91f2-14579dc3290d_0(16f19590e5a4c77710e956e1a92b430afa05680e80f211bebfc5354c075699e5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-infra/auto-csr-approver-29551356-zfwvf" Mar 09 18:36:00 crc kubenswrapper[4821]: E0309 18:36:00.465512 4821 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29551356-zfwvf_openshift-infra_df914183-d942-4bef-91f2-14579dc3290d_0(16f19590e5a4c77710e956e1a92b430afa05680e80f211bebfc5354c075699e5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-infra/auto-csr-approver-29551356-zfwvf" Mar 09 18:36:00 crc kubenswrapper[4821]: E0309 18:36:00.465642 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"auto-csr-approver-29551356-zfwvf_openshift-infra(df914183-d942-4bef-91f2-14579dc3290d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"auto-csr-approver-29551356-zfwvf_openshift-infra(df914183-d942-4bef-91f2-14579dc3290d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29551356-zfwvf_openshift-infra_df914183-d942-4bef-91f2-14579dc3290d_0(16f19590e5a4c77710e956e1a92b430afa05680e80f211bebfc5354c075699e5): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-infra/auto-csr-approver-29551356-zfwvf" podUID="df914183-d942-4bef-91f2-14579dc3290d" Mar 09 18:36:00 crc kubenswrapper[4821]: I0309 18:36:00.905599 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:36:00 crc kubenswrapper[4821]: I0309 18:36:00.905914 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:36:00 crc kubenswrapper[4821]: I0309 18:36:00.933650 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd" Mar 09 18:36:00 crc kubenswrapper[4821]: I0309 18:36:00.938771 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-p679h"] Mar 09 18:36:00 crc kubenswrapper[4821]: I0309 18:36:00.938894 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-p679h" Mar 09 18:36:00 crc kubenswrapper[4821]: I0309 18:36:00.939266 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-p679h" Mar 09 18:36:00 crc kubenswrapper[4821]: I0309 18:36:00.958516 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551356-zfwvf"] Mar 09 18:36:00 crc kubenswrapper[4821]: I0309 18:36:00.958614 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551356-zfwvf" Mar 09 18:36:00 crc kubenswrapper[4821]: I0309 18:36:00.958998 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29551356-zfwvf" Mar 09 18:36:00 crc kubenswrapper[4821]: I0309 18:36:00.989452 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-cqfq5"] Mar 09 18:36:00 crc kubenswrapper[4821]: I0309 18:36:00.989574 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cqfq5" Mar 09 18:36:00 crc kubenswrapper[4821]: I0309 18:36:00.989925 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cqfq5" Mar 09 18:36:00 crc kubenswrapper[4821]: I0309 18:36:00.992746 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-j8xx6"] Mar 09 18:36:00 crc kubenswrapper[4821]: I0309 18:36:00.992854 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-j8xx6" Mar 09 18:36:00 crc kubenswrapper[4821]: I0309 18:36:00.993271 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-j8xx6" Mar 09 18:36:00 crc kubenswrapper[4821]: E0309 18:36:00.994591 4821 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-p679h_openshift-operators_94baa4ca-adf1-461f-a309-a1639aafd708_0(6aee5c42b041c59cd4ab77a45081a1ed7bd9efc13ee13ee6f392e243871ac09a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 09 18:36:00 crc kubenswrapper[4821]: E0309 18:36:00.994630 4821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-p679h_openshift-operators_94baa4ca-adf1-461f-a309-a1639aafd708_0(6aee5c42b041c59cd4ab77a45081a1ed7bd9efc13ee13ee6f392e243871ac09a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-p679h" Mar 09 18:36:00 crc kubenswrapper[4821]: E0309 18:36:00.994648 4821 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-p679h_openshift-operators_94baa4ca-adf1-461f-a309-a1639aafd708_0(6aee5c42b041c59cd4ab77a45081a1ed7bd9efc13ee13ee6f392e243871ac09a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-p679h" Mar 09 18:36:00 crc kubenswrapper[4821]: E0309 18:36:00.994679 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-p679h_openshift-operators(94baa4ca-adf1-461f-a309-a1639aafd708)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-p679h_openshift-operators(94baa4ca-adf1-461f-a309-a1639aafd708)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-p679h_openshift-operators_94baa4ca-adf1-461f-a309-a1639aafd708_0(6aee5c42b041c59cd4ab77a45081a1ed7bd9efc13ee13ee6f392e243871ac09a): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-p679h" podUID="94baa4ca-adf1-461f-a309-a1639aafd708" Mar 09 18:36:01 crc kubenswrapper[4821]: I0309 18:36:01.005410 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-xfmsd"] Mar 09 18:36:01 crc kubenswrapper[4821]: I0309 18:36:01.005540 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-xfmsd" Mar 09 18:36:01 crc kubenswrapper[4821]: I0309 18:36:01.005976 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-xfmsd" Mar 09 18:36:01 crc kubenswrapper[4821]: I0309 18:36:01.011108 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-44lfv"] Mar 09 18:36:01 crc kubenswrapper[4821]: I0309 18:36:01.011250 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-44lfv" Mar 09 18:36:01 crc kubenswrapper[4821]: I0309 18:36:01.011937 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-44lfv" Mar 09 18:36:01 crc kubenswrapper[4821]: E0309 18:36:01.024271 4821 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29551356-zfwvf_openshift-infra_df914183-d942-4bef-91f2-14579dc3290d_0(5c82fbf800898e9ee208565e5f2031bc1edd5162ef5aba4577ea2a5a4f3ac8fb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 09 18:36:01 crc kubenswrapper[4821]: E0309 18:36:01.024431 4821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29551356-zfwvf_openshift-infra_df914183-d942-4bef-91f2-14579dc3290d_0(5c82fbf800898e9ee208565e5f2031bc1edd5162ef5aba4577ea2a5a4f3ac8fb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-infra/auto-csr-approver-29551356-zfwvf" Mar 09 18:36:01 crc kubenswrapper[4821]: E0309 18:36:01.024487 4821 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29551356-zfwvf_openshift-infra_df914183-d942-4bef-91f2-14579dc3290d_0(5c82fbf800898e9ee208565e5f2031bc1edd5162ef5aba4577ea2a5a4f3ac8fb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-infra/auto-csr-approver-29551356-zfwvf" Mar 09 18:36:01 crc kubenswrapper[4821]: E0309 18:36:01.024566 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"auto-csr-approver-29551356-zfwvf_openshift-infra(df914183-d942-4bef-91f2-14579dc3290d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"auto-csr-approver-29551356-zfwvf_openshift-infra(df914183-d942-4bef-91f2-14579dc3290d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29551356-zfwvf_openshift-infra_df914183-d942-4bef-91f2-14579dc3290d_0(5c82fbf800898e9ee208565e5f2031bc1edd5162ef5aba4577ea2a5a4f3ac8fb): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-infra/auto-csr-approver-29551356-zfwvf" podUID="df914183-d942-4bef-91f2-14579dc3290d" Mar 09 18:36:01 crc kubenswrapper[4821]: E0309 18:36:01.037564 4821 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-cqfq5_openshift-operators_85b873a4-96da-407a-b4af-30ba3aa97519_0(465a304cee9b2d876c2335f6e160d3c05b02a7c3140a7476486ae6414faa637c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 09 18:36:01 crc kubenswrapper[4821]: E0309 18:36:01.037634 4821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-cqfq5_openshift-operators_85b873a4-96da-407a-b4af-30ba3aa97519_0(465a304cee9b2d876c2335f6e160d3c05b02a7c3140a7476486ae6414faa637c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cqfq5" Mar 09 18:36:01 crc kubenswrapper[4821]: E0309 18:36:01.037665 4821 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-cqfq5_openshift-operators_85b873a4-96da-407a-b4af-30ba3aa97519_0(465a304cee9b2d876c2335f6e160d3c05b02a7c3140a7476486ae6414faa637c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cqfq5" Mar 09 18:36:01 crc kubenswrapper[4821]: E0309 18:36:01.037718 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-cqfq5_openshift-operators(85b873a4-96da-407a-b4af-30ba3aa97519)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-cqfq5_openshift-operators(85b873a4-96da-407a-b4af-30ba3aa97519)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-cqfq5_openshift-operators_85b873a4-96da-407a-b4af-30ba3aa97519_0(465a304cee9b2d876c2335f6e160d3c05b02a7c3140a7476486ae6414faa637c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cqfq5" podUID="85b873a4-96da-407a-b4af-30ba3aa97519" Mar 09 18:36:01 crc kubenswrapper[4821]: E0309 18:36:01.044941 4821 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-j8xx6_openshift-operators_52b04c6b-da35-4f2a-a5f2-06370a59da78_0(02a257dd35498c5088e63e5b4c17d567c5d10b34b233bf831fcdaa1e8d8bfea8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 09 18:36:01 crc kubenswrapper[4821]: E0309 18:36:01.044998 4821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-j8xx6_openshift-operators_52b04c6b-da35-4f2a-a5f2-06370a59da78_0(02a257dd35498c5088e63e5b4c17d567c5d10b34b233bf831fcdaa1e8d8bfea8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-j8xx6" Mar 09 18:36:01 crc kubenswrapper[4821]: E0309 18:36:01.045020 4821 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-j8xx6_openshift-operators_52b04c6b-da35-4f2a-a5f2-06370a59da78_0(02a257dd35498c5088e63e5b4c17d567c5d10b34b233bf831fcdaa1e8d8bfea8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-j8xx6" Mar 09 18:36:01 crc kubenswrapper[4821]: E0309 18:36:01.045065 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-j8xx6_openshift-operators(52b04c6b-da35-4f2a-a5f2-06370a59da78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-j8xx6_openshift-operators(52b04c6b-da35-4f2a-a5f2-06370a59da78)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-j8xx6_openshift-operators_52b04c6b-da35-4f2a-a5f2-06370a59da78_0(02a257dd35498c5088e63e5b4c17d567c5d10b34b233bf831fcdaa1e8d8bfea8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-j8xx6" podUID="52b04c6b-da35-4f2a-a5f2-06370a59da78" Mar 09 18:36:01 crc kubenswrapper[4821]: E0309 18:36:01.050849 4821 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-559887c586-44lfv_openshift-operators_c294d09f-af0a-400e-90ea-1097080fb096_0(50bda1ea7b137ae20d595bd62813d07c33a5894ac97f4b08f0e17822f491a60b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 09 18:36:01 crc kubenswrapper[4821]: E0309 18:36:01.050931 4821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-559887c586-44lfv_openshift-operators_c294d09f-af0a-400e-90ea-1097080fb096_0(50bda1ea7b137ae20d595bd62813d07c33a5894ac97f4b08f0e17822f491a60b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-44lfv"
Mar 09 18:36:01 crc kubenswrapper[4821]: E0309 18:36:01.050969 4821 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-559887c586-44lfv_openshift-operators_c294d09f-af0a-400e-90ea-1097080fb096_0(50bda1ea7b137ae20d595bd62813d07c33a5894ac97f4b08f0e17822f491a60b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-44lfv"
Mar 09 18:36:01 crc kubenswrapper[4821]: E0309 18:36:01.051026 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-559887c586-44lfv_openshift-operators(c294d09f-af0a-400e-90ea-1097080fb096)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-559887c586-44lfv_openshift-operators(c294d09f-af0a-400e-90ea-1097080fb096)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-559887c586-44lfv_openshift-operators_c294d09f-af0a-400e-90ea-1097080fb096_0(50bda1ea7b137ae20d595bd62813d07c33a5894ac97f4b08f0e17822f491a60b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-44lfv" podUID="c294d09f-af0a-400e-90ea-1097080fb096"
Mar 09 18:36:01 crc kubenswrapper[4821]: E0309 18:36:01.059765 4821 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-559887c586-xfmsd_openshift-operators_80d114e5-b1d1-496c-a0c1-3eeb8d2f67c8_0(b8db11c244bfbad6333004a952ffcfac995240a0ccbc1175d5c44c2f8b9de6e2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Mar 09 18:36:01 crc kubenswrapper[4821]: E0309 18:36:01.059842 4821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-559887c586-xfmsd_openshift-operators_80d114e5-b1d1-496c-a0c1-3eeb8d2f67c8_0(b8db11c244bfbad6333004a952ffcfac995240a0ccbc1175d5c44c2f8b9de6e2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-xfmsd"
Mar 09 18:36:01 crc kubenswrapper[4821]: E0309 18:36:01.059877 4821 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-559887c586-xfmsd_openshift-operators_80d114e5-b1d1-496c-a0c1-3eeb8d2f67c8_0(b8db11c244bfbad6333004a952ffcfac995240a0ccbc1175d5c44c2f8b9de6e2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-xfmsd"
Mar 09 18:36:01 crc kubenswrapper[4821]: E0309 18:36:01.059944 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-559887c586-xfmsd_openshift-operators(80d114e5-b1d1-496c-a0c1-3eeb8d2f67c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-559887c586-xfmsd_openshift-operators(80d114e5-b1d1-496c-a0c1-3eeb8d2f67c8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-559887c586-xfmsd_openshift-operators_80d114e5-b1d1-496c-a0c1-3eeb8d2f67c8_0(b8db11c244bfbad6333004a952ffcfac995240a0ccbc1175d5c44c2f8b9de6e2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-xfmsd" podUID="80d114e5-b1d1-496c-a0c1-3eeb8d2f67c8"
Mar 09 18:36:11 crc kubenswrapper[4821]: I0309 18:36:11.550681 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-j8xx6"
Mar 09 18:36:11 crc kubenswrapper[4821]: I0309 18:36:11.551255 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-j8xx6"
Mar 09 18:36:11 crc kubenswrapper[4821]: I0309 18:36:11.780551 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-j8xx6"]
Mar 09 18:36:11 crc kubenswrapper[4821]: W0309 18:36:11.789514 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52b04c6b_da35_4f2a_a5f2_06370a59da78.slice/crio-b7a9597e2bf1d027253c3d9f377d64684e78239f557b55e6146096ef90a6820f WatchSource:0}: Error finding container b7a9597e2bf1d027253c3d9f377d64684e78239f557b55e6146096ef90a6820f: Status 404 returned error can't find the container with id b7a9597e2bf1d027253c3d9f377d64684e78239f557b55e6146096ef90a6820f
Mar 09 18:36:11 crc kubenswrapper[4821]: I0309 18:36:11.977788 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-j8xx6" event={"ID":"52b04c6b-da35-4f2a-a5f2-06370a59da78","Type":"ContainerStarted","Data":"b7a9597e2bf1d027253c3d9f377d64684e78239f557b55e6146096ef90a6820f"}
Mar 09 18:36:13 crc kubenswrapper[4821]: I0309 18:36:13.553825 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551356-zfwvf"
Mar 09 18:36:13 crc kubenswrapper[4821]: I0309 18:36:13.556366 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551356-zfwvf"
Mar 09 18:36:13 crc kubenswrapper[4821]: I0309 18:36:13.782518 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551356-zfwvf"]
Mar 09 18:36:13 crc kubenswrapper[4821]: I0309 18:36:13.990375 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551356-zfwvf" event={"ID":"df914183-d942-4bef-91f2-14579dc3290d","Type":"ContainerStarted","Data":"062a9c9311eb9b3d4e0ea71019e896718a5901fbf18d07bded6b85832452ecca"}
Mar 09 18:36:14 crc kubenswrapper[4821]: I0309 18:36:14.551540 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-44lfv"
Mar 09 18:36:14 crc kubenswrapper[4821]: I0309 18:36:14.551836 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-p679h"
Mar 09 18:36:14 crc kubenswrapper[4821]: I0309 18:36:14.552005 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-44lfv"
Mar 09 18:36:14 crc kubenswrapper[4821]: I0309 18:36:14.552227 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-p679h"
Mar 09 18:36:14 crc kubenswrapper[4821]: I0309 18:36:14.552429 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cqfq5"
Mar 09 18:36:14 crc kubenswrapper[4821]: I0309 18:36:14.552694 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cqfq5"
Mar 09 18:36:15 crc kubenswrapper[4821]: I0309 18:36:15.567824 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-xfmsd"
Mar 09 18:36:15 crc kubenswrapper[4821]: I0309 18:36:15.579023 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-xfmsd"
Mar 09 18:36:19 crc kubenswrapper[4821]: I0309 18:36:19.038671 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-j8xx6" event={"ID":"52b04c6b-da35-4f2a-a5f2-06370a59da78","Type":"ContainerStarted","Data":"03773729b0188a9a39b5e842a9636646288cd91c2b0433aa52d6c8957875c54c"}
Mar 09 18:36:19 crc kubenswrapper[4821]: I0309 18:36:19.039230 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-j8xx6"
Mar 09 18:36:19 crc kubenswrapper[4821]: I0309 18:36:19.040435 4821 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-j8xx6 container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.9:8081/healthz\": dial tcp 10.217.0.9:8081: connect: connection refused" start-of-body=
Mar 09 18:36:19 crc kubenswrapper[4821]: I0309 18:36:19.040480 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-j8xx6" podUID="52b04c6b-da35-4f2a-a5f2-06370a59da78" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.9:8081/healthz\": dial tcp 10.217.0.9:8081: connect: connection refused"
Mar 09 18:36:19 crc kubenswrapper[4821]: I0309 18:36:19.041032 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551356-zfwvf" event={"ID":"df914183-d942-4bef-91f2-14579dc3290d","Type":"ContainerStarted","Data":"a38788292adee63f271a571b5894e4e71cf4388d4662128eb679d97c041ff1cf"}
Mar 09 18:36:19 crc kubenswrapper[4821]: I0309 18:36:19.063134 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-j8xx6" podStartSLOduration=17.221083869 podStartE2EDuration="24.063110779s" podCreationTimestamp="2026-03-09 18:35:55 +0000 UTC" firstStartedPulling="2026-03-09 18:36:11.791500801 +0000 UTC m=+708.952876657" lastFinishedPulling="2026-03-09 18:36:18.633527711 +0000 UTC m=+715.794903567" observedRunningTime="2026-03-09 18:36:19.057192557 +0000 UTC m=+716.218568433" watchObservedRunningTime="2026-03-09 18:36:19.063110779 +0000 UTC m=+716.224486655"
Mar 09 18:36:19 crc kubenswrapper[4821]: I0309 18:36:19.077135 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-cqfq5"]
Mar 09 18:36:19 crc kubenswrapper[4821]: I0309 18:36:19.079644 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29551356-zfwvf" podStartSLOduration=14.229658269 podStartE2EDuration="19.079626499s" podCreationTimestamp="2026-03-09 18:36:00 +0000 UTC" firstStartedPulling="2026-03-09 18:36:13.785050881 +0000 UTC m=+710.946426737" lastFinishedPulling="2026-03-09 18:36:18.635019111 +0000 UTC m=+715.796394967" observedRunningTime="2026-03-09 18:36:19.078748705 +0000 UTC m=+716.240124561" watchObservedRunningTime="2026-03-09 18:36:19.079626499 +0000 UTC m=+716.241002355"
Mar 09 18:36:19 crc kubenswrapper[4821]: W0309 18:36:19.082226 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85b873a4_96da_407a_b4af_30ba3aa97519.slice/crio-8708d8a8776cfa5168f7e2f0df8706f933db69855a489e19064c54dc33672fb3 WatchSource:0}: Error finding container 8708d8a8776cfa5168f7e2f0df8706f933db69855a489e19064c54dc33672fb3: Status 404 returned error can't find the container with id 8708d8a8776cfa5168f7e2f0df8706f933db69855a489e19064c54dc33672fb3
Mar 09 18:36:19 crc kubenswrapper[4821]: I0309 18:36:19.086853 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-xfmsd"]
Mar 09 18:36:19 crc kubenswrapper[4821]: W0309 18:36:19.096297 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80d114e5_b1d1_496c_a0c1_3eeb8d2f67c8.slice/crio-81bf9b038db7139b886ab80295fd4af532160b56805daaac72e814aee35fc245 WatchSource:0}: Error finding container 81bf9b038db7139b886ab80295fd4af532160b56805daaac72e814aee35fc245: Status 404 returned error can't find the container with id 81bf9b038db7139b886ab80295fd4af532160b56805daaac72e814aee35fc245
Mar 09 18:36:19 crc kubenswrapper[4821]: I0309 18:36:19.149643 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-44lfv"]
Mar 09 18:36:19 crc kubenswrapper[4821]: W0309 18:36:19.152477 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc294d09f_af0a_400e_90ea_1097080fb096.slice/crio-4788baa476fe93992bb3782422216eae52da9c3dfada0237a26acd3a1f1848de WatchSource:0}: Error finding container 4788baa476fe93992bb3782422216eae52da9c3dfada0237a26acd3a1f1848de: Status 404 returned error can't find the container with id 4788baa476fe93992bb3782422216eae52da9c3dfada0237a26acd3a1f1848de
Mar 09 18:36:19 crc kubenswrapper[4821]: I0309 18:36:19.160280 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-p679h"]
Mar 09 18:36:19 crc kubenswrapper[4821]: W0309 18:36:19.166423 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94baa4ca_adf1_461f_a309_a1639aafd708.slice/crio-a27049c389179b9d3f7784922de150e54fcc3251e29df7acee0e4a7dfa328740 WatchSource:0}: Error finding container a27049c389179b9d3f7784922de150e54fcc3251e29df7acee0e4a7dfa328740: Status 404 returned error can't find the container with id a27049c389179b9d3f7784922de150e54fcc3251e29df7acee0e4a7dfa328740
Mar 09 18:36:20 crc kubenswrapper[4821]: I0309 18:36:20.053412 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-p679h" event={"ID":"94baa4ca-adf1-461f-a309-a1639aafd708","Type":"ContainerStarted","Data":"a27049c389179b9d3f7784922de150e54fcc3251e29df7acee0e4a7dfa328740"}
Mar 09 18:36:20 crc kubenswrapper[4821]: I0309 18:36:20.054693 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-44lfv" event={"ID":"c294d09f-af0a-400e-90ea-1097080fb096","Type":"ContainerStarted","Data":"4788baa476fe93992bb3782422216eae52da9c3dfada0237a26acd3a1f1848de"}
Mar 09 18:36:20 crc kubenswrapper[4821]: I0309 18:36:20.055631 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-xfmsd" event={"ID":"80d114e5-b1d1-496c-a0c1-3eeb8d2f67c8","Type":"ContainerStarted","Data":"81bf9b038db7139b886ab80295fd4af532160b56805daaac72e814aee35fc245"}
Mar 09 18:36:20 crc kubenswrapper[4821]: I0309 18:36:20.056775 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cqfq5" event={"ID":"85b873a4-96da-407a-b4af-30ba3aa97519","Type":"ContainerStarted","Data":"8708d8a8776cfa5168f7e2f0df8706f933db69855a489e19064c54dc33672fb3"}
Mar 09 18:36:20 crc kubenswrapper[4821]: I0309 18:36:20.058380 4821 generic.go:334] "Generic (PLEG): container finished" podID="df914183-d942-4bef-91f2-14579dc3290d" containerID="a38788292adee63f271a571b5894e4e71cf4388d4662128eb679d97c041ff1cf" exitCode=0
Mar 09 18:36:20 crc kubenswrapper[4821]: I0309 18:36:20.058489 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551356-zfwvf" event={"ID":"df914183-d942-4bef-91f2-14579dc3290d","Type":"ContainerDied","Data":"a38788292adee63f271a571b5894e4e71cf4388d4662128eb679d97c041ff1cf"}
Mar 09 18:36:20 crc kubenswrapper[4821]: I0309 18:36:20.060121 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-j8xx6"
Mar 09 18:36:22 crc kubenswrapper[4821]: I0309 18:36:22.884230 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hl2fd"
Mar 09 18:36:23 crc kubenswrapper[4821]: I0309 18:36:23.696079 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551356-zfwvf"
Mar 09 18:36:23 crc kubenswrapper[4821]: I0309 18:36:23.815871 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wbqj\" (UniqueName: \"kubernetes.io/projected/df914183-d942-4bef-91f2-14579dc3290d-kube-api-access-9wbqj\") pod \"df914183-d942-4bef-91f2-14579dc3290d\" (UID: \"df914183-d942-4bef-91f2-14579dc3290d\") "
Mar 09 18:36:23 crc kubenswrapper[4821]: I0309 18:36:23.822885 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df914183-d942-4bef-91f2-14579dc3290d-kube-api-access-9wbqj" (OuterVolumeSpecName: "kube-api-access-9wbqj") pod "df914183-d942-4bef-91f2-14579dc3290d" (UID: "df914183-d942-4bef-91f2-14579dc3290d"). InnerVolumeSpecName "kube-api-access-9wbqj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:36:23 crc kubenswrapper[4821]: I0309 18:36:23.917341 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wbqj\" (UniqueName: \"kubernetes.io/projected/df914183-d942-4bef-91f2-14579dc3290d-kube-api-access-9wbqj\") on node \"crc\" DevicePath \"\""
Mar 09 18:36:24 crc kubenswrapper[4821]: I0309 18:36:24.081807 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-p679h" event={"ID":"94baa4ca-adf1-461f-a309-a1639aafd708","Type":"ContainerStarted","Data":"04dcd054cc6df58dcbb8ad06a250b2566de95e0d46fb3caaf20e25b3c59fc00a"}
Mar 09 18:36:24 crc kubenswrapper[4821]: I0309 18:36:24.081952 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-p679h"
Mar 09 18:36:24 crc kubenswrapper[4821]: I0309 18:36:24.083425 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-44lfv" event={"ID":"c294d09f-af0a-400e-90ea-1097080fb096","Type":"ContainerStarted","Data":"cda15ab8d5b94144983713d84e0f20dc79354aa368740ce96c9fe369dab9da8f"}
Mar 09 18:36:24 crc kubenswrapper[4821]: I0309 18:36:24.085253 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-xfmsd" event={"ID":"80d114e5-b1d1-496c-a0c1-3eeb8d2f67c8","Type":"ContainerStarted","Data":"8933a85871ac7e0d65defc0a8895f004f64551bb3bb4d4e3fb09c20cbf1e40de"}
Mar 09 18:36:24 crc kubenswrapper[4821]: I0309 18:36:24.086990 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cqfq5" event={"ID":"85b873a4-96da-407a-b4af-30ba3aa97519","Type":"ContainerStarted","Data":"6291735fe2bae5f0c400d187c54ad751f1266e57e0d18c4011c66a5253af4d5b"}
Mar 09 18:36:24 crc kubenswrapper[4821]: I0309 18:36:24.088503 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551356-zfwvf" event={"ID":"df914183-d942-4bef-91f2-14579dc3290d","Type":"ContainerDied","Data":"062a9c9311eb9b3d4e0ea71019e896718a5901fbf18d07bded6b85832452ecca"}
Mar 09 18:36:24 crc kubenswrapper[4821]: I0309 18:36:24.088523 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="062a9c9311eb9b3d4e0ea71019e896718a5901fbf18d07bded6b85832452ecca"
Mar 09 18:36:24 crc kubenswrapper[4821]: I0309 18:36:24.088553 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551356-zfwvf"
Mar 09 18:36:24 crc kubenswrapper[4821]: I0309 18:36:24.115522 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-p679h" podStartSLOduration=24.550277486 podStartE2EDuration="29.115499125s" podCreationTimestamp="2026-03-09 18:35:55 +0000 UTC" firstStartedPulling="2026-03-09 18:36:19.168463612 +0000 UTC m=+716.329839468" lastFinishedPulling="2026-03-09 18:36:23.733685251 +0000 UTC m=+720.895061107" observedRunningTime="2026-03-09 18:36:24.109995135 +0000 UTC m=+721.271370991" watchObservedRunningTime="2026-03-09 18:36:24.115499125 +0000 UTC m=+721.276874981"
Mar 09 18:36:24 crc kubenswrapper[4821]: I0309 18:36:24.136720 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-xfmsd" podStartSLOduration=24.533758507 podStartE2EDuration="29.136700364s" podCreationTimestamp="2026-03-09 18:35:55 +0000 UTC" firstStartedPulling="2026-03-09 18:36:19.099011288 +0000 UTC m=+716.260387154" lastFinishedPulling="2026-03-09 18:36:23.701953145 +0000 UTC m=+720.863329011" observedRunningTime="2026-03-09 18:36:24.136453537 +0000 UTC m=+721.297829403" watchObservedRunningTime="2026-03-09 18:36:24.136700364 +0000 UTC m=+721.298076220"
Mar 09 18:36:24 crc kubenswrapper[4821]: I0309 18:36:24.164236 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cqfq5" podStartSLOduration=24.530453637 podStartE2EDuration="29.164216415s" podCreationTimestamp="2026-03-09 18:35:55 +0000 UTC" firstStartedPulling="2026-03-09 18:36:19.090977629 +0000 UTC m=+716.252353505" lastFinishedPulling="2026-03-09 18:36:23.724740427 +0000 UTC m=+720.886116283" observedRunningTime="2026-03-09 18:36:24.161422059 +0000 UTC m=+721.322797915" watchObservedRunningTime="2026-03-09 18:36:24.164216415 +0000 UTC m=+721.325592271"
Mar 09 18:36:24 crc kubenswrapper[4821]: I0309 18:36:24.947520 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-559887c586-44lfv" podStartSLOduration=25.401946837 podStartE2EDuration="29.94749991s" podCreationTimestamp="2026-03-09 18:35:55 +0000 UTC" firstStartedPulling="2026-03-09 18:36:19.155737855 +0000 UTC m=+716.317113711" lastFinishedPulling="2026-03-09 18:36:23.701290928 +0000 UTC m=+720.862666784" observedRunningTime="2026-03-09 18:36:24.200013521 +0000 UTC m=+721.361389387" watchObservedRunningTime="2026-03-09 18:36:24.94749991 +0000 UTC m=+722.108875766"
Mar 09 18:36:24 crc kubenswrapper[4821]: I0309 18:36:24.980671 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29551350-s928p"]
Mar 09 18:36:24 crc kubenswrapper[4821]: I0309 18:36:24.988948 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29551350-s928p"]
Mar 09 18:36:25 crc kubenswrapper[4821]: I0309 18:36:25.562256 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1e1c786-8f5d-4b94-b547-73982770d24a" path="/var/lib/kubelet/pods/a1e1c786-8f5d-4b94-b547-73982770d24a/volumes"
Mar 09 18:36:29 crc kubenswrapper[4821]: I0309 18:36:29.280958 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq"]
Mar 09 18:36:29 crc kubenswrapper[4821]: E0309 18:36:29.281681 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df914183-d942-4bef-91f2-14579dc3290d" containerName="oc"
Mar 09 18:36:29 crc kubenswrapper[4821]: I0309 18:36:29.281704 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="df914183-d942-4bef-91f2-14579dc3290d" containerName="oc"
Mar 09 18:36:29 crc kubenswrapper[4821]: I0309 18:36:29.281883 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="df914183-d942-4bef-91f2-14579dc3290d" containerName="oc"
Mar 09 18:36:29 crc kubenswrapper[4821]: I0309 18:36:29.283367 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq"
Mar 09 18:36:29 crc kubenswrapper[4821]: I0309 18:36:29.285418 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Mar 09 18:36:29 crc kubenswrapper[4821]: I0309 18:36:29.289654 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq"]
Mar 09 18:36:29 crc kubenswrapper[4821]: I0309 18:36:29.290476 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6dbcce1b-4861-49b4-aed4-aaa992fe1a79-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq\" (UID: \"6dbcce1b-4861-49b4-aed4-aaa992fe1a79\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq"
Mar 09 18:36:29 crc kubenswrapper[4821]: I0309 18:36:29.290547 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6gx7\" (UniqueName: \"kubernetes.io/projected/6dbcce1b-4861-49b4-aed4-aaa992fe1a79-kube-api-access-m6gx7\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq\" (UID: \"6dbcce1b-4861-49b4-aed4-aaa992fe1a79\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq"
Mar 09 18:36:29 crc kubenswrapper[4821]: I0309 18:36:29.290588 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6dbcce1b-4861-49b4-aed4-aaa992fe1a79-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq\" (UID: \"6dbcce1b-4861-49b4-aed4-aaa992fe1a79\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq"
Mar 09 18:36:29 crc kubenswrapper[4821]: I0309 18:36:29.391375 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6dbcce1b-4861-49b4-aed4-aaa992fe1a79-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq\" (UID: \"6dbcce1b-4861-49b4-aed4-aaa992fe1a79\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq"
Mar 09 18:36:29 crc kubenswrapper[4821]: I0309 18:36:29.391433 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6gx7\" (UniqueName: \"kubernetes.io/projected/6dbcce1b-4861-49b4-aed4-aaa992fe1a79-kube-api-access-m6gx7\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq\" (UID: \"6dbcce1b-4861-49b4-aed4-aaa992fe1a79\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq"
Mar 09 18:36:29 crc kubenswrapper[4821]: I0309 18:36:29.391464 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6dbcce1b-4861-49b4-aed4-aaa992fe1a79-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq\" (UID: \"6dbcce1b-4861-49b4-aed4-aaa992fe1a79\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq"
Mar 09 18:36:29 crc kubenswrapper[4821]: I0309 18:36:29.392072 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6dbcce1b-4861-49b4-aed4-aaa992fe1a79-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq\" (UID: \"6dbcce1b-4861-49b4-aed4-aaa992fe1a79\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq"
Mar 09 18:36:29 crc kubenswrapper[4821]: I0309 18:36:29.392110 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6dbcce1b-4861-49b4-aed4-aaa992fe1a79-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq\" (UID: \"6dbcce1b-4861-49b4-aed4-aaa992fe1a79\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq"
Mar 09 18:36:29 crc kubenswrapper[4821]: I0309 18:36:29.412194 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6gx7\" (UniqueName: \"kubernetes.io/projected/6dbcce1b-4861-49b4-aed4-aaa992fe1a79-kube-api-access-m6gx7\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq\" (UID: \"6dbcce1b-4861-49b4-aed4-aaa992fe1a79\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq"
Mar 09 18:36:29 crc kubenswrapper[4821]: I0309 18:36:29.601214 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq"
Mar 09 18:36:29 crc kubenswrapper[4821]: I0309 18:36:29.914104 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 09 18:36:29 crc kubenswrapper[4821]: I0309 18:36:29.914450 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 09 18:36:30 crc kubenswrapper[4821]: I0309 18:36:30.046156 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq"]
Mar 09 18:36:30 crc kubenswrapper[4821]: W0309 18:36:30.052972 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dbcce1b_4861_49b4_aed4_aaa992fe1a79.slice/crio-4e8e0ca89bf50648f5ff939b6c37753b20a16fb05f095bf42dc7ccb0fa111a41 WatchSource:0}: Error finding container 4e8e0ca89bf50648f5ff939b6c37753b20a16fb05f095bf42dc7ccb0fa111a41: Status 404 returned error can't find the container with id 4e8e0ca89bf50648f5ff939b6c37753b20a16fb05f095bf42dc7ccb0fa111a41
Mar 09 18:36:30 crc kubenswrapper[4821]: I0309 18:36:30.129362 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq" event={"ID":"6dbcce1b-4861-49b4-aed4-aaa992fe1a79","Type":"ContainerStarted","Data":"4e8e0ca89bf50648f5ff939b6c37753b20a16fb05f095bf42dc7ccb0fa111a41"}
Mar 09 18:36:31 crc kubenswrapper[4821]: I0309 18:36:31.140830 4821 generic.go:334] "Generic (PLEG): container finished" podID="6dbcce1b-4861-49b4-aed4-aaa992fe1a79" containerID="9152b82e690f16ba9c14db7be09ba23b318d8f7d599334b5c208b8b5417b1e5e" exitCode=0
Mar 09 18:36:31 crc kubenswrapper[4821]: I0309 18:36:31.141162 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq" event={"ID":"6dbcce1b-4861-49b4-aed4-aaa992fe1a79","Type":"ContainerDied","Data":"9152b82e690f16ba9c14db7be09ba23b318d8f7d599334b5c208b8b5417b1e5e"}
Mar 09 18:36:33 crc kubenswrapper[4821]: I0309 18:36:33.160745 4821 generic.go:334] "Generic (PLEG): container finished" podID="6dbcce1b-4861-49b4-aed4-aaa992fe1a79" containerID="30b4df9e8a60d4c60da9600220e0a9e4e80cc035e6097773bfcc863830c994b1" exitCode=0
Mar 09 18:36:33 crc kubenswrapper[4821]: I0309 18:36:33.160832 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq" event={"ID":"6dbcce1b-4861-49b4-aed4-aaa992fe1a79","Type":"ContainerDied","Data":"30b4df9e8a60d4c60da9600220e0a9e4e80cc035e6097773bfcc863830c994b1"}
Mar 09 18:36:34 crc kubenswrapper[4821]: I0309 18:36:34.172435 4821 generic.go:334] "Generic (PLEG): container finished" podID="6dbcce1b-4861-49b4-aed4-aaa992fe1a79" containerID="8ca63cf463f5c33365b67879d19a22641a2ca733f2f028bd0d9682a237ebed0b" exitCode=0
Mar 09 18:36:34 crc kubenswrapper[4821]: I0309 18:36:34.172571 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq" event={"ID":"6dbcce1b-4861-49b4-aed4-aaa992fe1a79","Type":"ContainerDied","Data":"8ca63cf463f5c33365b67879d19a22641a2ca733f2f028bd0d9682a237ebed0b"}
Mar 09 18:36:35 crc kubenswrapper[4821]: I0309 18:36:35.436064 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq"
Mar 09 18:36:35 crc kubenswrapper[4821]: I0309 18:36:35.578141 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6gx7\" (UniqueName: \"kubernetes.io/projected/6dbcce1b-4861-49b4-aed4-aaa992fe1a79-kube-api-access-m6gx7\") pod \"6dbcce1b-4861-49b4-aed4-aaa992fe1a79\" (UID: \"6dbcce1b-4861-49b4-aed4-aaa992fe1a79\") "
Mar 09 18:36:35 crc kubenswrapper[4821]: I0309 18:36:35.578217 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6dbcce1b-4861-49b4-aed4-aaa992fe1a79-bundle\") pod \"6dbcce1b-4861-49b4-aed4-aaa992fe1a79\" (UID: \"6dbcce1b-4861-49b4-aed4-aaa992fe1a79\") "
Mar 09 18:36:35 crc kubenswrapper[4821]: I0309 18:36:35.578246 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6dbcce1b-4861-49b4-aed4-aaa992fe1a79-util\") pod \"6dbcce1b-4861-49b4-aed4-aaa992fe1a79\" (UID: \"6dbcce1b-4861-49b4-aed4-aaa992fe1a79\") "
Mar 09 18:36:35 crc kubenswrapper[4821]: I0309 18:36:35.579333 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dbcce1b-4861-49b4-aed4-aaa992fe1a79-bundle" (OuterVolumeSpecName: "bundle") pod "6dbcce1b-4861-49b4-aed4-aaa992fe1a79" (UID: "6dbcce1b-4861-49b4-aed4-aaa992fe1a79"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 18:36:35 crc kubenswrapper[4821]: I0309 18:36:35.602754 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dbcce1b-4861-49b4-aed4-aaa992fe1a79-kube-api-access-m6gx7" (OuterVolumeSpecName: "kube-api-access-m6gx7") pod "6dbcce1b-4861-49b4-aed4-aaa992fe1a79" (UID: "6dbcce1b-4861-49b4-aed4-aaa992fe1a79"). InnerVolumeSpecName "kube-api-access-m6gx7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:36:35 crc kubenswrapper[4821]: I0309 18:36:35.679481 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6gx7\" (UniqueName: \"kubernetes.io/projected/6dbcce1b-4861-49b4-aed4-aaa992fe1a79-kube-api-access-m6gx7\") on node \"crc\" DevicePath \"\""
Mar 09 18:36:35 crc kubenswrapper[4821]: I0309 18:36:35.679531 4821 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6dbcce1b-4861-49b4-aed4-aaa992fe1a79-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 18:36:35 crc kubenswrapper[4821]: I0309 18:36:35.959369 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dbcce1b-4861-49b4-aed4-aaa992fe1a79-util" (OuterVolumeSpecName: "util") pod "6dbcce1b-4861-49b4-aed4-aaa992fe1a79" (UID: "6dbcce1b-4861-49b4-aed4-aaa992fe1a79"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 18:36:35 crc kubenswrapper[4821]: I0309 18:36:35.983350 4821 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6dbcce1b-4861-49b4-aed4-aaa992fe1a79-util\") on node \"crc\" DevicePath \"\""
Mar 09 18:36:36 crc kubenswrapper[4821]: I0309 18:36:36.100021 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-p679h"
Mar 09 18:36:36 crc kubenswrapper[4821]: I0309 18:36:36.184544 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq" event={"ID":"6dbcce1b-4861-49b4-aed4-aaa992fe1a79","Type":"ContainerDied","Data":"4e8e0ca89bf50648f5ff939b6c37753b20a16fb05f095bf42dc7ccb0fa111a41"}
Mar 09 18:36:36 crc kubenswrapper[4821]: I0309 18:36:36.184596 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e8e0ca89bf50648f5ff939b6c37753b20a16fb05f095bf42dc7ccb0fa111a41"
Mar 09 18:36:36 crc kubenswrapper[4821]: I0309 18:36:36.184672 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq"
Mar 09 18:36:40 crc kubenswrapper[4821]: I0309 18:36:40.125931 4821 scope.go:117] "RemoveContainer" containerID="e98ef0f424c86fe19c85cc1e186363df3a218636518e22396d35b5192f9ebd14"
Mar 09 18:36:41 crc kubenswrapper[4821]: I0309 18:36:41.689544 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-2j8gv"]
Mar 09 18:36:41 crc kubenswrapper[4821]: E0309 18:36:41.689725 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dbcce1b-4861-49b4-aed4-aaa992fe1a79" containerName="extract"
Mar 09 18:36:41 crc kubenswrapper[4821]: I0309 18:36:41.689736 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dbcce1b-4861-49b4-aed4-aaa992fe1a79" containerName="extract"
Mar 09 18:36:41 crc kubenswrapper[4821]: E0309 18:36:41.689747 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dbcce1b-4861-49b4-aed4-aaa992fe1a79" containerName="pull"
Mar 09 18:36:41 crc kubenswrapper[4821]: I0309 18:36:41.689753 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dbcce1b-4861-49b4-aed4-aaa992fe1a79" containerName="pull"
Mar 09 18:36:41 crc kubenswrapper[4821]: E0309 18:36:41.689760 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dbcce1b-4861-49b4-aed4-aaa992fe1a79" containerName="util"
Mar 09 18:36:41 crc kubenswrapper[4821]: I0309 18:36:41.689765 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dbcce1b-4861-49b4-aed4-aaa992fe1a79" containerName="util"
Mar 09 18:36:41 crc kubenswrapper[4821]: I0309 18:36:41.689868 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dbcce1b-4861-49b4-aed4-aaa992fe1a79" containerName="extract"
Mar 
09 18:36:41 crc kubenswrapper[4821]: I0309 18:36:41.690213 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-2j8gv" Mar 09 18:36:41 crc kubenswrapper[4821]: I0309 18:36:41.694212 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-4zb2j" Mar 09 18:36:41 crc kubenswrapper[4821]: I0309 18:36:41.694648 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 09 18:36:41 crc kubenswrapper[4821]: I0309 18:36:41.694863 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 09 18:36:41 crc kubenswrapper[4821]: I0309 18:36:41.715262 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-2j8gv"] Mar 09 18:36:41 crc kubenswrapper[4821]: I0309 18:36:41.852458 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpjl9\" (UniqueName: \"kubernetes.io/projected/e720485c-7121-43fb-aa59-e383aad4c545-kube-api-access-gpjl9\") pod \"nmstate-operator-75c5dccd6c-2j8gv\" (UID: \"e720485c-7121-43fb-aa59-e383aad4c545\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-2j8gv" Mar 09 18:36:41 crc kubenswrapper[4821]: I0309 18:36:41.953539 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpjl9\" (UniqueName: \"kubernetes.io/projected/e720485c-7121-43fb-aa59-e383aad4c545-kube-api-access-gpjl9\") pod \"nmstate-operator-75c5dccd6c-2j8gv\" (UID: \"e720485c-7121-43fb-aa59-e383aad4c545\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-2j8gv" Mar 09 18:36:41 crc kubenswrapper[4821]: I0309 18:36:41.978176 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpjl9\" (UniqueName: 
\"kubernetes.io/projected/e720485c-7121-43fb-aa59-e383aad4c545-kube-api-access-gpjl9\") pod \"nmstate-operator-75c5dccd6c-2j8gv\" (UID: \"e720485c-7121-43fb-aa59-e383aad4c545\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-2j8gv" Mar 09 18:36:42 crc kubenswrapper[4821]: I0309 18:36:42.006782 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-2j8gv" Mar 09 18:36:42 crc kubenswrapper[4821]: I0309 18:36:42.207955 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-2j8gv"] Mar 09 18:36:43 crc kubenswrapper[4821]: I0309 18:36:43.224465 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-2j8gv" event={"ID":"e720485c-7121-43fb-aa59-e383aad4c545","Type":"ContainerStarted","Data":"060480422c111313822f466422cd33ea1195cefc5890a7c21b383deee6ee5db5"} Mar 09 18:36:45 crc kubenswrapper[4821]: I0309 18:36:45.237793 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-2j8gv" event={"ID":"e720485c-7121-43fb-aa59-e383aad4c545","Type":"ContainerStarted","Data":"ab43a9d2bc1202bb31a0110f37eff8f1e5d155af94dde55eee79699898e4681f"} Mar 09 18:36:45 crc kubenswrapper[4821]: I0309 18:36:45.256600 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-2j8gv" podStartSLOduration=1.673073209 podStartE2EDuration="4.256580541s" podCreationTimestamp="2026-03-09 18:36:41 +0000 UTC" firstStartedPulling="2026-03-09 18:36:42.214262054 +0000 UTC m=+739.375637910" lastFinishedPulling="2026-03-09 18:36:44.797769376 +0000 UTC m=+741.959145242" observedRunningTime="2026-03-09 18:36:45.254562086 +0000 UTC m=+742.415937992" watchObservedRunningTime="2026-03-09 18:36:45.256580541 +0000 UTC m=+742.417956407" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.241222 4821 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-g2mff"] Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.243098 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-g2mff" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.246001 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-hnr59"] Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.246347 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-45wv9" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.246851 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-hnr59" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.249090 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.256553 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-g2mff"] Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.265637 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-hnr59"] Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.273361 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22qx8\" (UniqueName: \"kubernetes.io/projected/5857c061-39ca-4cdf-a64f-b2c5e60c6a35-kube-api-access-22qx8\") pod \"nmstate-webhook-786f45cff4-hnr59\" (UID: \"5857c061-39ca-4cdf-a64f-b2c5e60c6a35\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-hnr59" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.273433 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5gt9\" (UniqueName: 
\"kubernetes.io/projected/f003e733-9aab-493c-ad84-3b6ec8bae6ee-kube-api-access-l5gt9\") pod \"nmstate-metrics-69594cc75-g2mff\" (UID: \"f003e733-9aab-493c-ad84-3b6ec8bae6ee\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-g2mff" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.273465 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5857c061-39ca-4cdf-a64f-b2c5e60c6a35-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-hnr59\" (UID: \"5857c061-39ca-4cdf-a64f-b2c5e60c6a35\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-hnr59" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.273562 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-msftq"] Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.274423 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-msftq" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.374335 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3a2bd74c-644c-4c41-9159-5c8eadc45763-dbus-socket\") pod \"nmstate-handler-msftq\" (UID: \"3a2bd74c-644c-4c41-9159-5c8eadc45763\") " pod="openshift-nmstate/nmstate-handler-msftq" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.374435 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5gt9\" (UniqueName: \"kubernetes.io/projected/f003e733-9aab-493c-ad84-3b6ec8bae6ee-kube-api-access-l5gt9\") pod \"nmstate-metrics-69594cc75-g2mff\" (UID: \"f003e733-9aab-493c-ad84-3b6ec8bae6ee\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-g2mff" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.374492 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: 
\"kubernetes.io/secret/5857c061-39ca-4cdf-a64f-b2c5e60c6a35-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-hnr59\" (UID: \"5857c061-39ca-4cdf-a64f-b2c5e60c6a35\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-hnr59" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.374520 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3a2bd74c-644c-4c41-9159-5c8eadc45763-ovs-socket\") pod \"nmstate-handler-msftq\" (UID: \"3a2bd74c-644c-4c41-9159-5c8eadc45763\") " pod="openshift-nmstate/nmstate-handler-msftq" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.374612 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djhf8\" (UniqueName: \"kubernetes.io/projected/3a2bd74c-644c-4c41-9159-5c8eadc45763-kube-api-access-djhf8\") pod \"nmstate-handler-msftq\" (UID: \"3a2bd74c-644c-4c41-9159-5c8eadc45763\") " pod="openshift-nmstate/nmstate-handler-msftq" Mar 09 18:36:50 crc kubenswrapper[4821]: E0309 18:36:50.374624 4821 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.374644 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22qx8\" (UniqueName: \"kubernetes.io/projected/5857c061-39ca-4cdf-a64f-b2c5e60c6a35-kube-api-access-22qx8\") pod \"nmstate-webhook-786f45cff4-hnr59\" (UID: \"5857c061-39ca-4cdf-a64f-b2c5e60c6a35\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-hnr59" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.374664 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3a2bd74c-644c-4c41-9159-5c8eadc45763-nmstate-lock\") pod \"nmstate-handler-msftq\" (UID: \"3a2bd74c-644c-4c41-9159-5c8eadc45763\") " 
pod="openshift-nmstate/nmstate-handler-msftq" Mar 09 18:36:50 crc kubenswrapper[4821]: E0309 18:36:50.374683 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5857c061-39ca-4cdf-a64f-b2c5e60c6a35-tls-key-pair podName:5857c061-39ca-4cdf-a64f-b2c5e60c6a35 nodeName:}" failed. No retries permitted until 2026-03-09 18:36:50.874665781 +0000 UTC m=+748.036041637 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/5857c061-39ca-4cdf-a64f-b2c5e60c6a35-tls-key-pair") pod "nmstate-webhook-786f45cff4-hnr59" (UID: "5857c061-39ca-4cdf-a64f-b2c5e60c6a35") : secret "openshift-nmstate-webhook" not found Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.402493 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22qx8\" (UniqueName: \"kubernetes.io/projected/5857c061-39ca-4cdf-a64f-b2c5e60c6a35-kube-api-access-22qx8\") pod \"nmstate-webhook-786f45cff4-hnr59\" (UID: \"5857c061-39ca-4cdf-a64f-b2c5e60c6a35\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-hnr59" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.402564 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5gt9\" (UniqueName: \"kubernetes.io/projected/f003e733-9aab-493c-ad84-3b6ec8bae6ee-kube-api-access-l5gt9\") pod \"nmstate-metrics-69594cc75-g2mff\" (UID: \"f003e733-9aab-493c-ad84-3b6ec8bae6ee\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-g2mff" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.444059 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sk2qd"] Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.444687 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sk2qd" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.447125 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.447762 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.453703 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-d8dlb" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.461624 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sk2qd"] Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.475410 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/be8695e5-622f-41f2-af2e-bd194fdefeb9-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-sk2qd\" (UID: \"be8695e5-622f-41f2-af2e-bd194fdefeb9\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sk2qd" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.475451 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/be8695e5-622f-41f2-af2e-bd194fdefeb9-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-sk2qd\" (UID: \"be8695e5-622f-41f2-af2e-bd194fdefeb9\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sk2qd" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.475524 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djhf8\" (UniqueName: \"kubernetes.io/projected/3a2bd74c-644c-4c41-9159-5c8eadc45763-kube-api-access-djhf8\") pod \"nmstate-handler-msftq\" (UID: 
\"3a2bd74c-644c-4c41-9159-5c8eadc45763\") " pod="openshift-nmstate/nmstate-handler-msftq" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.475547 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3a2bd74c-644c-4c41-9159-5c8eadc45763-nmstate-lock\") pod \"nmstate-handler-msftq\" (UID: \"3a2bd74c-644c-4c41-9159-5c8eadc45763\") " pod="openshift-nmstate/nmstate-handler-msftq" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.475848 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3a2bd74c-644c-4c41-9159-5c8eadc45763-dbus-socket\") pod \"nmstate-handler-msftq\" (UID: \"3a2bd74c-644c-4c41-9159-5c8eadc45763\") " pod="openshift-nmstate/nmstate-handler-msftq" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.475913 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3a2bd74c-644c-4c41-9159-5c8eadc45763-nmstate-lock\") pod \"nmstate-handler-msftq\" (UID: \"3a2bd74c-644c-4c41-9159-5c8eadc45763\") " pod="openshift-nmstate/nmstate-handler-msftq" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.475945 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kkhw\" (UniqueName: \"kubernetes.io/projected/be8695e5-622f-41f2-af2e-bd194fdefeb9-kube-api-access-6kkhw\") pod \"nmstate-console-plugin-5dcbbd79cf-sk2qd\" (UID: \"be8695e5-622f-41f2-af2e-bd194fdefeb9\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sk2qd" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.475981 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3a2bd74c-644c-4c41-9159-5c8eadc45763-ovs-socket\") pod \"nmstate-handler-msftq\" (UID: \"3a2bd74c-644c-4c41-9159-5c8eadc45763\") " 
pod="openshift-nmstate/nmstate-handler-msftq" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.476015 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3a2bd74c-644c-4c41-9159-5c8eadc45763-ovs-socket\") pod \"nmstate-handler-msftq\" (UID: \"3a2bd74c-644c-4c41-9159-5c8eadc45763\") " pod="openshift-nmstate/nmstate-handler-msftq" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.476145 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3a2bd74c-644c-4c41-9159-5c8eadc45763-dbus-socket\") pod \"nmstate-handler-msftq\" (UID: \"3a2bd74c-644c-4c41-9159-5c8eadc45763\") " pod="openshift-nmstate/nmstate-handler-msftq" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.510041 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djhf8\" (UniqueName: \"kubernetes.io/projected/3a2bd74c-644c-4c41-9159-5c8eadc45763-kube-api-access-djhf8\") pod \"nmstate-handler-msftq\" (UID: \"3a2bd74c-644c-4c41-9159-5c8eadc45763\") " pod="openshift-nmstate/nmstate-handler-msftq" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.570766 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-g2mff" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.576750 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kkhw\" (UniqueName: \"kubernetes.io/projected/be8695e5-622f-41f2-af2e-bd194fdefeb9-kube-api-access-6kkhw\") pod \"nmstate-console-plugin-5dcbbd79cf-sk2qd\" (UID: \"be8695e5-622f-41f2-af2e-bd194fdefeb9\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sk2qd" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.576824 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/be8695e5-622f-41f2-af2e-bd194fdefeb9-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-sk2qd\" (UID: \"be8695e5-622f-41f2-af2e-bd194fdefeb9\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sk2qd" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.576858 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/be8695e5-622f-41f2-af2e-bd194fdefeb9-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-sk2qd\" (UID: \"be8695e5-622f-41f2-af2e-bd194fdefeb9\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sk2qd" Mar 09 18:36:50 crc kubenswrapper[4821]: E0309 18:36:50.577077 4821 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Mar 09 18:36:50 crc kubenswrapper[4821]: E0309 18:36:50.577257 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be8695e5-622f-41f2-af2e-bd194fdefeb9-plugin-serving-cert podName:be8695e5-622f-41f2-af2e-bd194fdefeb9 nodeName:}" failed. No retries permitted until 2026-03-09 18:36:51.077235726 +0000 UTC m=+748.238611582 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/be8695e5-622f-41f2-af2e-bd194fdefeb9-plugin-serving-cert") pod "nmstate-console-plugin-5dcbbd79cf-sk2qd" (UID: "be8695e5-622f-41f2-af2e-bd194fdefeb9") : secret "plugin-serving-cert" not found Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.577697 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/be8695e5-622f-41f2-af2e-bd194fdefeb9-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-sk2qd\" (UID: \"be8695e5-622f-41f2-af2e-bd194fdefeb9\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sk2qd" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.592617 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-msftq" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.603557 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kkhw\" (UniqueName: \"kubernetes.io/projected/be8695e5-622f-41f2-af2e-bd194fdefeb9-kube-api-access-6kkhw\") pod \"nmstate-console-plugin-5dcbbd79cf-sk2qd\" (UID: \"be8695e5-622f-41f2-af2e-bd194fdefeb9\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sk2qd" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.649761 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-584b867db4-vgt5b"] Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.650426 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-584b867db4-vgt5b" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.680494 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a749c63c-1f04-4955-9a98-fabbf677badc-console-serving-cert\") pod \"console-584b867db4-vgt5b\" (UID: \"a749c63c-1f04-4955-9a98-fabbf677badc\") " pod="openshift-console/console-584b867db4-vgt5b" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.680529 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a749c63c-1f04-4955-9a98-fabbf677badc-trusted-ca-bundle\") pod \"console-584b867db4-vgt5b\" (UID: \"a749c63c-1f04-4955-9a98-fabbf677badc\") " pod="openshift-console/console-584b867db4-vgt5b" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.680547 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a749c63c-1f04-4955-9a98-fabbf677badc-service-ca\") pod \"console-584b867db4-vgt5b\" (UID: \"a749c63c-1f04-4955-9a98-fabbf677badc\") " pod="openshift-console/console-584b867db4-vgt5b" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.680563 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a749c63c-1f04-4955-9a98-fabbf677badc-console-config\") pod \"console-584b867db4-vgt5b\" (UID: \"a749c63c-1f04-4955-9a98-fabbf677badc\") " pod="openshift-console/console-584b867db4-vgt5b" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.680596 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a749c63c-1f04-4955-9a98-fabbf677badc-console-oauth-config\") 
pod \"console-584b867db4-vgt5b\" (UID: \"a749c63c-1f04-4955-9a98-fabbf677badc\") " pod="openshift-console/console-584b867db4-vgt5b" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.680613 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq8gn\" (UniqueName: \"kubernetes.io/projected/a749c63c-1f04-4955-9a98-fabbf677badc-kube-api-access-xq8gn\") pod \"console-584b867db4-vgt5b\" (UID: \"a749c63c-1f04-4955-9a98-fabbf677badc\") " pod="openshift-console/console-584b867db4-vgt5b" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.680635 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a749c63c-1f04-4955-9a98-fabbf677badc-oauth-serving-cert\") pod \"console-584b867db4-vgt5b\" (UID: \"a749c63c-1f04-4955-9a98-fabbf677badc\") " pod="openshift-console/console-584b867db4-vgt5b" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.705243 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-584b867db4-vgt5b"] Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.781006 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a749c63c-1f04-4955-9a98-fabbf677badc-console-oauth-config\") pod \"console-584b867db4-vgt5b\" (UID: \"a749c63c-1f04-4955-9a98-fabbf677badc\") " pod="openshift-console/console-584b867db4-vgt5b" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.781046 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq8gn\" (UniqueName: \"kubernetes.io/projected/a749c63c-1f04-4955-9a98-fabbf677badc-kube-api-access-xq8gn\") pod \"console-584b867db4-vgt5b\" (UID: \"a749c63c-1f04-4955-9a98-fabbf677badc\") " pod="openshift-console/console-584b867db4-vgt5b" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 
18:36:50.781070 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a749c63c-1f04-4955-9a98-fabbf677badc-oauth-serving-cert\") pod \"console-584b867db4-vgt5b\" (UID: \"a749c63c-1f04-4955-9a98-fabbf677badc\") " pod="openshift-console/console-584b867db4-vgt5b" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.781133 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a749c63c-1f04-4955-9a98-fabbf677badc-console-serving-cert\") pod \"console-584b867db4-vgt5b\" (UID: \"a749c63c-1f04-4955-9a98-fabbf677badc\") " pod="openshift-console/console-584b867db4-vgt5b" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.781171 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a749c63c-1f04-4955-9a98-fabbf677badc-trusted-ca-bundle\") pod \"console-584b867db4-vgt5b\" (UID: \"a749c63c-1f04-4955-9a98-fabbf677badc\") " pod="openshift-console/console-584b867db4-vgt5b" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.781189 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a749c63c-1f04-4955-9a98-fabbf677badc-service-ca\") pod \"console-584b867db4-vgt5b\" (UID: \"a749c63c-1f04-4955-9a98-fabbf677badc\") " pod="openshift-console/console-584b867db4-vgt5b" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.781203 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a749c63c-1f04-4955-9a98-fabbf677badc-console-config\") pod \"console-584b867db4-vgt5b\" (UID: \"a749c63c-1f04-4955-9a98-fabbf677badc\") " pod="openshift-console/console-584b867db4-vgt5b" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.781980 4821 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a749c63c-1f04-4955-9a98-fabbf677badc-console-config\") pod \"console-584b867db4-vgt5b\" (UID: \"a749c63c-1f04-4955-9a98-fabbf677badc\") " pod="openshift-console/console-584b867db4-vgt5b" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.782693 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a749c63c-1f04-4955-9a98-fabbf677badc-trusted-ca-bundle\") pod \"console-584b867db4-vgt5b\" (UID: \"a749c63c-1f04-4955-9a98-fabbf677badc\") " pod="openshift-console/console-584b867db4-vgt5b" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.783084 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a749c63c-1f04-4955-9a98-fabbf677badc-oauth-serving-cert\") pod \"console-584b867db4-vgt5b\" (UID: \"a749c63c-1f04-4955-9a98-fabbf677badc\") " pod="openshift-console/console-584b867db4-vgt5b" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.783089 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a749c63c-1f04-4955-9a98-fabbf677badc-service-ca\") pod \"console-584b867db4-vgt5b\" (UID: \"a749c63c-1f04-4955-9a98-fabbf677badc\") " pod="openshift-console/console-584b867db4-vgt5b" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.784076 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-g2mff"] Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.785899 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a749c63c-1f04-4955-9a98-fabbf677badc-console-oauth-config\") pod \"console-584b867db4-vgt5b\" (UID: \"a749c63c-1f04-4955-9a98-fabbf677badc\") " 
pod="openshift-console/console-584b867db4-vgt5b" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.788018 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a749c63c-1f04-4955-9a98-fabbf677badc-console-serving-cert\") pod \"console-584b867db4-vgt5b\" (UID: \"a749c63c-1f04-4955-9a98-fabbf677badc\") " pod="openshift-console/console-584b867db4-vgt5b" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.805178 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq8gn\" (UniqueName: \"kubernetes.io/projected/a749c63c-1f04-4955-9a98-fabbf677badc-kube-api-access-xq8gn\") pod \"console-584b867db4-vgt5b\" (UID: \"a749c63c-1f04-4955-9a98-fabbf677badc\") " pod="openshift-console/console-584b867db4-vgt5b" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.881940 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5857c061-39ca-4cdf-a64f-b2c5e60c6a35-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-hnr59\" (UID: \"5857c061-39ca-4cdf-a64f-b2c5e60c6a35\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-hnr59" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.885705 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5857c061-39ca-4cdf-a64f-b2c5e60c6a35-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-hnr59\" (UID: \"5857c061-39ca-4cdf-a64f-b2c5e60c6a35\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-hnr59" Mar 09 18:36:50 crc kubenswrapper[4821]: I0309 18:36:50.962753 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-584b867db4-vgt5b" Mar 09 18:36:51 crc kubenswrapper[4821]: I0309 18:36:51.084120 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/be8695e5-622f-41f2-af2e-bd194fdefeb9-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-sk2qd\" (UID: \"be8695e5-622f-41f2-af2e-bd194fdefeb9\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sk2qd" Mar 09 18:36:51 crc kubenswrapper[4821]: I0309 18:36:51.088894 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/be8695e5-622f-41f2-af2e-bd194fdefeb9-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-sk2qd\" (UID: \"be8695e5-622f-41f2-af2e-bd194fdefeb9\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sk2qd" Mar 09 18:36:51 crc kubenswrapper[4821]: I0309 18:36:51.180568 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-hnr59" Mar 09 18:36:51 crc kubenswrapper[4821]: I0309 18:36:51.214601 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-584b867db4-vgt5b"] Mar 09 18:36:51 crc kubenswrapper[4821]: W0309 18:36:51.220933 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda749c63c_1f04_4955_9a98_fabbf677badc.slice/crio-cccee2036e254ea8851aacfea71a77754083dedad4b62e2ff18b1a5439176372 WatchSource:0}: Error finding container cccee2036e254ea8851aacfea71a77754083dedad4b62e2ff18b1a5439176372: Status 404 returned error can't find the container with id cccee2036e254ea8851aacfea71a77754083dedad4b62e2ff18b1a5439176372 Mar 09 18:36:51 crc kubenswrapper[4821]: I0309 18:36:51.285060 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-msftq" event={"ID":"3a2bd74c-644c-4c41-9159-5c8eadc45763","Type":"ContainerStarted","Data":"d195fae232e5e7489722a8e2df82b1a7289d21c3aa5db47983eae4b48401a855"} Mar 09 18:36:51 crc kubenswrapper[4821]: I0309 18:36:51.289786 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-g2mff" event={"ID":"f003e733-9aab-493c-ad84-3b6ec8bae6ee","Type":"ContainerStarted","Data":"358eaf96245cad765bf2b16db3674f4a07557b6cc8a9aad227db214c053c8478"} Mar 09 18:36:51 crc kubenswrapper[4821]: I0309 18:36:51.293475 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-584b867db4-vgt5b" event={"ID":"a749c63c-1f04-4955-9a98-fabbf677badc","Type":"ContainerStarted","Data":"cccee2036e254ea8851aacfea71a77754083dedad4b62e2ff18b1a5439176372"} Mar 09 18:36:51 crc kubenswrapper[4821]: I0309 18:36:51.371567 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sk2qd" Mar 09 18:36:51 crc kubenswrapper[4821]: I0309 18:36:51.578987 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sk2qd"] Mar 09 18:36:51 crc kubenswrapper[4821]: W0309 18:36:51.581448 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe8695e5_622f_41f2_af2e_bd194fdefeb9.slice/crio-987951f577ff0920ce1856aebb631a3a806a19fd86ced9045ff60fd3ef043f78 WatchSource:0}: Error finding container 987951f577ff0920ce1856aebb631a3a806a19fd86ced9045ff60fd3ef043f78: Status 404 returned error can't find the container with id 987951f577ff0920ce1856aebb631a3a806a19fd86ced9045ff60fd3ef043f78 Mar 09 18:36:51 crc kubenswrapper[4821]: I0309 18:36:51.606033 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-hnr59"] Mar 09 18:36:52 crc kubenswrapper[4821]: I0309 18:36:52.302421 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sk2qd" event={"ID":"be8695e5-622f-41f2-af2e-bd194fdefeb9","Type":"ContainerStarted","Data":"987951f577ff0920ce1856aebb631a3a806a19fd86ced9045ff60fd3ef043f78"} Mar 09 18:36:52 crc kubenswrapper[4821]: I0309 18:36:52.303895 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-hnr59" event={"ID":"5857c061-39ca-4cdf-a64f-b2c5e60c6a35","Type":"ContainerStarted","Data":"076928560c55a6cab7ac814097e878653d79f1ccf2fd4ed26d0494189affdb51"} Mar 09 18:36:52 crc kubenswrapper[4821]: I0309 18:36:52.306413 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-584b867db4-vgt5b" event={"ID":"a749c63c-1f04-4955-9a98-fabbf677badc","Type":"ContainerStarted","Data":"a58088a31a02ae9a84fcbb76e3efaafda09fcf01ee7b543d815bcfe25bfe5708"} Mar 09 18:36:52 crc 
kubenswrapper[4821]: I0309 18:36:52.330565 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-584b867db4-vgt5b" podStartSLOduration=2.330544312 podStartE2EDuration="2.330544312s" podCreationTimestamp="2026-03-09 18:36:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:36:52.325219767 +0000 UTC m=+749.486595623" watchObservedRunningTime="2026-03-09 18:36:52.330544312 +0000 UTC m=+749.491920158" Mar 09 18:36:54 crc kubenswrapper[4821]: I0309 18:36:54.319541 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-g2mff" event={"ID":"f003e733-9aab-493c-ad84-3b6ec8bae6ee","Type":"ContainerStarted","Data":"9ce57687c191a194a462f1ca2b5a87ae2b8f0f6cfb3327a39f9524c24c2576ee"} Mar 09 18:36:54 crc kubenswrapper[4821]: I0309 18:36:54.320777 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-hnr59" event={"ID":"5857c061-39ca-4cdf-a64f-b2c5e60c6a35","Type":"ContainerStarted","Data":"5324182d7642348c87ea3eb788aefcd85ffd9c90a71ae39eea5fcd15167a78e3"} Mar 09 18:36:54 crc kubenswrapper[4821]: I0309 18:36:54.320894 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-786f45cff4-hnr59" Mar 09 18:36:54 crc kubenswrapper[4821]: I0309 18:36:54.325071 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-msftq" event={"ID":"3a2bd74c-644c-4c41-9159-5c8eadc45763","Type":"ContainerStarted","Data":"57d5ba48ccf7a19d4fc11a1fc2d0bfc94537d5d0030c72af9ceb72ca8013d29a"} Mar 09 18:36:54 crc kubenswrapper[4821]: I0309 18:36:54.334768 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-786f45cff4-hnr59" podStartSLOduration=2.079826374 podStartE2EDuration="4.334750463s" 
podCreationTimestamp="2026-03-09 18:36:50 +0000 UTC" firstStartedPulling="2026-03-09 18:36:51.615242121 +0000 UTC m=+748.776617977" lastFinishedPulling="2026-03-09 18:36:53.87016619 +0000 UTC m=+751.031542066" observedRunningTime="2026-03-09 18:36:54.333680753 +0000 UTC m=+751.495056609" watchObservedRunningTime="2026-03-09 18:36:54.334750463 +0000 UTC m=+751.496126319" Mar 09 18:36:55 crc kubenswrapper[4821]: I0309 18:36:55.332248 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sk2qd" event={"ID":"be8695e5-622f-41f2-af2e-bd194fdefeb9","Type":"ContainerStarted","Data":"9e120a1a89ffc04e1eb5d16002e919943333f0780dc9462ad1d5f06ee5f1a388"} Mar 09 18:36:55 crc kubenswrapper[4821]: I0309 18:36:55.333729 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-msftq" Mar 09 18:36:55 crc kubenswrapper[4821]: I0309 18:36:55.351025 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-msftq" podStartSLOduration=2.103688264 podStartE2EDuration="5.350994113s" podCreationTimestamp="2026-03-09 18:36:50 +0000 UTC" firstStartedPulling="2026-03-09 18:36:50.622518092 +0000 UTC m=+747.783893948" lastFinishedPulling="2026-03-09 18:36:53.869823931 +0000 UTC m=+751.031199797" observedRunningTime="2026-03-09 18:36:54.354213134 +0000 UTC m=+751.515588990" watchObservedRunningTime="2026-03-09 18:36:55.350994113 +0000 UTC m=+752.512369999" Mar 09 18:36:55 crc kubenswrapper[4821]: I0309 18:36:55.361779 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-sk2qd" podStartSLOduration=2.084298185 podStartE2EDuration="5.361748726s" podCreationTimestamp="2026-03-09 18:36:50 +0000 UTC" firstStartedPulling="2026-03-09 18:36:51.583569927 +0000 UTC m=+748.744945783" lastFinishedPulling="2026-03-09 18:36:54.861020468 +0000 UTC m=+752.022396324" 
observedRunningTime="2026-03-09 18:36:55.349257136 +0000 UTC m=+752.510633012" watchObservedRunningTime="2026-03-09 18:36:55.361748726 +0000 UTC m=+752.523124622" Mar 09 18:36:57 crc kubenswrapper[4821]: I0309 18:36:57.348724 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-g2mff" event={"ID":"f003e733-9aab-493c-ad84-3b6ec8bae6ee","Type":"ContainerStarted","Data":"9f80bca63c1eae3d0d5f7dac60c1d9b2fac46c7e23da447866ec64ff602a1609"} Mar 09 18:36:57 crc kubenswrapper[4821]: I0309 18:36:57.374027 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-69594cc75-g2mff" podStartSLOduration=1.6810881370000001 podStartE2EDuration="7.374010057s" podCreationTimestamp="2026-03-09 18:36:50 +0000 UTC" firstStartedPulling="2026-03-09 18:36:50.799567641 +0000 UTC m=+747.960943497" lastFinishedPulling="2026-03-09 18:36:56.492489551 +0000 UTC m=+753.653865417" observedRunningTime="2026-03-09 18:36:57.373736288 +0000 UTC m=+754.535112154" watchObservedRunningTime="2026-03-09 18:36:57.374010057 +0000 UTC m=+754.535385923" Mar 09 18:36:59 crc kubenswrapper[4821]: I0309 18:36:59.914008 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 09 18:36:59 crc kubenswrapper[4821]: I0309 18:36:59.914373 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 09 18:36:59 crc kubenswrapper[4821]: I0309 18:36:59.914446 4821 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" Mar 09 18:36:59 crc kubenswrapper[4821]: I0309 18:36:59.916673 4821 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"de40e97a09448ee0292ed23dff4aa5fe956489128d71db5d125451ab26a025aa"} pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 09 18:36:59 crc kubenswrapper[4821]: I0309 18:36:59.916779 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" containerID="cri-o://de40e97a09448ee0292ed23dff4aa5fe956489128d71db5d125451ab26a025aa" gracePeriod=600 Mar 09 18:37:00 crc kubenswrapper[4821]: I0309 18:37:00.376073 4821 generic.go:334] "Generic (PLEG): container finished" podID="3270571a-a484-4e66-8035-f43509b58add" containerID="de40e97a09448ee0292ed23dff4aa5fe956489128d71db5d125451ab26a025aa" exitCode=0 Mar 09 18:37:00 crc kubenswrapper[4821]: I0309 18:37:00.376132 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" event={"ID":"3270571a-a484-4e66-8035-f43509b58add","Type":"ContainerDied","Data":"de40e97a09448ee0292ed23dff4aa5fe956489128d71db5d125451ab26a025aa"} Mar 09 18:37:00 crc kubenswrapper[4821]: I0309 18:37:00.376609 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" event={"ID":"3270571a-a484-4e66-8035-f43509b58add","Type":"ContainerStarted","Data":"c46da8911503c236934f3f2a2bf1a46aa040191100207d7942fc6bf2c08ce6de"} Mar 09 18:37:00 crc kubenswrapper[4821]: I0309 18:37:00.376647 4821 scope.go:117] "RemoveContainer" 
containerID="23cc64d2d10a8b69113d207c0a3d0a0de2d2f613ac820eaa318a413143f856a4" Mar 09 18:37:00 crc kubenswrapper[4821]: I0309 18:37:00.635933 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-msftq" Mar 09 18:37:00 crc kubenswrapper[4821]: I0309 18:37:00.963272 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-584b867db4-vgt5b" Mar 09 18:37:00 crc kubenswrapper[4821]: I0309 18:37:00.963782 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-584b867db4-vgt5b" Mar 09 18:37:00 crc kubenswrapper[4821]: I0309 18:37:00.970264 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-584b867db4-vgt5b" Mar 09 18:37:01 crc kubenswrapper[4821]: I0309 18:37:01.396303 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-584b867db4-vgt5b" Mar 09 18:37:01 crc kubenswrapper[4821]: I0309 18:37:01.471885 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-x9nnw"] Mar 09 18:37:11 crc kubenswrapper[4821]: I0309 18:37:11.189464 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-786f45cff4-hnr59" Mar 09 18:37:18 crc kubenswrapper[4821]: I0309 18:37:18.927717 4821 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 09 18:37:25 crc kubenswrapper[4821]: I0309 18:37:24.773627 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j"] Mar 09 18:37:25 crc kubenswrapper[4821]: I0309 18:37:24.775031 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j" Mar 09 18:37:25 crc kubenswrapper[4821]: I0309 18:37:24.777329 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 09 18:37:25 crc kubenswrapper[4821]: I0309 18:37:24.786782 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j"] Mar 09 18:37:25 crc kubenswrapper[4821]: I0309 18:37:24.856595 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kxtb\" (UniqueName: \"kubernetes.io/projected/7112cff8-f71e-4537-853f-155cfd48f5b6-kube-api-access-8kxtb\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j\" (UID: \"7112cff8-f71e-4537-853f-155cfd48f5b6\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j" Mar 09 18:37:25 crc kubenswrapper[4821]: I0309 18:37:24.856628 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7112cff8-f71e-4537-853f-155cfd48f5b6-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j\" (UID: \"7112cff8-f71e-4537-853f-155cfd48f5b6\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j" Mar 09 18:37:25 crc kubenswrapper[4821]: I0309 18:37:24.856694 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7112cff8-f71e-4537-853f-155cfd48f5b6-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j\" (UID: \"7112cff8-f71e-4537-853f-155cfd48f5b6\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j" Mar 09 18:37:25 crc kubenswrapper[4821]: 
I0309 18:37:24.957303 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7112cff8-f71e-4537-853f-155cfd48f5b6-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j\" (UID: \"7112cff8-f71e-4537-853f-155cfd48f5b6\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j" Mar 09 18:37:25 crc kubenswrapper[4821]: I0309 18:37:24.957380 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kxtb\" (UniqueName: \"kubernetes.io/projected/7112cff8-f71e-4537-853f-155cfd48f5b6-kube-api-access-8kxtb\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j\" (UID: \"7112cff8-f71e-4537-853f-155cfd48f5b6\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j" Mar 09 18:37:25 crc kubenswrapper[4821]: I0309 18:37:24.957402 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7112cff8-f71e-4537-853f-155cfd48f5b6-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j\" (UID: \"7112cff8-f71e-4537-853f-155cfd48f5b6\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j" Mar 09 18:37:25 crc kubenswrapper[4821]: I0309 18:37:24.957851 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7112cff8-f71e-4537-853f-155cfd48f5b6-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j\" (UID: \"7112cff8-f71e-4537-853f-155cfd48f5b6\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j" Mar 09 18:37:25 crc kubenswrapper[4821]: I0309 18:37:24.957867 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/7112cff8-f71e-4537-853f-155cfd48f5b6-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j\" (UID: \"7112cff8-f71e-4537-853f-155cfd48f5b6\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j" Mar 09 18:37:25 crc kubenswrapper[4821]: I0309 18:37:24.977969 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kxtb\" (UniqueName: \"kubernetes.io/projected/7112cff8-f71e-4537-853f-155cfd48f5b6-kube-api-access-8kxtb\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j\" (UID: \"7112cff8-f71e-4537-853f-155cfd48f5b6\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j" Mar 09 18:37:25 crc kubenswrapper[4821]: I0309 18:37:25.089365 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j" Mar 09 18:37:25 crc kubenswrapper[4821]: I0309 18:37:25.328865 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j"] Mar 09 18:37:25 crc kubenswrapper[4821]: W0309 18:37:25.334558 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7112cff8_f71e_4537_853f_155cfd48f5b6.slice/crio-6b3d5e5586043a7c6e246831f49f78982cc5e1e46095037bc42985393dde399e WatchSource:0}: Error finding container 6b3d5e5586043a7c6e246831f49f78982cc5e1e46095037bc42985393dde399e: Status 404 returned error can't find the container with id 6b3d5e5586043a7c6e246831f49f78982cc5e1e46095037bc42985393dde399e Mar 09 18:37:25 crc kubenswrapper[4821]: I0309 18:37:25.548771 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j" 
event={"ID":"7112cff8-f71e-4537-853f-155cfd48f5b6","Type":"ContainerStarted","Data":"d90f2a949d3282ba3582667acedcd52639ceb90d2e786498fbcadc9c4ce8e0a3"} Mar 09 18:37:25 crc kubenswrapper[4821]: I0309 18:37:25.548977 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j" event={"ID":"7112cff8-f71e-4537-853f-155cfd48f5b6","Type":"ContainerStarted","Data":"6b3d5e5586043a7c6e246831f49f78982cc5e1e46095037bc42985393dde399e"} Mar 09 18:37:26 crc kubenswrapper[4821]: I0309 18:37:26.528934 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-x9nnw" podUID="8d862d47-cde7-4a39-aafe-3e2cf7ef451f" containerName="console" containerID="cri-o://26d3ea6ed586f3215fe359dfaf397672700f2ada8e234de8e832c932113d693c" gracePeriod=15 Mar 09 18:37:26 crc kubenswrapper[4821]: I0309 18:37:26.555867 4821 generic.go:334] "Generic (PLEG): container finished" podID="7112cff8-f71e-4537-853f-155cfd48f5b6" containerID="d90f2a949d3282ba3582667acedcd52639ceb90d2e786498fbcadc9c4ce8e0a3" exitCode=0 Mar 09 18:37:26 crc kubenswrapper[4821]: I0309 18:37:26.555912 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j" event={"ID":"7112cff8-f71e-4537-853f-155cfd48f5b6","Type":"ContainerDied","Data":"d90f2a949d3282ba3582667acedcd52639ceb90d2e786498fbcadc9c4ce8e0a3"} Mar 09 18:37:26 crc kubenswrapper[4821]: I0309 18:37:26.567189 4821 patch_prober.go:28] interesting pod/console-f9d7485db-x9nnw container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Mar 09 18:37:26 crc kubenswrapper[4821]: I0309 18:37:26.567429 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-f9d7485db-x9nnw" 
podUID="8d862d47-cde7-4a39-aafe-3e2cf7ef451f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" Mar 09 18:37:26 crc kubenswrapper[4821]: I0309 18:37:26.882646 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-x9nnw_8d862d47-cde7-4a39-aafe-3e2cf7ef451f/console/0.log" Mar 09 18:37:26 crc kubenswrapper[4821]: I0309 18:37:26.882725 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-x9nnw" Mar 09 18:37:26 crc kubenswrapper[4821]: I0309 18:37:26.983233 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-console-oauth-config\") pod \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\" (UID: \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\") " Mar 09 18:37:26 crc kubenswrapper[4821]: I0309 18:37:26.983355 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-console-serving-cert\") pod \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\" (UID: \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\") " Mar 09 18:37:26 crc kubenswrapper[4821]: I0309 18:37:26.983374 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-oauth-serving-cert\") pod \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\" (UID: \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\") " Mar 09 18:37:26 crc kubenswrapper[4821]: I0309 18:37:26.983425 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-service-ca\") pod \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\" 
(UID: \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\") " Mar 09 18:37:26 crc kubenswrapper[4821]: I0309 18:37:26.983498 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlv5x\" (UniqueName: \"kubernetes.io/projected/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-kube-api-access-jlv5x\") pod \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\" (UID: \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\") " Mar 09 18:37:26 crc kubenswrapper[4821]: I0309 18:37:26.983526 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-trusted-ca-bundle\") pod \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\" (UID: \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\") " Mar 09 18:37:26 crc kubenswrapper[4821]: I0309 18:37:26.983555 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-console-config\") pod \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\" (UID: \"8d862d47-cde7-4a39-aafe-3e2cf7ef451f\") " Mar 09 18:37:26 crc kubenswrapper[4821]: I0309 18:37:26.984474 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-console-config" (OuterVolumeSpecName: "console-config") pod "8d862d47-cde7-4a39-aafe-3e2cf7ef451f" (UID: "8d862d47-cde7-4a39-aafe-3e2cf7ef451f"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:37:26 crc kubenswrapper[4821]: I0309 18:37:26.984484 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-service-ca" (OuterVolumeSpecName: "service-ca") pod "8d862d47-cde7-4a39-aafe-3e2cf7ef451f" (UID: "8d862d47-cde7-4a39-aafe-3e2cf7ef451f"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:37:26 crc kubenswrapper[4821]: I0309 18:37:26.984637 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "8d862d47-cde7-4a39-aafe-3e2cf7ef451f" (UID: "8d862d47-cde7-4a39-aafe-3e2cf7ef451f"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:37:26 crc kubenswrapper[4821]: I0309 18:37:26.985156 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "8d862d47-cde7-4a39-aafe-3e2cf7ef451f" (UID: "8d862d47-cde7-4a39-aafe-3e2cf7ef451f"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:37:26 crc kubenswrapper[4821]: I0309 18:37:26.989778 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-kube-api-access-jlv5x" (OuterVolumeSpecName: "kube-api-access-jlv5x") pod "8d862d47-cde7-4a39-aafe-3e2cf7ef451f" (UID: "8d862d47-cde7-4a39-aafe-3e2cf7ef451f"). InnerVolumeSpecName "kube-api-access-jlv5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:37:26 crc kubenswrapper[4821]: I0309 18:37:26.989660 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "8d862d47-cde7-4a39-aafe-3e2cf7ef451f" (UID: "8d862d47-cde7-4a39-aafe-3e2cf7ef451f"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:37:26 crc kubenswrapper[4821]: I0309 18:37:26.990186 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "8d862d47-cde7-4a39-aafe-3e2cf7ef451f" (UID: "8d862d47-cde7-4a39-aafe-3e2cf7ef451f"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.085238 4821 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.085273 4821 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.085285 4821 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-service-ca\") on node \"crc\" DevicePath \"\"" Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.085338 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlv5x\" (UniqueName: \"kubernetes.io/projected/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-kube-api-access-jlv5x\") on node \"crc\" DevicePath \"\"" Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.085351 4821 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.085359 4821 reconciler_common.go:293] "Volume detached for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-console-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.085368 4821 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8d862d47-cde7-4a39-aafe-3e2cf7ef451f-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.105080 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gqbr7"] Mar 09 18:37:27 crc kubenswrapper[4821]: E0309 18:37:27.105593 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d862d47-cde7-4a39-aafe-3e2cf7ef451f" containerName="console" Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.105635 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d862d47-cde7-4a39-aafe-3e2cf7ef451f" containerName="console" Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.105901 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d862d47-cde7-4a39-aafe-3e2cf7ef451f" containerName="console" Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.107888 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gqbr7"
Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.114972 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gqbr7"]
Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.186625 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8541b387-5bdf-4912-a5ff-34c503678ee0-catalog-content\") pod \"redhat-operators-gqbr7\" (UID: \"8541b387-5bdf-4912-a5ff-34c503678ee0\") " pod="openshift-marketplace/redhat-operators-gqbr7"
Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.186763 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8541b387-5bdf-4912-a5ff-34c503678ee0-utilities\") pod \"redhat-operators-gqbr7\" (UID: \"8541b387-5bdf-4912-a5ff-34c503678ee0\") " pod="openshift-marketplace/redhat-operators-gqbr7"
Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.186801 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7httc\" (UniqueName: \"kubernetes.io/projected/8541b387-5bdf-4912-a5ff-34c503678ee0-kube-api-access-7httc\") pod \"redhat-operators-gqbr7\" (UID: \"8541b387-5bdf-4912-a5ff-34c503678ee0\") " pod="openshift-marketplace/redhat-operators-gqbr7"
Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.288628 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8541b387-5bdf-4912-a5ff-34c503678ee0-utilities\") pod \"redhat-operators-gqbr7\" (UID: \"8541b387-5bdf-4912-a5ff-34c503678ee0\") " pod="openshift-marketplace/redhat-operators-gqbr7"
Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.288966 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7httc\" (UniqueName: \"kubernetes.io/projected/8541b387-5bdf-4912-a5ff-34c503678ee0-kube-api-access-7httc\") pod \"redhat-operators-gqbr7\" (UID: \"8541b387-5bdf-4912-a5ff-34c503678ee0\") " pod="openshift-marketplace/redhat-operators-gqbr7"
Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.289164 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8541b387-5bdf-4912-a5ff-34c503678ee0-catalog-content\") pod \"redhat-operators-gqbr7\" (UID: \"8541b387-5bdf-4912-a5ff-34c503678ee0\") " pod="openshift-marketplace/redhat-operators-gqbr7"
Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.289230 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8541b387-5bdf-4912-a5ff-34c503678ee0-utilities\") pod \"redhat-operators-gqbr7\" (UID: \"8541b387-5bdf-4912-a5ff-34c503678ee0\") " pod="openshift-marketplace/redhat-operators-gqbr7"
Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.289611 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8541b387-5bdf-4912-a5ff-34c503678ee0-catalog-content\") pod \"redhat-operators-gqbr7\" (UID: \"8541b387-5bdf-4912-a5ff-34c503678ee0\") " pod="openshift-marketplace/redhat-operators-gqbr7"
Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.309726 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7httc\" (UniqueName: \"kubernetes.io/projected/8541b387-5bdf-4912-a5ff-34c503678ee0-kube-api-access-7httc\") pod \"redhat-operators-gqbr7\" (UID: \"8541b387-5bdf-4912-a5ff-34c503678ee0\") " pod="openshift-marketplace/redhat-operators-gqbr7"
Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.433495 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gqbr7"
Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.574004 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-x9nnw_8d862d47-cde7-4a39-aafe-3e2cf7ef451f/console/0.log"
Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.574341 4821 generic.go:334] "Generic (PLEG): container finished" podID="8d862d47-cde7-4a39-aafe-3e2cf7ef451f" containerID="26d3ea6ed586f3215fe359dfaf397672700f2ada8e234de8e832c932113d693c" exitCode=2
Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.574378 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x9nnw" event={"ID":"8d862d47-cde7-4a39-aafe-3e2cf7ef451f","Type":"ContainerDied","Data":"26d3ea6ed586f3215fe359dfaf397672700f2ada8e234de8e832c932113d693c"}
Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.574407 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x9nnw" event={"ID":"8d862d47-cde7-4a39-aafe-3e2cf7ef451f","Type":"ContainerDied","Data":"36e66c19ed4f1a6d1a5d85f4f287fb8660f82128f68476d8f305683360b678c1"}
Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.574430 4821 scope.go:117] "RemoveContainer" containerID="26d3ea6ed586f3215fe359dfaf397672700f2ada8e234de8e832c932113d693c"
Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.574571 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-x9nnw"
Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.601874 4821 scope.go:117] "RemoveContainer" containerID="26d3ea6ed586f3215fe359dfaf397672700f2ada8e234de8e832c932113d693c"
Mar 09 18:37:27 crc kubenswrapper[4821]: E0309 18:37:27.602601 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26d3ea6ed586f3215fe359dfaf397672700f2ada8e234de8e832c932113d693c\": container with ID starting with 26d3ea6ed586f3215fe359dfaf397672700f2ada8e234de8e832c932113d693c not found: ID does not exist" containerID="26d3ea6ed586f3215fe359dfaf397672700f2ada8e234de8e832c932113d693c"
Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.602642 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26d3ea6ed586f3215fe359dfaf397672700f2ada8e234de8e832c932113d693c"} err="failed to get container status \"26d3ea6ed586f3215fe359dfaf397672700f2ada8e234de8e832c932113d693c\": rpc error: code = NotFound desc = could not find container \"26d3ea6ed586f3215fe359dfaf397672700f2ada8e234de8e832c932113d693c\": container with ID starting with 26d3ea6ed586f3215fe359dfaf397672700f2ada8e234de8e832c932113d693c not found: ID does not exist"
Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.602796 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-x9nnw"]
Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.609143 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-x9nnw"]
Mar 09 18:37:27 crc kubenswrapper[4821]: I0309 18:37:27.868250 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gqbr7"]
Mar 09 18:37:27 crc kubenswrapper[4821]: W0309 18:37:27.885365 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8541b387_5bdf_4912_a5ff_34c503678ee0.slice/crio-82bb16abbe68a2383f7498672dbc692f97b70028e3c8779df51b6db3f6e57ea0 WatchSource:0}: Error finding container 82bb16abbe68a2383f7498672dbc692f97b70028e3c8779df51b6db3f6e57ea0: Status 404 returned error can't find the container with id 82bb16abbe68a2383f7498672dbc692f97b70028e3c8779df51b6db3f6e57ea0
Mar 09 18:37:28 crc kubenswrapper[4821]: I0309 18:37:28.594443 4821 generic.go:334] "Generic (PLEG): container finished" podID="8541b387-5bdf-4912-a5ff-34c503678ee0" containerID="ae36c0a27552bb7f501d3b0901fca3d0dc7ac0afdb9be67c3c6634e94ef5208b" exitCode=0
Mar 09 18:37:28 crc kubenswrapper[4821]: I0309 18:37:28.594818 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqbr7" event={"ID":"8541b387-5bdf-4912-a5ff-34c503678ee0","Type":"ContainerDied","Data":"ae36c0a27552bb7f501d3b0901fca3d0dc7ac0afdb9be67c3c6634e94ef5208b"}
Mar 09 18:37:28 crc kubenswrapper[4821]: I0309 18:37:28.594850 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqbr7" event={"ID":"8541b387-5bdf-4912-a5ff-34c503678ee0","Type":"ContainerStarted","Data":"82bb16abbe68a2383f7498672dbc692f97b70028e3c8779df51b6db3f6e57ea0"}
Mar 09 18:37:28 crc kubenswrapper[4821]: I0309 18:37:28.600738 4821 generic.go:334] "Generic (PLEG): container finished" podID="7112cff8-f71e-4537-853f-155cfd48f5b6" containerID="98bb8518f5729a529dd3d4dd886dbabcd0b1fcecb92bcaaca69442c619177102" exitCode=0
Mar 09 18:37:28 crc kubenswrapper[4821]: I0309 18:37:28.600780 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j" event={"ID":"7112cff8-f71e-4537-853f-155cfd48f5b6","Type":"ContainerDied","Data":"98bb8518f5729a529dd3d4dd886dbabcd0b1fcecb92bcaaca69442c619177102"}
Mar 09 18:37:29 crc kubenswrapper[4821]: I0309 18:37:29.557743 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d862d47-cde7-4a39-aafe-3e2cf7ef451f" path="/var/lib/kubelet/pods/8d862d47-cde7-4a39-aafe-3e2cf7ef451f/volumes"
Mar 09 18:37:29 crc kubenswrapper[4821]: I0309 18:37:29.608679 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqbr7" event={"ID":"8541b387-5bdf-4912-a5ff-34c503678ee0","Type":"ContainerStarted","Data":"6abb578ede3409b1e92267f1ea68c1d13c9b2247c493c3bc176ba14c266f7c92"}
Mar 09 18:37:29 crc kubenswrapper[4821]: I0309 18:37:29.611802 4821 generic.go:334] "Generic (PLEG): container finished" podID="7112cff8-f71e-4537-853f-155cfd48f5b6" containerID="505e0023c57b6359a4523ebf88c31a4403ffcb2b8aecc472a432013bc78efe2e" exitCode=0
Mar 09 18:37:29 crc kubenswrapper[4821]: I0309 18:37:29.611870 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j" event={"ID":"7112cff8-f71e-4537-853f-155cfd48f5b6","Type":"ContainerDied","Data":"505e0023c57b6359a4523ebf88c31a4403ffcb2b8aecc472a432013bc78efe2e"}
Mar 09 18:37:30 crc kubenswrapper[4821]: I0309 18:37:30.623794 4821 generic.go:334] "Generic (PLEG): container finished" podID="8541b387-5bdf-4912-a5ff-34c503678ee0" containerID="6abb578ede3409b1e92267f1ea68c1d13c9b2247c493c3bc176ba14c266f7c92" exitCode=0
Mar 09 18:37:30 crc kubenswrapper[4821]: I0309 18:37:30.623975 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqbr7" event={"ID":"8541b387-5bdf-4912-a5ff-34c503678ee0","Type":"ContainerDied","Data":"6abb578ede3409b1e92267f1ea68c1d13c9b2247c493c3bc176ba14c266f7c92"}
Mar 09 18:37:30 crc kubenswrapper[4821]: I0309 18:37:30.967961 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j"
Mar 09 18:37:31 crc kubenswrapper[4821]: I0309 18:37:31.033483 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7112cff8-f71e-4537-853f-155cfd48f5b6-util\") pod \"7112cff8-f71e-4537-853f-155cfd48f5b6\" (UID: \"7112cff8-f71e-4537-853f-155cfd48f5b6\") "
Mar 09 18:37:31 crc kubenswrapper[4821]: I0309 18:37:31.033570 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kxtb\" (UniqueName: \"kubernetes.io/projected/7112cff8-f71e-4537-853f-155cfd48f5b6-kube-api-access-8kxtb\") pod \"7112cff8-f71e-4537-853f-155cfd48f5b6\" (UID: \"7112cff8-f71e-4537-853f-155cfd48f5b6\") "
Mar 09 18:37:31 crc kubenswrapper[4821]: I0309 18:37:31.033615 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7112cff8-f71e-4537-853f-155cfd48f5b6-bundle\") pod \"7112cff8-f71e-4537-853f-155cfd48f5b6\" (UID: \"7112cff8-f71e-4537-853f-155cfd48f5b6\") "
Mar 09 18:37:31 crc kubenswrapper[4821]: I0309 18:37:31.034903 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7112cff8-f71e-4537-853f-155cfd48f5b6-bundle" (OuterVolumeSpecName: "bundle") pod "7112cff8-f71e-4537-853f-155cfd48f5b6" (UID: "7112cff8-f71e-4537-853f-155cfd48f5b6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 18:37:31 crc kubenswrapper[4821]: I0309 18:37:31.040887 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7112cff8-f71e-4537-853f-155cfd48f5b6-kube-api-access-8kxtb" (OuterVolumeSpecName: "kube-api-access-8kxtb") pod "7112cff8-f71e-4537-853f-155cfd48f5b6" (UID: "7112cff8-f71e-4537-853f-155cfd48f5b6"). InnerVolumeSpecName "kube-api-access-8kxtb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:37:31 crc kubenswrapper[4821]: I0309 18:37:31.053201 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7112cff8-f71e-4537-853f-155cfd48f5b6-util" (OuterVolumeSpecName: "util") pod "7112cff8-f71e-4537-853f-155cfd48f5b6" (UID: "7112cff8-f71e-4537-853f-155cfd48f5b6"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 18:37:31 crc kubenswrapper[4821]: I0309 18:37:31.134947 4821 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7112cff8-f71e-4537-853f-155cfd48f5b6-util\") on node \"crc\" DevicePath \"\""
Mar 09 18:37:31 crc kubenswrapper[4821]: I0309 18:37:31.134979 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kxtb\" (UniqueName: \"kubernetes.io/projected/7112cff8-f71e-4537-853f-155cfd48f5b6-kube-api-access-8kxtb\") on node \"crc\" DevicePath \"\""
Mar 09 18:37:31 crc kubenswrapper[4821]: I0309 18:37:31.134990 4821 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7112cff8-f71e-4537-853f-155cfd48f5b6-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 18:37:31 crc kubenswrapper[4821]: I0309 18:37:31.634208 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j" event={"ID":"7112cff8-f71e-4537-853f-155cfd48f5b6","Type":"ContainerDied","Data":"6b3d5e5586043a7c6e246831f49f78982cc5e1e46095037bc42985393dde399e"}
Mar 09 18:37:31 crc kubenswrapper[4821]: I0309 18:37:31.634246 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j"
Mar 09 18:37:31 crc kubenswrapper[4821]: I0309 18:37:31.634259 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b3d5e5586043a7c6e246831f49f78982cc5e1e46095037bc42985393dde399e"
Mar 09 18:37:31 crc kubenswrapper[4821]: I0309 18:37:31.637856 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqbr7" event={"ID":"8541b387-5bdf-4912-a5ff-34c503678ee0","Type":"ContainerStarted","Data":"a089bda1c8114b02c68ae80a18827216044ac20ed91d8e34c6e4a22044f8331a"}
Mar 09 18:37:31 crc kubenswrapper[4821]: I0309 18:37:31.663783 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gqbr7" podStartSLOduration=2.110660957 podStartE2EDuration="4.663760359s" podCreationTimestamp="2026-03-09 18:37:27 +0000 UTC" firstStartedPulling="2026-03-09 18:37:28.595995244 +0000 UTC m=+785.757371100" lastFinishedPulling="2026-03-09 18:37:31.149094606 +0000 UTC m=+788.310470502" observedRunningTime="2026-03-09 18:37:31.658402284 +0000 UTC m=+788.819778180" watchObservedRunningTime="2026-03-09 18:37:31.663760359 +0000 UTC m=+788.825136215"
Mar 09 18:37:37 crc kubenswrapper[4821]: I0309 18:37:37.434286 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gqbr7"
Mar 09 18:37:37 crc kubenswrapper[4821]: I0309 18:37:37.434846 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gqbr7"
Mar 09 18:37:38 crc kubenswrapper[4821]: I0309 18:37:38.506253 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gqbr7" podUID="8541b387-5bdf-4912-a5ff-34c503678ee0" containerName="registry-server" probeResult="failure" output=<
Mar 09 18:37:38 crc kubenswrapper[4821]: timeout: failed to connect service ":50051" within 1s
Mar 09 18:37:38 crc kubenswrapper[4821]: >
Mar 09 18:37:41 crc kubenswrapper[4821]: I0309 18:37:41.783506 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-858bc4f469-wp8gj"]
Mar 09 18:37:41 crc kubenswrapper[4821]: E0309 18:37:41.784006 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7112cff8-f71e-4537-853f-155cfd48f5b6" containerName="pull"
Mar 09 18:37:41 crc kubenswrapper[4821]: I0309 18:37:41.784022 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="7112cff8-f71e-4537-853f-155cfd48f5b6" containerName="pull"
Mar 09 18:37:41 crc kubenswrapper[4821]: E0309 18:37:41.784036 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7112cff8-f71e-4537-853f-155cfd48f5b6" containerName="extract"
Mar 09 18:37:41 crc kubenswrapper[4821]: I0309 18:37:41.784044 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="7112cff8-f71e-4537-853f-155cfd48f5b6" containerName="extract"
Mar 09 18:37:41 crc kubenswrapper[4821]: E0309 18:37:41.784058 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7112cff8-f71e-4537-853f-155cfd48f5b6" containerName="util"
Mar 09 18:37:41 crc kubenswrapper[4821]: I0309 18:37:41.784065 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="7112cff8-f71e-4537-853f-155cfd48f5b6" containerName="util"
Mar 09 18:37:41 crc kubenswrapper[4821]: I0309 18:37:41.784179 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="7112cff8-f71e-4537-853f-155cfd48f5b6" containerName="extract"
Mar 09 18:37:41 crc kubenswrapper[4821]: I0309 18:37:41.784716 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-858bc4f469-wp8gj"
Mar 09 18:37:41 crc kubenswrapper[4821]: I0309 18:37:41.786626 4821 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Mar 09 18:37:41 crc kubenswrapper[4821]: I0309 18:37:41.786627 4821 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-jt9vv"
Mar 09 18:37:41 crc kubenswrapper[4821]: I0309 18:37:41.792098 4821 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Mar 09 18:37:41 crc kubenswrapper[4821]: I0309 18:37:41.792253 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Mar 09 18:37:41 crc kubenswrapper[4821]: I0309 18:37:41.792111 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Mar 09 18:37:41 crc kubenswrapper[4821]: I0309 18:37:41.806338 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-858bc4f469-wp8gj"]
Mar 09 18:37:41 crc kubenswrapper[4821]: I0309 18:37:41.879385 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ece940b4-1c75-4a27-af76-1d0987599334-webhook-cert\") pod \"metallb-operator-controller-manager-858bc4f469-wp8gj\" (UID: \"ece940b4-1c75-4a27-af76-1d0987599334\") " pod="metallb-system/metallb-operator-controller-manager-858bc4f469-wp8gj"
Mar 09 18:37:41 crc kubenswrapper[4821]: I0309 18:37:41.879735 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzgt9\" (UniqueName: \"kubernetes.io/projected/ece940b4-1c75-4a27-af76-1d0987599334-kube-api-access-lzgt9\") pod \"metallb-operator-controller-manager-858bc4f469-wp8gj\" (UID: \"ece940b4-1c75-4a27-af76-1d0987599334\") " pod="metallb-system/metallb-operator-controller-manager-858bc4f469-wp8gj"
Mar 09 18:37:41 crc kubenswrapper[4821]: I0309 18:37:41.879783 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ece940b4-1c75-4a27-af76-1d0987599334-apiservice-cert\") pod \"metallb-operator-controller-manager-858bc4f469-wp8gj\" (UID: \"ece940b4-1c75-4a27-af76-1d0987599334\") " pod="metallb-system/metallb-operator-controller-manager-858bc4f469-wp8gj"
Mar 09 18:37:41 crc kubenswrapper[4821]: I0309 18:37:41.981417 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ece940b4-1c75-4a27-af76-1d0987599334-webhook-cert\") pod \"metallb-operator-controller-manager-858bc4f469-wp8gj\" (UID: \"ece940b4-1c75-4a27-af76-1d0987599334\") " pod="metallb-system/metallb-operator-controller-manager-858bc4f469-wp8gj"
Mar 09 18:37:41 crc kubenswrapper[4821]: I0309 18:37:41.981490 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzgt9\" (UniqueName: \"kubernetes.io/projected/ece940b4-1c75-4a27-af76-1d0987599334-kube-api-access-lzgt9\") pod \"metallb-operator-controller-manager-858bc4f469-wp8gj\" (UID: \"ece940b4-1c75-4a27-af76-1d0987599334\") " pod="metallb-system/metallb-operator-controller-manager-858bc4f469-wp8gj"
Mar 09 18:37:41 crc kubenswrapper[4821]: I0309 18:37:41.981523 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ece940b4-1c75-4a27-af76-1d0987599334-apiservice-cert\") pod \"metallb-operator-controller-manager-858bc4f469-wp8gj\" (UID: \"ece940b4-1c75-4a27-af76-1d0987599334\") " pod="metallb-system/metallb-operator-controller-manager-858bc4f469-wp8gj"
Mar 09 18:37:41 crc kubenswrapper[4821]: I0309 18:37:41.988277 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ece940b4-1c75-4a27-af76-1d0987599334-apiservice-cert\") pod \"metallb-operator-controller-manager-858bc4f469-wp8gj\" (UID: \"ece940b4-1c75-4a27-af76-1d0987599334\") " pod="metallb-system/metallb-operator-controller-manager-858bc4f469-wp8gj"
Mar 09 18:37:41 crc kubenswrapper[4821]: I0309 18:37:41.991921 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ece940b4-1c75-4a27-af76-1d0987599334-webhook-cert\") pod \"metallb-operator-controller-manager-858bc4f469-wp8gj\" (UID: \"ece940b4-1c75-4a27-af76-1d0987599334\") " pod="metallb-system/metallb-operator-controller-manager-858bc4f469-wp8gj"
Mar 09 18:37:42 crc kubenswrapper[4821]: I0309 18:37:42.000134 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzgt9\" (UniqueName: \"kubernetes.io/projected/ece940b4-1c75-4a27-af76-1d0987599334-kube-api-access-lzgt9\") pod \"metallb-operator-controller-manager-858bc4f469-wp8gj\" (UID: \"ece940b4-1c75-4a27-af76-1d0987599334\") " pod="metallb-system/metallb-operator-controller-manager-858bc4f469-wp8gj"
Mar 09 18:37:42 crc kubenswrapper[4821]: I0309 18:37:42.100975 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-858bc4f469-wp8gj"
Mar 09 18:37:42 crc kubenswrapper[4821]: I0309 18:37:42.111785 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5f89859c4b-c6xkg"]
Mar 09 18:37:42 crc kubenswrapper[4821]: I0309 18:37:42.112502 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5f89859c4b-c6xkg"
Mar 09 18:37:42 crc kubenswrapper[4821]: I0309 18:37:42.114580 4821 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-jffp5"
Mar 09 18:37:42 crc kubenswrapper[4821]: I0309 18:37:42.114918 4821 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Mar 09 18:37:42 crc kubenswrapper[4821]: I0309 18:37:42.115193 4821 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Mar 09 18:37:42 crc kubenswrapper[4821]: I0309 18:37:42.169563 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5f89859c4b-c6xkg"]
Mar 09 18:37:42 crc kubenswrapper[4821]: I0309 18:37:42.184253 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/57e177f6-8afa-42f4-ac0c-2b43f01cf06a-apiservice-cert\") pod \"metallb-operator-webhook-server-5f89859c4b-c6xkg\" (UID: \"57e177f6-8afa-42f4-ac0c-2b43f01cf06a\") " pod="metallb-system/metallb-operator-webhook-server-5f89859c4b-c6xkg"
Mar 09 18:37:42 crc kubenswrapper[4821]: I0309 18:37:42.184291 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/57e177f6-8afa-42f4-ac0c-2b43f01cf06a-webhook-cert\") pod \"metallb-operator-webhook-server-5f89859c4b-c6xkg\" (UID: \"57e177f6-8afa-42f4-ac0c-2b43f01cf06a\") " pod="metallb-system/metallb-operator-webhook-server-5f89859c4b-c6xkg"
Mar 09 18:37:42 crc kubenswrapper[4821]: I0309 18:37:42.184387 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpj74\" (UniqueName: \"kubernetes.io/projected/57e177f6-8afa-42f4-ac0c-2b43f01cf06a-kube-api-access-qpj74\") pod \"metallb-operator-webhook-server-5f89859c4b-c6xkg\" (UID: \"57e177f6-8afa-42f4-ac0c-2b43f01cf06a\") " pod="metallb-system/metallb-operator-webhook-server-5f89859c4b-c6xkg"
Mar 09 18:37:42 crc kubenswrapper[4821]: I0309 18:37:42.285555 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/57e177f6-8afa-42f4-ac0c-2b43f01cf06a-apiservice-cert\") pod \"metallb-operator-webhook-server-5f89859c4b-c6xkg\" (UID: \"57e177f6-8afa-42f4-ac0c-2b43f01cf06a\") " pod="metallb-system/metallb-operator-webhook-server-5f89859c4b-c6xkg"
Mar 09 18:37:42 crc kubenswrapper[4821]: I0309 18:37:42.285880 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/57e177f6-8afa-42f4-ac0c-2b43f01cf06a-webhook-cert\") pod \"metallb-operator-webhook-server-5f89859c4b-c6xkg\" (UID: \"57e177f6-8afa-42f4-ac0c-2b43f01cf06a\") " pod="metallb-system/metallb-operator-webhook-server-5f89859c4b-c6xkg"
Mar 09 18:37:42 crc kubenswrapper[4821]: I0309 18:37:42.285951 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpj74\" (UniqueName: \"kubernetes.io/projected/57e177f6-8afa-42f4-ac0c-2b43f01cf06a-kube-api-access-qpj74\") pod \"metallb-operator-webhook-server-5f89859c4b-c6xkg\" (UID: \"57e177f6-8afa-42f4-ac0c-2b43f01cf06a\") " pod="metallb-system/metallb-operator-webhook-server-5f89859c4b-c6xkg"
Mar 09 18:37:42 crc kubenswrapper[4821]: I0309 18:37:42.291236 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/57e177f6-8afa-42f4-ac0c-2b43f01cf06a-apiservice-cert\") pod \"metallb-operator-webhook-server-5f89859c4b-c6xkg\" (UID: \"57e177f6-8afa-42f4-ac0c-2b43f01cf06a\") " pod="metallb-system/metallb-operator-webhook-server-5f89859c4b-c6xkg"
Mar 09 18:37:42 crc kubenswrapper[4821]: I0309 18:37:42.291417 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/57e177f6-8afa-42f4-ac0c-2b43f01cf06a-webhook-cert\") pod \"metallb-operator-webhook-server-5f89859c4b-c6xkg\" (UID: \"57e177f6-8afa-42f4-ac0c-2b43f01cf06a\") " pod="metallb-system/metallb-operator-webhook-server-5f89859c4b-c6xkg"
Mar 09 18:37:42 crc kubenswrapper[4821]: I0309 18:37:42.318019 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpj74\" (UniqueName: \"kubernetes.io/projected/57e177f6-8afa-42f4-ac0c-2b43f01cf06a-kube-api-access-qpj74\") pod \"metallb-operator-webhook-server-5f89859c4b-c6xkg\" (UID: \"57e177f6-8afa-42f4-ac0c-2b43f01cf06a\") " pod="metallb-system/metallb-operator-webhook-server-5f89859c4b-c6xkg"
Mar 09 18:37:42 crc kubenswrapper[4821]: I0309 18:37:42.358892 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-858bc4f469-wp8gj"]
Mar 09 18:37:42 crc kubenswrapper[4821]: W0309 18:37:42.365772 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podece940b4_1c75_4a27_af76_1d0987599334.slice/crio-0a4015920824c91524386b25072bf3bc230c630f444d5e5806a32c6ec586b833 WatchSource:0}: Error finding container 0a4015920824c91524386b25072bf3bc230c630f444d5e5806a32c6ec586b833: Status 404 returned error can't find the container with id 0a4015920824c91524386b25072bf3bc230c630f444d5e5806a32c6ec586b833
Mar 09 18:37:42 crc kubenswrapper[4821]: I0309 18:37:42.472589 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5f89859c4b-c6xkg"
Mar 09 18:37:42 crc kubenswrapper[4821]: I0309 18:37:42.691343 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5f89859c4b-c6xkg"]
Mar 09 18:37:42 crc kubenswrapper[4821]: W0309 18:37:42.696129 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57e177f6_8afa_42f4_ac0c_2b43f01cf06a.slice/crio-b6c5f7830a8b947ad4d82bca7164e5625ac71cbbe8ab260f8099da4e7b8e0b39 WatchSource:0}: Error finding container b6c5f7830a8b947ad4d82bca7164e5625ac71cbbe8ab260f8099da4e7b8e0b39: Status 404 returned error can't find the container with id b6c5f7830a8b947ad4d82bca7164e5625ac71cbbe8ab260f8099da4e7b8e0b39
Mar 09 18:37:42 crc kubenswrapper[4821]: I0309 18:37:42.703707 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5f89859c4b-c6xkg" event={"ID":"57e177f6-8afa-42f4-ac0c-2b43f01cf06a","Type":"ContainerStarted","Data":"b6c5f7830a8b947ad4d82bca7164e5625ac71cbbe8ab260f8099da4e7b8e0b39"}
Mar 09 18:37:42 crc kubenswrapper[4821]: I0309 18:37:42.705747 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-858bc4f469-wp8gj" event={"ID":"ece940b4-1c75-4a27-af76-1d0987599334","Type":"ContainerStarted","Data":"0a4015920824c91524386b25072bf3bc230c630f444d5e5806a32c6ec586b833"}
Mar 09 18:37:45 crc kubenswrapper[4821]: I0309 18:37:45.726127 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-858bc4f469-wp8gj" event={"ID":"ece940b4-1c75-4a27-af76-1d0987599334","Type":"ContainerStarted","Data":"a622bc3568548a044e01e49439c3926078580ea21d592a54f7eb2940583c08ff"}
Mar 09 18:37:45 crc kubenswrapper[4821]: I0309 18:37:45.726859 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-858bc4f469-wp8gj"
Mar 09 18:37:45 crc kubenswrapper[4821]: I0309 18:37:45.766254 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-858bc4f469-wp8gj" podStartSLOduration=1.726483692 podStartE2EDuration="4.766228627s" podCreationTimestamp="2026-03-09 18:37:41 +0000 UTC" firstStartedPulling="2026-03-09 18:37:42.368020574 +0000 UTC m=+799.529396430" lastFinishedPulling="2026-03-09 18:37:45.407765509 +0000 UTC m=+802.569141365" observedRunningTime="2026-03-09 18:37:45.75747172 +0000 UTC m=+802.918847606" watchObservedRunningTime="2026-03-09 18:37:45.766228627 +0000 UTC m=+802.927604513"
Mar 09 18:37:47 crc kubenswrapper[4821]: I0309 18:37:47.486069 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gqbr7"
Mar 09 18:37:47 crc kubenswrapper[4821]: I0309 18:37:47.532297 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gqbr7"
Mar 09 18:37:47 crc kubenswrapper[4821]: I0309 18:37:47.739266 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5f89859c4b-c6xkg" event={"ID":"57e177f6-8afa-42f4-ac0c-2b43f01cf06a","Type":"ContainerStarted","Data":"ff1386ae71df9b3b915129f6850ffe3bd9a742c4d52606484fff35e8f65a8271"}
Mar 09 18:37:47 crc kubenswrapper[4821]: I0309 18:37:47.757868 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5f89859c4b-c6xkg" podStartSLOduration=0.898046072 podStartE2EDuration="5.757847128s" podCreationTimestamp="2026-03-09 18:37:42 +0000 UTC" firstStartedPulling="2026-03-09 18:37:42.698842602 +0000 UTC m=+799.860218458" lastFinishedPulling="2026-03-09 18:37:47.558643658 +0000 UTC m=+804.720019514" observedRunningTime="2026-03-09 18:37:47.756253835 +0000 UTC m=+804.917629701" watchObservedRunningTime="2026-03-09 18:37:47.757847128 +0000 UTC m=+804.919222984"
Mar 09 18:37:48 crc kubenswrapper[4821]: I0309 18:37:48.744646 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5f89859c4b-c6xkg"
Mar 09 18:37:49 crc kubenswrapper[4821]: I0309 18:37:49.286669 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gqbr7"]
Mar 09 18:37:49 crc kubenswrapper[4821]: I0309 18:37:49.286906 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gqbr7" podUID="8541b387-5bdf-4912-a5ff-34c503678ee0" containerName="registry-server" containerID="cri-o://a089bda1c8114b02c68ae80a18827216044ac20ed91d8e34c6e4a22044f8331a" gracePeriod=2
Mar 09 18:37:49 crc kubenswrapper[4821]: I0309 18:37:49.715224 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gqbr7"
Mar 09 18:37:49 crc kubenswrapper[4821]: I0309 18:37:49.754051 4821 generic.go:334] "Generic (PLEG): container finished" podID="8541b387-5bdf-4912-a5ff-34c503678ee0" containerID="a089bda1c8114b02c68ae80a18827216044ac20ed91d8e34c6e4a22044f8331a" exitCode=0
Mar 09 18:37:49 crc kubenswrapper[4821]: I0309 18:37:49.754785 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gqbr7"
Mar 09 18:37:49 crc kubenswrapper[4821]: I0309 18:37:49.755053 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqbr7" event={"ID":"8541b387-5bdf-4912-a5ff-34c503678ee0","Type":"ContainerDied","Data":"a089bda1c8114b02c68ae80a18827216044ac20ed91d8e34c6e4a22044f8331a"}
Mar 09 18:37:49 crc kubenswrapper[4821]: I0309 18:37:49.755078 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqbr7" event={"ID":"8541b387-5bdf-4912-a5ff-34c503678ee0","Type":"ContainerDied","Data":"82bb16abbe68a2383f7498672dbc692f97b70028e3c8779df51b6db3f6e57ea0"}
Mar 09 18:37:49 crc kubenswrapper[4821]: I0309 18:37:49.755092 4821 scope.go:117] "RemoveContainer" containerID="a089bda1c8114b02c68ae80a18827216044ac20ed91d8e34c6e4a22044f8331a"
Mar 09 18:37:49 crc kubenswrapper[4821]: I0309 18:37:49.778735 4821 scope.go:117] "RemoveContainer" containerID="6abb578ede3409b1e92267f1ea68c1d13c9b2247c493c3bc176ba14c266f7c92"
Mar 09 18:37:49 crc kubenswrapper[4821]: I0309 18:37:49.798625 4821 scope.go:117] "RemoveContainer" containerID="ae36c0a27552bb7f501d3b0901fca3d0dc7ac0afdb9be67c3c6634e94ef5208b"
Mar 09 18:37:49 crc kubenswrapper[4821]: I0309 18:37:49.812107 4821 scope.go:117] "RemoveContainer" containerID="a089bda1c8114b02c68ae80a18827216044ac20ed91d8e34c6e4a22044f8331a"
Mar 09 18:37:49 crc kubenswrapper[4821]: E0309 18:37:49.812507 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a089bda1c8114b02c68ae80a18827216044ac20ed91d8e34c6e4a22044f8331a\": container with ID starting with a089bda1c8114b02c68ae80a18827216044ac20ed91d8e34c6e4a22044f8331a not found: ID does not exist" containerID="a089bda1c8114b02c68ae80a18827216044ac20ed91d8e34c6e4a22044f8331a"
Mar 09 18:37:49 crc kubenswrapper[4821]: I0309 18:37:49.812542 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a089bda1c8114b02c68ae80a18827216044ac20ed91d8e34c6e4a22044f8331a"} err="failed to get container status \"a089bda1c8114b02c68ae80a18827216044ac20ed91d8e34c6e4a22044f8331a\": rpc error: code = NotFound desc = could not find container \"a089bda1c8114b02c68ae80a18827216044ac20ed91d8e34c6e4a22044f8331a\": container with ID starting with a089bda1c8114b02c68ae80a18827216044ac20ed91d8e34c6e4a22044f8331a not found: ID does not exist"
Mar 09 18:37:49 crc kubenswrapper[4821]: I0309 18:37:49.812565 4821 scope.go:117] "RemoveContainer" containerID="6abb578ede3409b1e92267f1ea68c1d13c9b2247c493c3bc176ba14c266f7c92"
Mar 09 18:37:49 crc kubenswrapper[4821]: E0309 18:37:49.812840 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6abb578ede3409b1e92267f1ea68c1d13c9b2247c493c3bc176ba14c266f7c92\": container with ID starting with 6abb578ede3409b1e92267f1ea68c1d13c9b2247c493c3bc176ba14c266f7c92 not found: ID does not exist" containerID="6abb578ede3409b1e92267f1ea68c1d13c9b2247c493c3bc176ba14c266f7c92"
Mar 09 18:37:49 crc kubenswrapper[4821]: I0309 18:37:49.812860 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6abb578ede3409b1e92267f1ea68c1d13c9b2247c493c3bc176ba14c266f7c92"} err="failed to get container status \"6abb578ede3409b1e92267f1ea68c1d13c9b2247c493c3bc176ba14c266f7c92\": rpc error: code = NotFound desc = could not find container \"6abb578ede3409b1e92267f1ea68c1d13c9b2247c493c3bc176ba14c266f7c92\": container with ID starting with 6abb578ede3409b1e92267f1ea68c1d13c9b2247c493c3bc176ba14c266f7c92 not found: ID does not exist"
Mar 09 18:37:49 crc kubenswrapper[4821]: I0309 18:37:49.812871 4821 scope.go:117] "RemoveContainer" containerID="ae36c0a27552bb7f501d3b0901fca3d0dc7ac0afdb9be67c3c6634e94ef5208b"
Mar 09 18:37:49 crc kubenswrapper[4821]: E0309 18:37:49.813146 4821 log.go:32]
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae36c0a27552bb7f501d3b0901fca3d0dc7ac0afdb9be67c3c6634e94ef5208b\": container with ID starting with ae36c0a27552bb7f501d3b0901fca3d0dc7ac0afdb9be67c3c6634e94ef5208b not found: ID does not exist" containerID="ae36c0a27552bb7f501d3b0901fca3d0dc7ac0afdb9be67c3c6634e94ef5208b" Mar 09 18:37:49 crc kubenswrapper[4821]: I0309 18:37:49.813204 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae36c0a27552bb7f501d3b0901fca3d0dc7ac0afdb9be67c3c6634e94ef5208b"} err="failed to get container status \"ae36c0a27552bb7f501d3b0901fca3d0dc7ac0afdb9be67c3c6634e94ef5208b\": rpc error: code = NotFound desc = could not find container \"ae36c0a27552bb7f501d3b0901fca3d0dc7ac0afdb9be67c3c6634e94ef5208b\": container with ID starting with ae36c0a27552bb7f501d3b0901fca3d0dc7ac0afdb9be67c3c6634e94ef5208b not found: ID does not exist" Mar 09 18:37:49 crc kubenswrapper[4821]: I0309 18:37:49.894837 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8541b387-5bdf-4912-a5ff-34c503678ee0-catalog-content\") pod \"8541b387-5bdf-4912-a5ff-34c503678ee0\" (UID: \"8541b387-5bdf-4912-a5ff-34c503678ee0\") " Mar 09 18:37:49 crc kubenswrapper[4821]: I0309 18:37:49.894898 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8541b387-5bdf-4912-a5ff-34c503678ee0-utilities\") pod \"8541b387-5bdf-4912-a5ff-34c503678ee0\" (UID: \"8541b387-5bdf-4912-a5ff-34c503678ee0\") " Mar 09 18:37:49 crc kubenswrapper[4821]: I0309 18:37:49.895026 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7httc\" (UniqueName: \"kubernetes.io/projected/8541b387-5bdf-4912-a5ff-34c503678ee0-kube-api-access-7httc\") pod \"8541b387-5bdf-4912-a5ff-34c503678ee0\" 
(UID: \"8541b387-5bdf-4912-a5ff-34c503678ee0\") " Mar 09 18:37:49 crc kubenswrapper[4821]: I0309 18:37:49.897165 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8541b387-5bdf-4912-a5ff-34c503678ee0-utilities" (OuterVolumeSpecName: "utilities") pod "8541b387-5bdf-4912-a5ff-34c503678ee0" (UID: "8541b387-5bdf-4912-a5ff-34c503678ee0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:37:49 crc kubenswrapper[4821]: I0309 18:37:49.902907 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8541b387-5bdf-4912-a5ff-34c503678ee0-kube-api-access-7httc" (OuterVolumeSpecName: "kube-api-access-7httc") pod "8541b387-5bdf-4912-a5ff-34c503678ee0" (UID: "8541b387-5bdf-4912-a5ff-34c503678ee0"). InnerVolumeSpecName "kube-api-access-7httc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:37:49 crc kubenswrapper[4821]: I0309 18:37:49.996737 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8541b387-5bdf-4912-a5ff-34c503678ee0-utilities\") on node \"crc\" DevicePath \"\"" Mar 09 18:37:49 crc kubenswrapper[4821]: I0309 18:37:49.996797 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7httc\" (UniqueName: \"kubernetes.io/projected/8541b387-5bdf-4912-a5ff-34c503678ee0-kube-api-access-7httc\") on node \"crc\" DevicePath \"\"" Mar 09 18:37:50 crc kubenswrapper[4821]: I0309 18:37:50.025655 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8541b387-5bdf-4912-a5ff-34c503678ee0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8541b387-5bdf-4912-a5ff-34c503678ee0" (UID: "8541b387-5bdf-4912-a5ff-34c503678ee0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:37:50 crc kubenswrapper[4821]: I0309 18:37:50.080218 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gqbr7"] Mar 09 18:37:50 crc kubenswrapper[4821]: I0309 18:37:50.085933 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gqbr7"] Mar 09 18:37:50 crc kubenswrapper[4821]: I0309 18:37:50.098120 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8541b387-5bdf-4912-a5ff-34c503678ee0-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 09 18:37:51 crc kubenswrapper[4821]: I0309 18:37:51.564665 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8541b387-5bdf-4912-a5ff-34c503678ee0" path="/var/lib/kubelet/pods/8541b387-5bdf-4912-a5ff-34c503678ee0/volumes" Mar 09 18:38:00 crc kubenswrapper[4821]: I0309 18:38:00.147813 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29551358-sb9b7"] Mar 09 18:38:00 crc kubenswrapper[4821]: E0309 18:38:00.148640 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8541b387-5bdf-4912-a5ff-34c503678ee0" containerName="extract-content" Mar 09 18:38:00 crc kubenswrapper[4821]: I0309 18:38:00.148656 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="8541b387-5bdf-4912-a5ff-34c503678ee0" containerName="extract-content" Mar 09 18:38:00 crc kubenswrapper[4821]: E0309 18:38:00.148677 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8541b387-5bdf-4912-a5ff-34c503678ee0" containerName="extract-utilities" Mar 09 18:38:00 crc kubenswrapper[4821]: I0309 18:38:00.148685 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="8541b387-5bdf-4912-a5ff-34c503678ee0" containerName="extract-utilities" Mar 09 18:38:00 crc kubenswrapper[4821]: E0309 18:38:00.148698 4821 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="8541b387-5bdf-4912-a5ff-34c503678ee0" containerName="registry-server" Mar 09 18:38:00 crc kubenswrapper[4821]: I0309 18:38:00.148709 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="8541b387-5bdf-4912-a5ff-34c503678ee0" containerName="registry-server" Mar 09 18:38:00 crc kubenswrapper[4821]: I0309 18:38:00.148872 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="8541b387-5bdf-4912-a5ff-34c503678ee0" containerName="registry-server" Mar 09 18:38:00 crc kubenswrapper[4821]: I0309 18:38:00.149444 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551358-sb9b7" Mar 09 18:38:00 crc kubenswrapper[4821]: I0309 18:38:00.151913 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-mxq7c" Mar 09 18:38:00 crc kubenswrapper[4821]: I0309 18:38:00.152796 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 09 18:38:00 crc kubenswrapper[4821]: I0309 18:38:00.154461 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 09 18:38:00 crc kubenswrapper[4821]: I0309 18:38:00.165251 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551358-sb9b7"] Mar 09 18:38:00 crc kubenswrapper[4821]: I0309 18:38:00.248182 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8kzk\" (UniqueName: \"kubernetes.io/projected/53bcb16a-e06a-4552-aa19-dca354931cee-kube-api-access-c8kzk\") pod \"auto-csr-approver-29551358-sb9b7\" (UID: \"53bcb16a-e06a-4552-aa19-dca354931cee\") " pod="openshift-infra/auto-csr-approver-29551358-sb9b7" Mar 09 18:38:00 crc kubenswrapper[4821]: I0309 18:38:00.349869 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8kzk\" 
(UniqueName: \"kubernetes.io/projected/53bcb16a-e06a-4552-aa19-dca354931cee-kube-api-access-c8kzk\") pod \"auto-csr-approver-29551358-sb9b7\" (UID: \"53bcb16a-e06a-4552-aa19-dca354931cee\") " pod="openshift-infra/auto-csr-approver-29551358-sb9b7" Mar 09 18:38:00 crc kubenswrapper[4821]: I0309 18:38:00.443143 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8kzk\" (UniqueName: \"kubernetes.io/projected/53bcb16a-e06a-4552-aa19-dca354931cee-kube-api-access-c8kzk\") pod \"auto-csr-approver-29551358-sb9b7\" (UID: \"53bcb16a-e06a-4552-aa19-dca354931cee\") " pod="openshift-infra/auto-csr-approver-29551358-sb9b7" Mar 09 18:38:00 crc kubenswrapper[4821]: I0309 18:38:00.465441 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551358-sb9b7" Mar 09 18:38:00 crc kubenswrapper[4821]: I0309 18:38:00.869767 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551358-sb9b7"] Mar 09 18:38:00 crc kubenswrapper[4821]: W0309 18:38:00.881390 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53bcb16a_e06a_4552_aa19_dca354931cee.slice/crio-9b0b1f15b810ef12638517827f13a0bf18cdedbc3a7f17fc4d6f84f189d921c7 WatchSource:0}: Error finding container 9b0b1f15b810ef12638517827f13a0bf18cdedbc3a7f17fc4d6f84f189d921c7: Status 404 returned error can't find the container with id 9b0b1f15b810ef12638517827f13a0bf18cdedbc3a7f17fc4d6f84f189d921c7 Mar 09 18:38:01 crc kubenswrapper[4821]: I0309 18:38:01.825778 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551358-sb9b7" event={"ID":"53bcb16a-e06a-4552-aa19-dca354931cee","Type":"ContainerStarted","Data":"9b0b1f15b810ef12638517827f13a0bf18cdedbc3a7f17fc4d6f84f189d921c7"} Mar 09 18:38:02 crc kubenswrapper[4821]: I0309 18:38:02.481647 4821 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5f89859c4b-c6xkg" Mar 09 18:38:02 crc kubenswrapper[4821]: I0309 18:38:02.832492 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551358-sb9b7" event={"ID":"53bcb16a-e06a-4552-aa19-dca354931cee","Type":"ContainerStarted","Data":"8e7abab222a1b625a74468adff80808e049c74a74fef05c43e8ce2b31ada94d1"} Mar 09 18:38:02 crc kubenswrapper[4821]: I0309 18:38:02.858783 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29551358-sb9b7" podStartSLOduration=1.436699472 podStartE2EDuration="2.858761584s" podCreationTimestamp="2026-03-09 18:38:00 +0000 UTC" firstStartedPulling="2026-03-09 18:38:00.88401857 +0000 UTC m=+818.045394446" lastFinishedPulling="2026-03-09 18:38:02.306080702 +0000 UTC m=+819.467456558" observedRunningTime="2026-03-09 18:38:02.856091462 +0000 UTC m=+820.017467318" watchObservedRunningTime="2026-03-09 18:38:02.858761584 +0000 UTC m=+820.020137460" Mar 09 18:38:03 crc kubenswrapper[4821]: I0309 18:38:03.842790 4821 generic.go:334] "Generic (PLEG): container finished" podID="53bcb16a-e06a-4552-aa19-dca354931cee" containerID="8e7abab222a1b625a74468adff80808e049c74a74fef05c43e8ce2b31ada94d1" exitCode=0 Mar 09 18:38:03 crc kubenswrapper[4821]: I0309 18:38:03.843179 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551358-sb9b7" event={"ID":"53bcb16a-e06a-4552-aa19-dca354931cee","Type":"ContainerDied","Data":"8e7abab222a1b625a74468adff80808e049c74a74fef05c43e8ce2b31ada94d1"} Mar 09 18:38:05 crc kubenswrapper[4821]: I0309 18:38:05.149547 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29551358-sb9b7" Mar 09 18:38:05 crc kubenswrapper[4821]: I0309 18:38:05.315648 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8kzk\" (UniqueName: \"kubernetes.io/projected/53bcb16a-e06a-4552-aa19-dca354931cee-kube-api-access-c8kzk\") pod \"53bcb16a-e06a-4552-aa19-dca354931cee\" (UID: \"53bcb16a-e06a-4552-aa19-dca354931cee\") " Mar 09 18:38:05 crc kubenswrapper[4821]: I0309 18:38:05.323824 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53bcb16a-e06a-4552-aa19-dca354931cee-kube-api-access-c8kzk" (OuterVolumeSpecName: "kube-api-access-c8kzk") pod "53bcb16a-e06a-4552-aa19-dca354931cee" (UID: "53bcb16a-e06a-4552-aa19-dca354931cee"). InnerVolumeSpecName "kube-api-access-c8kzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:38:05 crc kubenswrapper[4821]: I0309 18:38:05.417671 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8kzk\" (UniqueName: \"kubernetes.io/projected/53bcb16a-e06a-4552-aa19-dca354931cee-kube-api-access-c8kzk\") on node \"crc\" DevicePath \"\"" Mar 09 18:38:05 crc kubenswrapper[4821]: I0309 18:38:05.857191 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551358-sb9b7" event={"ID":"53bcb16a-e06a-4552-aa19-dca354931cee","Type":"ContainerDied","Data":"9b0b1f15b810ef12638517827f13a0bf18cdedbc3a7f17fc4d6f84f189d921c7"} Mar 09 18:38:05 crc kubenswrapper[4821]: I0309 18:38:05.857261 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b0b1f15b810ef12638517827f13a0bf18cdedbc3a7f17fc4d6f84f189d921c7" Mar 09 18:38:05 crc kubenswrapper[4821]: I0309 18:38:05.857262 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29551358-sb9b7" Mar 09 18:38:05 crc kubenswrapper[4821]: I0309 18:38:05.903919 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29551352-njjrt"] Mar 09 18:38:05 crc kubenswrapper[4821]: I0309 18:38:05.911007 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29551352-njjrt"] Mar 09 18:38:07 crc kubenswrapper[4821]: I0309 18:38:07.558912 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a893b80-b63c-4639-ab4f-974bc226128a" path="/var/lib/kubelet/pods/8a893b80-b63c-4639-ab4f-974bc226128a/volumes" Mar 09 18:38:22 crc kubenswrapper[4821]: I0309 18:38:22.104180 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-858bc4f469-wp8gj" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.532399 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-pxc5m"] Mar 09 18:38:23 crc kubenswrapper[4821]: E0309 18:38:23.533049 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53bcb16a-e06a-4552-aa19-dca354931cee" containerName="oc" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.533068 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="53bcb16a-e06a-4552-aa19-dca354931cee" containerName="oc" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.533193 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="53bcb16a-e06a-4552-aa19-dca354931cee" containerName="oc" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.535629 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-pxc5m" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.537572 4821 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-qgvvt" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.538041 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.538191 4821 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.547259 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-7q47l"] Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.548019 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-7q47l" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.549726 4821 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.576786 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64621269-3b51-4cc2-89c8-0fd5ad067fd7-cert\") pod \"frr-k8s-webhook-server-7f989f654f-7q47l\" (UID: \"64621269-3b51-4cc2-89c8-0fd5ad067fd7\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-7q47l" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.576865 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ea8f9c80-04cb-455e-a2fc-2ed5b028a79c-reloader\") pod \"frr-k8s-pxc5m\" (UID: \"ea8f9c80-04cb-455e-a2fc-2ed5b028a79c\") " pod="metallb-system/frr-k8s-pxc5m" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.576883 4821 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxsbl\" (UniqueName: \"kubernetes.io/projected/64621269-3b51-4cc2-89c8-0fd5ad067fd7-kube-api-access-rxsbl\") pod \"frr-k8s-webhook-server-7f989f654f-7q47l\" (UID: \"64621269-3b51-4cc2-89c8-0fd5ad067fd7\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-7q47l" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.576906 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ea8f9c80-04cb-455e-a2fc-2ed5b028a79c-metrics\") pod \"frr-k8s-pxc5m\" (UID: \"ea8f9c80-04cb-455e-a2fc-2ed5b028a79c\") " pod="metallb-system/frr-k8s-pxc5m" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.576923 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8gdb\" (UniqueName: \"kubernetes.io/projected/ea8f9c80-04cb-455e-a2fc-2ed5b028a79c-kube-api-access-r8gdb\") pod \"frr-k8s-pxc5m\" (UID: \"ea8f9c80-04cb-455e-a2fc-2ed5b028a79c\") " pod="metallb-system/frr-k8s-pxc5m" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.576973 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ea8f9c80-04cb-455e-a2fc-2ed5b028a79c-frr-sockets\") pod \"frr-k8s-pxc5m\" (UID: \"ea8f9c80-04cb-455e-a2fc-2ed5b028a79c\") " pod="metallb-system/frr-k8s-pxc5m" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.576987 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea8f9c80-04cb-455e-a2fc-2ed5b028a79c-metrics-certs\") pod \"frr-k8s-pxc5m\" (UID: \"ea8f9c80-04cb-455e-a2fc-2ed5b028a79c\") " pod="metallb-system/frr-k8s-pxc5m" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.577012 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ea8f9c80-04cb-455e-a2fc-2ed5b028a79c-frr-conf\") pod \"frr-k8s-pxc5m\" (UID: \"ea8f9c80-04cb-455e-a2fc-2ed5b028a79c\") " pod="metallb-system/frr-k8s-pxc5m" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.577035 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ea8f9c80-04cb-455e-a2fc-2ed5b028a79c-frr-startup\") pod \"frr-k8s-pxc5m\" (UID: \"ea8f9c80-04cb-455e-a2fc-2ed5b028a79c\") " pod="metallb-system/frr-k8s-pxc5m" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.579800 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-7q47l"] Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.653914 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-5sdkw"] Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.657108 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-5sdkw" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.660188 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-86ddb6bd46-jfl8k"] Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.661075 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-86ddb6bd46-jfl8k" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.663989 4821 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-2rxfv" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.664202 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.664363 4821 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.666083 4821 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.666368 4821 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.679820 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ea8f9c80-04cb-455e-a2fc-2ed5b028a79c-frr-conf\") pod \"frr-k8s-pxc5m\" (UID: \"ea8f9c80-04cb-455e-a2fc-2ed5b028a79c\") " pod="metallb-system/frr-k8s-pxc5m" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.679869 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b43e0c69-2c5a-4562-8ab9-d4a8d6e5404b-metrics-certs\") pod \"controller-86ddb6bd46-jfl8k\" (UID: \"b43e0c69-2c5a-4562-8ab9-d4a8d6e5404b\") " pod="metallb-system/controller-86ddb6bd46-jfl8k" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.679895 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ea8f9c80-04cb-455e-a2fc-2ed5b028a79c-frr-startup\") pod \"frr-k8s-pxc5m\" (UID: 
\"ea8f9c80-04cb-455e-a2fc-2ed5b028a79c\") " pod="metallb-system/frr-k8s-pxc5m" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.679915 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cd899ccb-4a21-4e1f-93a3-39451435e6f8-memberlist\") pod \"speaker-5sdkw\" (UID: \"cd899ccb-4a21-4e1f-93a3-39451435e6f8\") " pod="metallb-system/speaker-5sdkw" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.679940 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw2f2\" (UniqueName: \"kubernetes.io/projected/cd899ccb-4a21-4e1f-93a3-39451435e6f8-kube-api-access-rw2f2\") pod \"speaker-5sdkw\" (UID: \"cd899ccb-4a21-4e1f-93a3-39451435e6f8\") " pod="metallb-system/speaker-5sdkw" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.679969 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t24vn\" (UniqueName: \"kubernetes.io/projected/b43e0c69-2c5a-4562-8ab9-d4a8d6e5404b-kube-api-access-t24vn\") pod \"controller-86ddb6bd46-jfl8k\" (UID: \"b43e0c69-2c5a-4562-8ab9-d4a8d6e5404b\") " pod="metallb-system/controller-86ddb6bd46-jfl8k" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.679987 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64621269-3b51-4cc2-89c8-0fd5ad067fd7-cert\") pod \"frr-k8s-webhook-server-7f989f654f-7q47l\" (UID: \"64621269-3b51-4cc2-89c8-0fd5ad067fd7\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-7q47l" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.680013 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ea8f9c80-04cb-455e-a2fc-2ed5b028a79c-reloader\") pod \"frr-k8s-pxc5m\" (UID: \"ea8f9c80-04cb-455e-a2fc-2ed5b028a79c\") " 
pod="metallb-system/frr-k8s-pxc5m" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.680030 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxsbl\" (UniqueName: \"kubernetes.io/projected/64621269-3b51-4cc2-89c8-0fd5ad067fd7-kube-api-access-rxsbl\") pod \"frr-k8s-webhook-server-7f989f654f-7q47l\" (UID: \"64621269-3b51-4cc2-89c8-0fd5ad067fd7\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-7q47l" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.680051 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ea8f9c80-04cb-455e-a2fc-2ed5b028a79c-metrics\") pod \"frr-k8s-pxc5m\" (UID: \"ea8f9c80-04cb-455e-a2fc-2ed5b028a79c\") " pod="metallb-system/frr-k8s-pxc5m" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.680068 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8gdb\" (UniqueName: \"kubernetes.io/projected/ea8f9c80-04cb-455e-a2fc-2ed5b028a79c-kube-api-access-r8gdb\") pod \"frr-k8s-pxc5m\" (UID: \"ea8f9c80-04cb-455e-a2fc-2ed5b028a79c\") " pod="metallb-system/frr-k8s-pxc5m" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.680091 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b43e0c69-2c5a-4562-8ab9-d4a8d6e5404b-cert\") pod \"controller-86ddb6bd46-jfl8k\" (UID: \"b43e0c69-2c5a-4562-8ab9-d4a8d6e5404b\") " pod="metallb-system/controller-86ddb6bd46-jfl8k" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.680105 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/cd899ccb-4a21-4e1f-93a3-39451435e6f8-metallb-excludel2\") pod \"speaker-5sdkw\" (UID: \"cd899ccb-4a21-4e1f-93a3-39451435e6f8\") " pod="metallb-system/speaker-5sdkw" Mar 09 18:38:23 crc 
kubenswrapper[4821]: I0309 18:38:23.680124 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd899ccb-4a21-4e1f-93a3-39451435e6f8-metrics-certs\") pod \"speaker-5sdkw\" (UID: \"cd899ccb-4a21-4e1f-93a3-39451435e6f8\") " pod="metallb-system/speaker-5sdkw" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.680144 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ea8f9c80-04cb-455e-a2fc-2ed5b028a79c-frr-sockets\") pod \"frr-k8s-pxc5m\" (UID: \"ea8f9c80-04cb-455e-a2fc-2ed5b028a79c\") " pod="metallb-system/frr-k8s-pxc5m" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.680159 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea8f9c80-04cb-455e-a2fc-2ed5b028a79c-metrics-certs\") pod \"frr-k8s-pxc5m\" (UID: \"ea8f9c80-04cb-455e-a2fc-2ed5b028a79c\") " pod="metallb-system/frr-k8s-pxc5m" Mar 09 18:38:23 crc kubenswrapper[4821]: E0309 18:38:23.680267 4821 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Mar 09 18:38:23 crc kubenswrapper[4821]: E0309 18:38:23.680329 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea8f9c80-04cb-455e-a2fc-2ed5b028a79c-metrics-certs podName:ea8f9c80-04cb-455e-a2fc-2ed5b028a79c nodeName:}" failed. No retries permitted until 2026-03-09 18:38:24.180299684 +0000 UTC m=+841.341675540 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ea8f9c80-04cb-455e-a2fc-2ed5b028a79c-metrics-certs") pod "frr-k8s-pxc5m" (UID: "ea8f9c80-04cb-455e-a2fc-2ed5b028a79c") : secret "frr-k8s-certs-secret" not found Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.680469 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ea8f9c80-04cb-455e-a2fc-2ed5b028a79c-frr-conf\") pod \"frr-k8s-pxc5m\" (UID: \"ea8f9c80-04cb-455e-a2fc-2ed5b028a79c\") " pod="metallb-system/frr-k8s-pxc5m" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.681036 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ea8f9c80-04cb-455e-a2fc-2ed5b028a79c-frr-startup\") pod \"frr-k8s-pxc5m\" (UID: \"ea8f9c80-04cb-455e-a2fc-2ed5b028a79c\") " pod="metallb-system/frr-k8s-pxc5m" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.682259 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ea8f9c80-04cb-455e-a2fc-2ed5b028a79c-metrics\") pod \"frr-k8s-pxc5m\" (UID: \"ea8f9c80-04cb-455e-a2fc-2ed5b028a79c\") " pod="metallb-system/frr-k8s-pxc5m" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.682520 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ea8f9c80-04cb-455e-a2fc-2ed5b028a79c-frr-sockets\") pod \"frr-k8s-pxc5m\" (UID: \"ea8f9c80-04cb-455e-a2fc-2ed5b028a79c\") " pod="metallb-system/frr-k8s-pxc5m" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.684834 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ea8f9c80-04cb-455e-a2fc-2ed5b028a79c-reloader\") pod \"frr-k8s-pxc5m\" (UID: \"ea8f9c80-04cb-455e-a2fc-2ed5b028a79c\") " pod="metallb-system/frr-k8s-pxc5m" Mar 09 
18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.691691 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-jfl8k"] Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.704445 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64621269-3b51-4cc2-89c8-0fd5ad067fd7-cert\") pod \"frr-k8s-webhook-server-7f989f654f-7q47l\" (UID: \"64621269-3b51-4cc2-89c8-0fd5ad067fd7\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-7q47l" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.708598 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxsbl\" (UniqueName: \"kubernetes.io/projected/64621269-3b51-4cc2-89c8-0fd5ad067fd7-kube-api-access-rxsbl\") pod \"frr-k8s-webhook-server-7f989f654f-7q47l\" (UID: \"64621269-3b51-4cc2-89c8-0fd5ad067fd7\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-7q47l" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.712999 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8gdb\" (UniqueName: \"kubernetes.io/projected/ea8f9c80-04cb-455e-a2fc-2ed5b028a79c-kube-api-access-r8gdb\") pod \"frr-k8s-pxc5m\" (UID: \"ea8f9c80-04cb-455e-a2fc-2ed5b028a79c\") " pod="metallb-system/frr-k8s-pxc5m" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.780947 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b43e0c69-2c5a-4562-8ab9-d4a8d6e5404b-cert\") pod \"controller-86ddb6bd46-jfl8k\" (UID: \"b43e0c69-2c5a-4562-8ab9-d4a8d6e5404b\") " pod="metallb-system/controller-86ddb6bd46-jfl8k" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.780986 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/cd899ccb-4a21-4e1f-93a3-39451435e6f8-metallb-excludel2\") pod \"speaker-5sdkw\" (UID: 
\"cd899ccb-4a21-4e1f-93a3-39451435e6f8\") " pod="metallb-system/speaker-5sdkw" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.781011 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd899ccb-4a21-4e1f-93a3-39451435e6f8-metrics-certs\") pod \"speaker-5sdkw\" (UID: \"cd899ccb-4a21-4e1f-93a3-39451435e6f8\") " pod="metallb-system/speaker-5sdkw" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.781073 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b43e0c69-2c5a-4562-8ab9-d4a8d6e5404b-metrics-certs\") pod \"controller-86ddb6bd46-jfl8k\" (UID: \"b43e0c69-2c5a-4562-8ab9-d4a8d6e5404b\") " pod="metallb-system/controller-86ddb6bd46-jfl8k" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.781092 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cd899ccb-4a21-4e1f-93a3-39451435e6f8-memberlist\") pod \"speaker-5sdkw\" (UID: \"cd899ccb-4a21-4e1f-93a3-39451435e6f8\") " pod="metallb-system/speaker-5sdkw" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.781114 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw2f2\" (UniqueName: \"kubernetes.io/projected/cd899ccb-4a21-4e1f-93a3-39451435e6f8-kube-api-access-rw2f2\") pod \"speaker-5sdkw\" (UID: \"cd899ccb-4a21-4e1f-93a3-39451435e6f8\") " pod="metallb-system/speaker-5sdkw" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.781140 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t24vn\" (UniqueName: \"kubernetes.io/projected/b43e0c69-2c5a-4562-8ab9-d4a8d6e5404b-kube-api-access-t24vn\") pod \"controller-86ddb6bd46-jfl8k\" (UID: \"b43e0c69-2c5a-4562-8ab9-d4a8d6e5404b\") " pod="metallb-system/controller-86ddb6bd46-jfl8k" Mar 09 18:38:23 crc 
kubenswrapper[4821]: E0309 18:38:23.781577 4821 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Mar 09 18:38:23 crc kubenswrapper[4821]: E0309 18:38:23.781687 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd899ccb-4a21-4e1f-93a3-39451435e6f8-metrics-certs podName:cd899ccb-4a21-4e1f-93a3-39451435e6f8 nodeName:}" failed. No retries permitted until 2026-03-09 18:38:24.281670832 +0000 UTC m=+841.443046688 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cd899ccb-4a21-4e1f-93a3-39451435e6f8-metrics-certs") pod "speaker-5sdkw" (UID: "cd899ccb-4a21-4e1f-93a3-39451435e6f8") : secret "speaker-certs-secret" not found Mar 09 18:38:23 crc kubenswrapper[4821]: E0309 18:38:23.781729 4821 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Mar 09 18:38:23 crc kubenswrapper[4821]: E0309 18:38:23.781832 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b43e0c69-2c5a-4562-8ab9-d4a8d6e5404b-metrics-certs podName:b43e0c69-2c5a-4562-8ab9-d4a8d6e5404b nodeName:}" failed. No retries permitted until 2026-03-09 18:38:24.281822636 +0000 UTC m=+841.443198492 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b43e0c69-2c5a-4562-8ab9-d4a8d6e5404b-metrics-certs") pod "controller-86ddb6bd46-jfl8k" (UID: "b43e0c69-2c5a-4562-8ab9-d4a8d6e5404b") : secret "controller-certs-secret" not found Mar 09 18:38:23 crc kubenswrapper[4821]: E0309 18:38:23.781765 4821 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 09 18:38:23 crc kubenswrapper[4821]: E0309 18:38:23.781962 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd899ccb-4a21-4e1f-93a3-39451435e6f8-memberlist podName:cd899ccb-4a21-4e1f-93a3-39451435e6f8 nodeName:}" failed. No retries permitted until 2026-03-09 18:38:24.2819554 +0000 UTC m=+841.443331256 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/cd899ccb-4a21-4e1f-93a3-39451435e6f8-memberlist") pod "speaker-5sdkw" (UID: "cd899ccb-4a21-4e1f-93a3-39451435e6f8") : secret "metallb-memberlist" not found Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.782491 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/cd899ccb-4a21-4e1f-93a3-39451435e6f8-metallb-excludel2\") pod \"speaker-5sdkw\" (UID: \"cd899ccb-4a21-4e1f-93a3-39451435e6f8\") " pod="metallb-system/speaker-5sdkw" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.783373 4821 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.797123 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b43e0c69-2c5a-4562-8ab9-d4a8d6e5404b-cert\") pod \"controller-86ddb6bd46-jfl8k\" (UID: \"b43e0c69-2c5a-4562-8ab9-d4a8d6e5404b\") " pod="metallb-system/controller-86ddb6bd46-jfl8k" Mar 09 18:38:23 crc 
kubenswrapper[4821]: I0309 18:38:23.797619 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t24vn\" (UniqueName: \"kubernetes.io/projected/b43e0c69-2c5a-4562-8ab9-d4a8d6e5404b-kube-api-access-t24vn\") pod \"controller-86ddb6bd46-jfl8k\" (UID: \"b43e0c69-2c5a-4562-8ab9-d4a8d6e5404b\") " pod="metallb-system/controller-86ddb6bd46-jfl8k" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.799388 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw2f2\" (UniqueName: \"kubernetes.io/projected/cd899ccb-4a21-4e1f-93a3-39451435e6f8-kube-api-access-rw2f2\") pod \"speaker-5sdkw\" (UID: \"cd899ccb-4a21-4e1f-93a3-39451435e6f8\") " pod="metallb-system/speaker-5sdkw" Mar 09 18:38:23 crc kubenswrapper[4821]: I0309 18:38:23.909864 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-7q47l" Mar 09 18:38:24 crc kubenswrapper[4821]: I0309 18:38:24.188108 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea8f9c80-04cb-455e-a2fc-2ed5b028a79c-metrics-certs\") pod \"frr-k8s-pxc5m\" (UID: \"ea8f9c80-04cb-455e-a2fc-2ed5b028a79c\") " pod="metallb-system/frr-k8s-pxc5m" Mar 09 18:38:24 crc kubenswrapper[4821]: I0309 18:38:24.192039 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ea8f9c80-04cb-455e-a2fc-2ed5b028a79c-metrics-certs\") pod \"frr-k8s-pxc5m\" (UID: \"ea8f9c80-04cb-455e-a2fc-2ed5b028a79c\") " pod="metallb-system/frr-k8s-pxc5m" Mar 09 18:38:24 crc kubenswrapper[4821]: I0309 18:38:24.202113 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-pxc5m" Mar 09 18:38:24 crc kubenswrapper[4821]: I0309 18:38:24.289558 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd899ccb-4a21-4e1f-93a3-39451435e6f8-metrics-certs\") pod \"speaker-5sdkw\" (UID: \"cd899ccb-4a21-4e1f-93a3-39451435e6f8\") " pod="metallb-system/speaker-5sdkw" Mar 09 18:38:24 crc kubenswrapper[4821]: I0309 18:38:24.289620 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b43e0c69-2c5a-4562-8ab9-d4a8d6e5404b-metrics-certs\") pod \"controller-86ddb6bd46-jfl8k\" (UID: \"b43e0c69-2c5a-4562-8ab9-d4a8d6e5404b\") " pod="metallb-system/controller-86ddb6bd46-jfl8k" Mar 09 18:38:24 crc kubenswrapper[4821]: I0309 18:38:24.289647 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cd899ccb-4a21-4e1f-93a3-39451435e6f8-memberlist\") pod \"speaker-5sdkw\" (UID: \"cd899ccb-4a21-4e1f-93a3-39451435e6f8\") " pod="metallb-system/speaker-5sdkw" Mar 09 18:38:24 crc kubenswrapper[4821]: E0309 18:38:24.289810 4821 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 09 18:38:24 crc kubenswrapper[4821]: E0309 18:38:24.289864 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd899ccb-4a21-4e1f-93a3-39451435e6f8-memberlist podName:cd899ccb-4a21-4e1f-93a3-39451435e6f8 nodeName:}" failed. No retries permitted until 2026-03-09 18:38:25.289847619 +0000 UTC m=+842.451223475 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/cd899ccb-4a21-4e1f-93a3-39451435e6f8-memberlist") pod "speaker-5sdkw" (UID: "cd899ccb-4a21-4e1f-93a3-39451435e6f8") : secret "metallb-memberlist" not found Mar 09 18:38:24 crc kubenswrapper[4821]: I0309 18:38:24.294621 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd899ccb-4a21-4e1f-93a3-39451435e6f8-metrics-certs\") pod \"speaker-5sdkw\" (UID: \"cd899ccb-4a21-4e1f-93a3-39451435e6f8\") " pod="metallb-system/speaker-5sdkw" Mar 09 18:38:24 crc kubenswrapper[4821]: I0309 18:38:24.295793 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b43e0c69-2c5a-4562-8ab9-d4a8d6e5404b-metrics-certs\") pod \"controller-86ddb6bd46-jfl8k\" (UID: \"b43e0c69-2c5a-4562-8ab9-d4a8d6e5404b\") " pod="metallb-system/controller-86ddb6bd46-jfl8k" Mar 09 18:38:24 crc kubenswrapper[4821]: I0309 18:38:24.313270 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-7q47l"] Mar 09 18:38:24 crc kubenswrapper[4821]: W0309 18:38:24.314468 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64621269_3b51_4cc2_89c8_0fd5ad067fd7.slice/crio-1de53a3ba6341e14bb0e32fb9723078ced3e16c422f6cb39e1b4199bc0bd2c51 WatchSource:0}: Error finding container 1de53a3ba6341e14bb0e32fb9723078ced3e16c422f6cb39e1b4199bc0bd2c51: Status 404 returned error can't find the container with id 1de53a3ba6341e14bb0e32fb9723078ced3e16c422f6cb39e1b4199bc0bd2c51 Mar 09 18:38:24 crc kubenswrapper[4821]: I0309 18:38:24.584075 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-86ddb6bd46-jfl8k" Mar 09 18:38:24 crc kubenswrapper[4821]: I0309 18:38:24.819457 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-jfl8k"] Mar 09 18:38:25 crc kubenswrapper[4821]: I0309 18:38:25.003764 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-jfl8k" event={"ID":"b43e0c69-2c5a-4562-8ab9-d4a8d6e5404b","Type":"ContainerStarted","Data":"e32b50ae0510ac49f333b1bbf6bba80e7eeeaef81d70cf56b99ccaa490a8f7a6"} Mar 09 18:38:25 crc kubenswrapper[4821]: I0309 18:38:25.004269 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-jfl8k" event={"ID":"b43e0c69-2c5a-4562-8ab9-d4a8d6e5404b","Type":"ContainerStarted","Data":"0863135b120b184605df4b844da4a775e8b2aecf8c9208f8c7df11c6fb782bff"} Mar 09 18:38:25 crc kubenswrapper[4821]: I0309 18:38:25.005651 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pxc5m" event={"ID":"ea8f9c80-04cb-455e-a2fc-2ed5b028a79c","Type":"ContainerStarted","Data":"3fd30038278bb0f87de5e05c0b6af7985e58c88786d95bc22d77dd8ab721051d"} Mar 09 18:38:25 crc kubenswrapper[4821]: I0309 18:38:25.007332 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-7q47l" event={"ID":"64621269-3b51-4cc2-89c8-0fd5ad067fd7","Type":"ContainerStarted","Data":"1de53a3ba6341e14bb0e32fb9723078ced3e16c422f6cb39e1b4199bc0bd2c51"} Mar 09 18:38:25 crc kubenswrapper[4821]: I0309 18:38:25.305040 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cd899ccb-4a21-4e1f-93a3-39451435e6f8-memberlist\") pod \"speaker-5sdkw\" (UID: \"cd899ccb-4a21-4e1f-93a3-39451435e6f8\") " pod="metallb-system/speaker-5sdkw" Mar 09 18:38:25 crc kubenswrapper[4821]: I0309 18:38:25.323296 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cd899ccb-4a21-4e1f-93a3-39451435e6f8-memberlist\") pod \"speaker-5sdkw\" (UID: \"cd899ccb-4a21-4e1f-93a3-39451435e6f8\") " pod="metallb-system/speaker-5sdkw" Mar 09 18:38:25 crc kubenswrapper[4821]: I0309 18:38:25.474715 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-5sdkw" Mar 09 18:38:25 crc kubenswrapper[4821]: W0309 18:38:25.503936 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd899ccb_4a21_4e1f_93a3_39451435e6f8.slice/crio-0d2ce4106993fe2d8d6ca117bae3893f45091aa3e62c2f57ce50587ac85eb753 WatchSource:0}: Error finding container 0d2ce4106993fe2d8d6ca117bae3893f45091aa3e62c2f57ce50587ac85eb753: Status 404 returned error can't find the container with id 0d2ce4106993fe2d8d6ca117bae3893f45091aa3e62c2f57ce50587ac85eb753 Mar 09 18:38:26 crc kubenswrapper[4821]: I0309 18:38:26.013900 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5sdkw" event={"ID":"cd899ccb-4a21-4e1f-93a3-39451435e6f8","Type":"ContainerStarted","Data":"614df05703854ebf29d0dcaec6057bf5e941b5a5383c8b64f24593ed79a061f8"} Mar 09 18:38:26 crc kubenswrapper[4821]: I0309 18:38:26.014221 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5sdkw" event={"ID":"cd899ccb-4a21-4e1f-93a3-39451435e6f8","Type":"ContainerStarted","Data":"0d2ce4106993fe2d8d6ca117bae3893f45091aa3e62c2f57ce50587ac85eb753"} Mar 09 18:38:26 crc kubenswrapper[4821]: I0309 18:38:26.015731 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-jfl8k" event={"ID":"b43e0c69-2c5a-4562-8ab9-d4a8d6e5404b","Type":"ContainerStarted","Data":"7dfad71d40770fb73c8d12354c4a0de48fd2ca60d23c1f33b55ac609f70807d3"} Mar 09 18:38:26 crc kubenswrapper[4821]: I0309 18:38:26.015875 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/controller-86ddb6bd46-jfl8k" Mar 09 18:38:26 crc kubenswrapper[4821]: I0309 18:38:26.043735 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-86ddb6bd46-jfl8k" podStartSLOduration=3.043714674 podStartE2EDuration="3.043714674s" podCreationTimestamp="2026-03-09 18:38:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:38:26.038993885 +0000 UTC m=+843.200369741" watchObservedRunningTime="2026-03-09 18:38:26.043714674 +0000 UTC m=+843.205090540" Mar 09 18:38:27 crc kubenswrapper[4821]: I0309 18:38:27.025272 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5sdkw" event={"ID":"cd899ccb-4a21-4e1f-93a3-39451435e6f8","Type":"ContainerStarted","Data":"d22a87d8c774e743f6d6577146a18cc30e9a289ad132228a4529fb3869bcf1f8"} Mar 09 18:38:27 crc kubenswrapper[4821]: I0309 18:38:27.025336 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-5sdkw" Mar 09 18:38:27 crc kubenswrapper[4821]: I0309 18:38:27.043637 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-5sdkw" podStartSLOduration=4.043621551 podStartE2EDuration="4.043621551s" podCreationTimestamp="2026-03-09 18:38:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:38:27.042279454 +0000 UTC m=+844.203655310" watchObservedRunningTime="2026-03-09 18:38:27.043621551 +0000 UTC m=+844.204997407" Mar 09 18:38:32 crc kubenswrapper[4821]: I0309 18:38:32.052047 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-7q47l" event={"ID":"64621269-3b51-4cc2-89c8-0fd5ad067fd7","Type":"ContainerStarted","Data":"cc4e94b87979b48ad231b9e67d94158c77ebdd6ed63d66df9f60ba085eab95bc"} Mar 09 18:38:32 
crc kubenswrapper[4821]: I0309 18:38:32.053446 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-7q47l" Mar 09 18:38:32 crc kubenswrapper[4821]: I0309 18:38:32.054960 4821 generic.go:334] "Generic (PLEG): container finished" podID="ea8f9c80-04cb-455e-a2fc-2ed5b028a79c" containerID="565169dcef6a4f14b5daadc8dd493efafc229669118cb632e03960a4e66b7eca" exitCode=0 Mar 09 18:38:32 crc kubenswrapper[4821]: I0309 18:38:32.054988 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pxc5m" event={"ID":"ea8f9c80-04cb-455e-a2fc-2ed5b028a79c","Type":"ContainerDied","Data":"565169dcef6a4f14b5daadc8dd493efafc229669118cb632e03960a4e66b7eca"} Mar 09 18:38:32 crc kubenswrapper[4821]: I0309 18:38:32.076518 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-7q47l" podStartSLOduration=2.253518313 podStartE2EDuration="9.076502152s" podCreationTimestamp="2026-03-09 18:38:23 +0000 UTC" firstStartedPulling="2026-03-09 18:38:24.317050186 +0000 UTC m=+841.478426042" lastFinishedPulling="2026-03-09 18:38:31.140034025 +0000 UTC m=+848.301409881" observedRunningTime="2026-03-09 18:38:32.075115824 +0000 UTC m=+849.236491700" watchObservedRunningTime="2026-03-09 18:38:32.076502152 +0000 UTC m=+849.237877998" Mar 09 18:38:33 crc kubenswrapper[4821]: I0309 18:38:33.062602 4821 generic.go:334] "Generic (PLEG): container finished" podID="ea8f9c80-04cb-455e-a2fc-2ed5b028a79c" containerID="57c6e9b41be122092b1dfa290c2816aa43f94aa37fd8ce178132a00965838ae3" exitCode=0 Mar 09 18:38:33 crc kubenswrapper[4821]: I0309 18:38:33.062650 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pxc5m" event={"ID":"ea8f9c80-04cb-455e-a2fc-2ed5b028a79c","Type":"ContainerDied","Data":"57c6e9b41be122092b1dfa290c2816aa43f94aa37fd8ce178132a00965838ae3"} Mar 09 18:38:34 crc kubenswrapper[4821]: I0309 18:38:34.073228 4821 
generic.go:334] "Generic (PLEG): container finished" podID="ea8f9c80-04cb-455e-a2fc-2ed5b028a79c" containerID="2020e4d395cd0a0eb3eadab5321cbaa84df0ccbcd31aeb6badd665acaf280d4d" exitCode=0 Mar 09 18:38:34 crc kubenswrapper[4821]: I0309 18:38:34.074889 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pxc5m" event={"ID":"ea8f9c80-04cb-455e-a2fc-2ed5b028a79c","Type":"ContainerDied","Data":"2020e4d395cd0a0eb3eadab5321cbaa84df0ccbcd31aeb6badd665acaf280d4d"} Mar 09 18:38:34 crc kubenswrapper[4821]: I0309 18:38:34.588844 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-86ddb6bd46-jfl8k" Mar 09 18:38:35 crc kubenswrapper[4821]: I0309 18:38:35.084241 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pxc5m" event={"ID":"ea8f9c80-04cb-455e-a2fc-2ed5b028a79c","Type":"ContainerStarted","Data":"2a6a48790a8a5955d4a60fb8caa1c9ab079e6a36e31c213224d0dee061963afb"} Mar 09 18:38:35 crc kubenswrapper[4821]: I0309 18:38:35.084281 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pxc5m" event={"ID":"ea8f9c80-04cb-455e-a2fc-2ed5b028a79c","Type":"ContainerStarted","Data":"20e96c90d1c3eaece60be97a0456dfd596cdead92af3869e716e6c1c84e0b504"} Mar 09 18:38:35 crc kubenswrapper[4821]: I0309 18:38:35.084292 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pxc5m" event={"ID":"ea8f9c80-04cb-455e-a2fc-2ed5b028a79c","Type":"ContainerStarted","Data":"ec4ef1a0425c1f90f4562dea1f257d4fd355217b5900dcede60fd0acbeec2169"} Mar 09 18:38:35 crc kubenswrapper[4821]: I0309 18:38:35.084302 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pxc5m" event={"ID":"ea8f9c80-04cb-455e-a2fc-2ed5b028a79c","Type":"ContainerStarted","Data":"3bc18a49d4499985288444605d8442d8c95ce95a389253896addc22c3805df96"} Mar 09 18:38:35 crc kubenswrapper[4821]: I0309 18:38:35.479456 4821 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="metallb-system/speaker-5sdkw" Mar 09 18:38:36 crc kubenswrapper[4821]: I0309 18:38:36.093882 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pxc5m" event={"ID":"ea8f9c80-04cb-455e-a2fc-2ed5b028a79c","Type":"ContainerStarted","Data":"ff7ec55f07ac6f4e7de1adf7d90f7c26780c03b536e8d3859977848c1ce26645"} Mar 09 18:38:36 crc kubenswrapper[4821]: I0309 18:38:36.093936 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pxc5m" event={"ID":"ea8f9c80-04cb-455e-a2fc-2ed5b028a79c","Type":"ContainerStarted","Data":"2f2661275995f4cfda89aa8f9f9fd87e686d52dd7b193cc97bde6f29b22407f8"} Mar 09 18:38:36 crc kubenswrapper[4821]: I0309 18:38:36.095133 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-pxc5m" Mar 09 18:38:36 crc kubenswrapper[4821]: I0309 18:38:36.122715 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-pxc5m" podStartSLOduration=6.3185774089999995 podStartE2EDuration="13.122691937s" podCreationTimestamp="2026-03-09 18:38:23 +0000 UTC" firstStartedPulling="2026-03-09 18:38:24.342385603 +0000 UTC m=+841.503761459" lastFinishedPulling="2026-03-09 18:38:31.146500131 +0000 UTC m=+848.307875987" observedRunningTime="2026-03-09 18:38:36.120143838 +0000 UTC m=+853.281519704" watchObservedRunningTime="2026-03-09 18:38:36.122691937 +0000 UTC m=+853.284067813" Mar 09 18:38:36 crc kubenswrapper[4821]: I0309 18:38:36.812583 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4"] Mar 09 18:38:36 crc kubenswrapper[4821]: I0309 18:38:36.813956 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4" Mar 09 18:38:36 crc kubenswrapper[4821]: I0309 18:38:36.815634 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 09 18:38:36 crc kubenswrapper[4821]: I0309 18:38:36.820777 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4"] Mar 09 18:38:37 crc kubenswrapper[4821]: I0309 18:38:37.005040 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs6hf\" (UniqueName: \"kubernetes.io/projected/332c9a2e-4daa-4bc4-8020-1938abeccb55-kube-api-access-rs6hf\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4\" (UID: \"332c9a2e-4daa-4bc4-8020-1938abeccb55\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4" Mar 09 18:38:37 crc kubenswrapper[4821]: I0309 18:38:37.005142 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/332c9a2e-4daa-4bc4-8020-1938abeccb55-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4\" (UID: \"332c9a2e-4daa-4bc4-8020-1938abeccb55\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4" Mar 09 18:38:37 crc kubenswrapper[4821]: I0309 18:38:37.005278 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/332c9a2e-4daa-4bc4-8020-1938abeccb55-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4\" (UID: \"332c9a2e-4daa-4bc4-8020-1938abeccb55\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4" Mar 09 18:38:37 crc kubenswrapper[4821]: 
I0309 18:38:37.106364 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs6hf\" (UniqueName: \"kubernetes.io/projected/332c9a2e-4daa-4bc4-8020-1938abeccb55-kube-api-access-rs6hf\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4\" (UID: \"332c9a2e-4daa-4bc4-8020-1938abeccb55\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4" Mar 09 18:38:37 crc kubenswrapper[4821]: I0309 18:38:37.106409 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/332c9a2e-4daa-4bc4-8020-1938abeccb55-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4\" (UID: \"332c9a2e-4daa-4bc4-8020-1938abeccb55\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4" Mar 09 18:38:37 crc kubenswrapper[4821]: I0309 18:38:37.106444 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/332c9a2e-4daa-4bc4-8020-1938abeccb55-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4\" (UID: \"332c9a2e-4daa-4bc4-8020-1938abeccb55\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4" Mar 09 18:38:37 crc kubenswrapper[4821]: I0309 18:38:37.107071 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/332c9a2e-4daa-4bc4-8020-1938abeccb55-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4\" (UID: \"332c9a2e-4daa-4bc4-8020-1938abeccb55\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4" Mar 09 18:38:37 crc kubenswrapper[4821]: I0309 18:38:37.107080 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/332c9a2e-4daa-4bc4-8020-1938abeccb55-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4\" (UID: \"332c9a2e-4daa-4bc4-8020-1938abeccb55\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4" Mar 09 18:38:37 crc kubenswrapper[4821]: I0309 18:38:37.125134 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs6hf\" (UniqueName: \"kubernetes.io/projected/332c9a2e-4daa-4bc4-8020-1938abeccb55-kube-api-access-rs6hf\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4\" (UID: \"332c9a2e-4daa-4bc4-8020-1938abeccb55\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4" Mar 09 18:38:37 crc kubenswrapper[4821]: I0309 18:38:37.133609 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4" Mar 09 18:38:37 crc kubenswrapper[4821]: I0309 18:38:37.437678 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4"] Mar 09 18:38:38 crc kubenswrapper[4821]: I0309 18:38:38.105398 4821 generic.go:334] "Generic (PLEG): container finished" podID="332c9a2e-4daa-4bc4-8020-1938abeccb55" containerID="b8c9cda42ce8b4e1a87e5369813d748908ec5185a3fa0a0e027eb90f681c1340" exitCode=0 Mar 09 18:38:38 crc kubenswrapper[4821]: I0309 18:38:38.105446 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4" event={"ID":"332c9a2e-4daa-4bc4-8020-1938abeccb55","Type":"ContainerDied","Data":"b8c9cda42ce8b4e1a87e5369813d748908ec5185a3fa0a0e027eb90f681c1340"} Mar 09 18:38:38 crc kubenswrapper[4821]: I0309 18:38:38.105667 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4" event={"ID":"332c9a2e-4daa-4bc4-8020-1938abeccb55","Type":"ContainerStarted","Data":"4791bc0fc9dae50b99a18e11cf0a71d944cf73147877149717e83a7203295ca5"} Mar 09 18:38:39 crc kubenswrapper[4821]: I0309 18:38:39.202671 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-pxc5m" Mar 09 18:38:39 crc kubenswrapper[4821]: I0309 18:38:39.237514 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-pxc5m" Mar 09 18:38:40 crc kubenswrapper[4821]: I0309 18:38:40.202175 4821 scope.go:117] "RemoveContainer" containerID="389cace8aa6f9581c4fea45a07227e794aed004e9bf5f478020daa28f9f29b78" Mar 09 18:38:42 crc kubenswrapper[4821]: I0309 18:38:42.136822 4821 generic.go:334] "Generic (PLEG): container finished" podID="332c9a2e-4daa-4bc4-8020-1938abeccb55" containerID="ecdc2bacd4a0eab2160dc65d088b78fc7ac94aab6caf326d0ed9221c60f18c25" exitCode=0 Mar 09 18:38:42 crc kubenswrapper[4821]: I0309 18:38:42.136875 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4" event={"ID":"332c9a2e-4daa-4bc4-8020-1938abeccb55","Type":"ContainerDied","Data":"ecdc2bacd4a0eab2160dc65d088b78fc7ac94aab6caf326d0ed9221c60f18c25"} Mar 09 18:38:43 crc kubenswrapper[4821]: I0309 18:38:43.144035 4821 generic.go:334] "Generic (PLEG): container finished" podID="332c9a2e-4daa-4bc4-8020-1938abeccb55" containerID="03d110b2c0b0f2cd9bb727347041eb5e48f708d6accef5a192d7d864aec9c784" exitCode=0 Mar 09 18:38:43 crc kubenswrapper[4821]: I0309 18:38:43.144078 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4" event={"ID":"332c9a2e-4daa-4bc4-8020-1938abeccb55","Type":"ContainerDied","Data":"03d110b2c0b0f2cd9bb727347041eb5e48f708d6accef5a192d7d864aec9c784"} 
Mar 09 18:38:43 crc kubenswrapper[4821]: I0309 18:38:43.920990 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-7q47l" Mar 09 18:38:44 crc kubenswrapper[4821]: I0309 18:38:44.208118 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-pxc5m" Mar 09 18:38:44 crc kubenswrapper[4821]: I0309 18:38:44.436021 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4" Mar 09 18:38:44 crc kubenswrapper[4821]: I0309 18:38:44.514941 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs6hf\" (UniqueName: \"kubernetes.io/projected/332c9a2e-4daa-4bc4-8020-1938abeccb55-kube-api-access-rs6hf\") pod \"332c9a2e-4daa-4bc4-8020-1938abeccb55\" (UID: \"332c9a2e-4daa-4bc4-8020-1938abeccb55\") " Mar 09 18:38:44 crc kubenswrapper[4821]: I0309 18:38:44.515076 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/332c9a2e-4daa-4bc4-8020-1938abeccb55-util\") pod \"332c9a2e-4daa-4bc4-8020-1938abeccb55\" (UID: \"332c9a2e-4daa-4bc4-8020-1938abeccb55\") " Mar 09 18:38:44 crc kubenswrapper[4821]: I0309 18:38:44.515142 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/332c9a2e-4daa-4bc4-8020-1938abeccb55-bundle\") pod \"332c9a2e-4daa-4bc4-8020-1938abeccb55\" (UID: \"332c9a2e-4daa-4bc4-8020-1938abeccb55\") " Mar 09 18:38:44 crc kubenswrapper[4821]: I0309 18:38:44.516007 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/332c9a2e-4daa-4bc4-8020-1938abeccb55-bundle" (OuterVolumeSpecName: "bundle") pod "332c9a2e-4daa-4bc4-8020-1938abeccb55" (UID: "332c9a2e-4daa-4bc4-8020-1938abeccb55"). 
InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:38:44 crc kubenswrapper[4821]: I0309 18:38:44.519794 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/332c9a2e-4daa-4bc4-8020-1938abeccb55-kube-api-access-rs6hf" (OuterVolumeSpecName: "kube-api-access-rs6hf") pod "332c9a2e-4daa-4bc4-8020-1938abeccb55" (UID: "332c9a2e-4daa-4bc4-8020-1938abeccb55"). InnerVolumeSpecName "kube-api-access-rs6hf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:38:44 crc kubenswrapper[4821]: I0309 18:38:44.529293 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/332c9a2e-4daa-4bc4-8020-1938abeccb55-util" (OuterVolumeSpecName: "util") pod "332c9a2e-4daa-4bc4-8020-1938abeccb55" (UID: "332c9a2e-4daa-4bc4-8020-1938abeccb55"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:38:44 crc kubenswrapper[4821]: I0309 18:38:44.617675 4821 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/332c9a2e-4daa-4bc4-8020-1938abeccb55-util\") on node \"crc\" DevicePath \"\"" Mar 09 18:38:44 crc kubenswrapper[4821]: I0309 18:38:44.618026 4821 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/332c9a2e-4daa-4bc4-8020-1938abeccb55-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 18:38:44 crc kubenswrapper[4821]: I0309 18:38:44.618211 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rs6hf\" (UniqueName: \"kubernetes.io/projected/332c9a2e-4daa-4bc4-8020-1938abeccb55-kube-api-access-rs6hf\") on node \"crc\" DevicePath \"\"" Mar 09 18:38:45 crc kubenswrapper[4821]: I0309 18:38:45.166839 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4" 
event={"ID":"332c9a2e-4daa-4bc4-8020-1938abeccb55","Type":"ContainerDied","Data":"4791bc0fc9dae50b99a18e11cf0a71d944cf73147877149717e83a7203295ca5"} Mar 09 18:38:45 crc kubenswrapper[4821]: I0309 18:38:45.167192 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4791bc0fc9dae50b99a18e11cf0a71d944cf73147877149717e83a7203295ca5" Mar 09 18:38:45 crc kubenswrapper[4821]: I0309 18:38:45.166976 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4" Mar 09 18:38:49 crc kubenswrapper[4821]: I0309 18:38:49.916989 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-fzzrb"] Mar 09 18:38:49 crc kubenswrapper[4821]: E0309 18:38:49.917528 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="332c9a2e-4daa-4bc4-8020-1938abeccb55" containerName="extract" Mar 09 18:38:49 crc kubenswrapper[4821]: I0309 18:38:49.917541 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="332c9a2e-4daa-4bc4-8020-1938abeccb55" containerName="extract" Mar 09 18:38:49 crc kubenswrapper[4821]: E0309 18:38:49.917553 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="332c9a2e-4daa-4bc4-8020-1938abeccb55" containerName="pull" Mar 09 18:38:49 crc kubenswrapper[4821]: I0309 18:38:49.917559 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="332c9a2e-4daa-4bc4-8020-1938abeccb55" containerName="pull" Mar 09 18:38:49 crc kubenswrapper[4821]: E0309 18:38:49.917565 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="332c9a2e-4daa-4bc4-8020-1938abeccb55" containerName="util" Mar 09 18:38:49 crc kubenswrapper[4821]: I0309 18:38:49.917572 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="332c9a2e-4daa-4bc4-8020-1938abeccb55" containerName="util" Mar 09 18:38:49 crc kubenswrapper[4821]: I0309 18:38:49.917698 
4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="332c9a2e-4daa-4bc4-8020-1938abeccb55" containerName="extract" Mar 09 18:38:49 crc kubenswrapper[4821]: I0309 18:38:49.918141 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-fzzrb" Mar 09 18:38:49 crc kubenswrapper[4821]: I0309 18:38:49.921163 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Mar 09 18:38:49 crc kubenswrapper[4821]: I0309 18:38:49.921171 4821 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-k856p" Mar 09 18:38:49 crc kubenswrapper[4821]: I0309 18:38:49.921559 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Mar 09 18:38:49 crc kubenswrapper[4821]: I0309 18:38:49.936003 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-fzzrb"] Mar 09 18:38:49 crc kubenswrapper[4821]: I0309 18:38:49.993287 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ce95d0cc-b615-4635-ba32-5a652793187b-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-fzzrb\" (UID: \"ce95d0cc-b615-4635-ba32-5a652793187b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-fzzrb" Mar 09 18:38:49 crc kubenswrapper[4821]: I0309 18:38:49.993450 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6q22\" (UniqueName: \"kubernetes.io/projected/ce95d0cc-b615-4635-ba32-5a652793187b-kube-api-access-z6q22\") pod \"cert-manager-operator-controller-manager-66c8bdd694-fzzrb\" (UID: \"ce95d0cc-b615-4635-ba32-5a652793187b\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-fzzrb" Mar 09 18:38:50 crc kubenswrapper[4821]: I0309 18:38:50.095165 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ce95d0cc-b615-4635-ba32-5a652793187b-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-fzzrb\" (UID: \"ce95d0cc-b615-4635-ba32-5a652793187b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-fzzrb" Mar 09 18:38:50 crc kubenswrapper[4821]: I0309 18:38:50.095237 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6q22\" (UniqueName: \"kubernetes.io/projected/ce95d0cc-b615-4635-ba32-5a652793187b-kube-api-access-z6q22\") pod \"cert-manager-operator-controller-manager-66c8bdd694-fzzrb\" (UID: \"ce95d0cc-b615-4635-ba32-5a652793187b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-fzzrb" Mar 09 18:38:50 crc kubenswrapper[4821]: I0309 18:38:50.095915 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ce95d0cc-b615-4635-ba32-5a652793187b-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-fzzrb\" (UID: \"ce95d0cc-b615-4635-ba32-5a652793187b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-fzzrb" Mar 09 18:38:50 crc kubenswrapper[4821]: I0309 18:38:50.115796 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6q22\" (UniqueName: \"kubernetes.io/projected/ce95d0cc-b615-4635-ba32-5a652793187b-kube-api-access-z6q22\") pod \"cert-manager-operator-controller-manager-66c8bdd694-fzzrb\" (UID: \"ce95d0cc-b615-4635-ba32-5a652793187b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-fzzrb" Mar 09 18:38:50 crc kubenswrapper[4821]: I0309 18:38:50.236731 4821 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-fzzrb" Mar 09 18:38:50 crc kubenswrapper[4821]: I0309 18:38:50.706272 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-fzzrb"] Mar 09 18:38:50 crc kubenswrapper[4821]: W0309 18:38:50.707185 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce95d0cc_b615_4635_ba32_5a652793187b.slice/crio-c5f1b7364003e174d1eba2a2d8d3ae8fc720856bcff52d0d10ed2814067baa87 WatchSource:0}: Error finding container c5f1b7364003e174d1eba2a2d8d3ae8fc720856bcff52d0d10ed2814067baa87: Status 404 returned error can't find the container with id c5f1b7364003e174d1eba2a2d8d3ae8fc720856bcff52d0d10ed2814067baa87 Mar 09 18:38:51 crc kubenswrapper[4821]: I0309 18:38:51.227957 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-fzzrb" event={"ID":"ce95d0cc-b615-4635-ba32-5a652793187b","Type":"ContainerStarted","Data":"c5f1b7364003e174d1eba2a2d8d3ae8fc720856bcff52d0d10ed2814067baa87"} Mar 09 18:38:54 crc kubenswrapper[4821]: I0309 18:38:54.250190 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-fzzrb" event={"ID":"ce95d0cc-b615-4635-ba32-5a652793187b","Type":"ContainerStarted","Data":"f6398e6015e7ebfe4c762143c360e7bd1bd5509afb3c7749798e957e0b5c2aee"} Mar 09 18:38:54 crc kubenswrapper[4821]: I0309 18:38:54.282238 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-fzzrb" podStartSLOduration=2.3126291500000002 podStartE2EDuration="5.282211058s" podCreationTimestamp="2026-03-09 18:38:49 +0000 UTC" firstStartedPulling="2026-03-09 18:38:50.710114336 +0000 UTC 
m=+867.871490192" lastFinishedPulling="2026-03-09 18:38:53.679696244 +0000 UTC m=+870.841072100" observedRunningTime="2026-03-09 18:38:54.276490992 +0000 UTC m=+871.437866878" watchObservedRunningTime="2026-03-09 18:38:54.282211058 +0000 UTC m=+871.443586954" Mar 09 18:38:58 crc kubenswrapper[4821]: I0309 18:38:58.114883 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-8k2s6"] Mar 09 18:38:58 crc kubenswrapper[4821]: I0309 18:38:58.116096 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-8k2s6" Mar 09 18:38:58 crc kubenswrapper[4821]: I0309 18:38:58.128737 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 09 18:38:58 crc kubenswrapper[4821]: I0309 18:38:58.129367 4821 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-6tstw" Mar 09 18:38:58 crc kubenswrapper[4821]: I0309 18:38:58.130766 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 09 18:38:58 crc kubenswrapper[4821]: I0309 18:38:58.134453 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-8k2s6"] Mar 09 18:38:58 crc kubenswrapper[4821]: I0309 18:38:58.210460 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzptd\" (UniqueName: \"kubernetes.io/projected/56627852-72af-4929-a17f-29e6675fdbfc-kube-api-access-vzptd\") pod \"cert-manager-webhook-6888856db4-8k2s6\" (UID: \"56627852-72af-4929-a17f-29e6675fdbfc\") " pod="cert-manager/cert-manager-webhook-6888856db4-8k2s6" Mar 09 18:38:58 crc kubenswrapper[4821]: I0309 18:38:58.210587 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/56627852-72af-4929-a17f-29e6675fdbfc-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-8k2s6\" (UID: \"56627852-72af-4929-a17f-29e6675fdbfc\") " pod="cert-manager/cert-manager-webhook-6888856db4-8k2s6" Mar 09 18:38:58 crc kubenswrapper[4821]: I0309 18:38:58.311943 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/56627852-72af-4929-a17f-29e6675fdbfc-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-8k2s6\" (UID: \"56627852-72af-4929-a17f-29e6675fdbfc\") " pod="cert-manager/cert-manager-webhook-6888856db4-8k2s6" Mar 09 18:38:58 crc kubenswrapper[4821]: I0309 18:38:58.312019 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzptd\" (UniqueName: \"kubernetes.io/projected/56627852-72af-4929-a17f-29e6675fdbfc-kube-api-access-vzptd\") pod \"cert-manager-webhook-6888856db4-8k2s6\" (UID: \"56627852-72af-4929-a17f-29e6675fdbfc\") " pod="cert-manager/cert-manager-webhook-6888856db4-8k2s6" Mar 09 18:38:58 crc kubenswrapper[4821]: I0309 18:38:58.336821 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/56627852-72af-4929-a17f-29e6675fdbfc-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-8k2s6\" (UID: \"56627852-72af-4929-a17f-29e6675fdbfc\") " pod="cert-manager/cert-manager-webhook-6888856db4-8k2s6" Mar 09 18:38:58 crc kubenswrapper[4821]: I0309 18:38:58.347810 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzptd\" (UniqueName: \"kubernetes.io/projected/56627852-72af-4929-a17f-29e6675fdbfc-kube-api-access-vzptd\") pod \"cert-manager-webhook-6888856db4-8k2s6\" (UID: \"56627852-72af-4929-a17f-29e6675fdbfc\") " pod="cert-manager/cert-manager-webhook-6888856db4-8k2s6" Mar 09 18:38:58 crc kubenswrapper[4821]: I0309 18:38:58.431545 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-8k2s6" Mar 09 18:38:58 crc kubenswrapper[4821]: I0309 18:38:58.856116 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-8k2s6"] Mar 09 18:38:59 crc kubenswrapper[4821]: I0309 18:38:59.287300 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-8k2s6" event={"ID":"56627852-72af-4929-a17f-29e6675fdbfc","Type":"ContainerStarted","Data":"f7a8910f6f0b0c658d28bf76ccf2fe48aab64605417f166179a95319590303eb"} Mar 09 18:39:00 crc kubenswrapper[4821]: I0309 18:39:00.641554 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-2t9gn"] Mar 09 18:39:00 crc kubenswrapper[4821]: I0309 18:39:00.642819 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-2t9gn" Mar 09 18:39:00 crc kubenswrapper[4821]: I0309 18:39:00.645621 4821 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-bmgqm" Mar 09 18:39:00 crc kubenswrapper[4821]: I0309 18:39:00.657589 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-2t9gn"] Mar 09 18:39:00 crc kubenswrapper[4821]: I0309 18:39:00.743844 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4wvm\" (UniqueName: \"kubernetes.io/projected/2840cece-7d09-420e-8c47-85417d8032a9-kube-api-access-z4wvm\") pod \"cert-manager-cainjector-5545bd876-2t9gn\" (UID: \"2840cece-7d09-420e-8c47-85417d8032a9\") " pod="cert-manager/cert-manager-cainjector-5545bd876-2t9gn" Mar 09 18:39:00 crc kubenswrapper[4821]: I0309 18:39:00.743916 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/2840cece-7d09-420e-8c47-85417d8032a9-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-2t9gn\" (UID: \"2840cece-7d09-420e-8c47-85417d8032a9\") " pod="cert-manager/cert-manager-cainjector-5545bd876-2t9gn" Mar 09 18:39:00 crc kubenswrapper[4821]: I0309 18:39:00.846076 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2840cece-7d09-420e-8c47-85417d8032a9-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-2t9gn\" (UID: \"2840cece-7d09-420e-8c47-85417d8032a9\") " pod="cert-manager/cert-manager-cainjector-5545bd876-2t9gn" Mar 09 18:39:00 crc kubenswrapper[4821]: I0309 18:39:00.846262 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4wvm\" (UniqueName: \"kubernetes.io/projected/2840cece-7d09-420e-8c47-85417d8032a9-kube-api-access-z4wvm\") pod \"cert-manager-cainjector-5545bd876-2t9gn\" (UID: \"2840cece-7d09-420e-8c47-85417d8032a9\") " pod="cert-manager/cert-manager-cainjector-5545bd876-2t9gn" Mar 09 18:39:00 crc kubenswrapper[4821]: I0309 18:39:00.866167 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2840cece-7d09-420e-8c47-85417d8032a9-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-2t9gn\" (UID: \"2840cece-7d09-420e-8c47-85417d8032a9\") " pod="cert-manager/cert-manager-cainjector-5545bd876-2t9gn" Mar 09 18:39:00 crc kubenswrapper[4821]: I0309 18:39:00.889610 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4wvm\" (UniqueName: \"kubernetes.io/projected/2840cece-7d09-420e-8c47-85417d8032a9-kube-api-access-z4wvm\") pod \"cert-manager-cainjector-5545bd876-2t9gn\" (UID: \"2840cece-7d09-420e-8c47-85417d8032a9\") " pod="cert-manager/cert-manager-cainjector-5545bd876-2t9gn" Mar 09 18:39:00 crc kubenswrapper[4821]: I0309 18:39:00.968540 4821 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-2t9gn" Mar 09 18:39:01 crc kubenswrapper[4821]: W0309 18:39:01.251213 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2840cece_7d09_420e_8c47_85417d8032a9.slice/crio-147c7115693eb33cf2242ead1291687e4754dc2a9e021b326e9249701eb20b9f WatchSource:0}: Error finding container 147c7115693eb33cf2242ead1291687e4754dc2a9e021b326e9249701eb20b9f: Status 404 returned error can't find the container with id 147c7115693eb33cf2242ead1291687e4754dc2a9e021b326e9249701eb20b9f Mar 09 18:39:01 crc kubenswrapper[4821]: I0309 18:39:01.255448 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-2t9gn"] Mar 09 18:39:01 crc kubenswrapper[4821]: I0309 18:39:01.257111 4821 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 09 18:39:01 crc kubenswrapper[4821]: I0309 18:39:01.303191 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-2t9gn" event={"ID":"2840cece-7d09-420e-8c47-85417d8032a9","Type":"ContainerStarted","Data":"147c7115693eb33cf2242ead1291687e4754dc2a9e021b326e9249701eb20b9f"} Mar 09 18:39:04 crc kubenswrapper[4821]: I0309 18:39:04.338788 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-2t9gn" event={"ID":"2840cece-7d09-420e-8c47-85417d8032a9","Type":"ContainerStarted","Data":"a4de448ae7ec27b152ce985032a6a476fe268868974ea67e9f871edd9023e6c9"} Mar 09 18:39:04 crc kubenswrapper[4821]: I0309 18:39:04.341058 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-8k2s6" event={"ID":"56627852-72af-4929-a17f-29e6675fdbfc","Type":"ContainerStarted","Data":"24b860270d739dd376a5fb0009d7e6c9c57c4af94df9aabd248d87de86207c58"} Mar 09 18:39:04 crc 
kubenswrapper[4821]: I0309 18:39:04.341211 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-8k2s6" Mar 09 18:39:04 crc kubenswrapper[4821]: I0309 18:39:04.358220 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-2t9gn" podStartSLOduration=1.942367366 podStartE2EDuration="4.35819906s" podCreationTimestamp="2026-03-09 18:39:00 +0000 UTC" firstStartedPulling="2026-03-09 18:39:01.255999336 +0000 UTC m=+878.417375192" lastFinishedPulling="2026-03-09 18:39:03.67183103 +0000 UTC m=+880.833206886" observedRunningTime="2026-03-09 18:39:04.357859941 +0000 UTC m=+881.519235797" watchObservedRunningTime="2026-03-09 18:39:04.35819906 +0000 UTC m=+881.519574926" Mar 09 18:39:04 crc kubenswrapper[4821]: I0309 18:39:04.384508 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-8k2s6" podStartSLOduration=1.563257154 podStartE2EDuration="6.384483136s" podCreationTimestamp="2026-03-09 18:38:58 +0000 UTC" firstStartedPulling="2026-03-09 18:38:58.865381921 +0000 UTC m=+876.026757817" lastFinishedPulling="2026-03-09 18:39:03.686607943 +0000 UTC m=+880.847983799" observedRunningTime="2026-03-09 18:39:04.379343546 +0000 UTC m=+881.540719412" watchObservedRunningTime="2026-03-09 18:39:04.384483136 +0000 UTC m=+881.545859012" Mar 09 18:39:08 crc kubenswrapper[4821]: I0309 18:39:08.434805 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-8k2s6" Mar 09 18:39:16 crc kubenswrapper[4821]: I0309 18:39:16.049056 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-zmpfr"] Mar 09 18:39:16 crc kubenswrapper[4821]: I0309 18:39:16.050218 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-zmpfr" Mar 09 18:39:16 crc kubenswrapper[4821]: I0309 18:39:16.052684 4821 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-strhd" Mar 09 18:39:16 crc kubenswrapper[4821]: I0309 18:39:16.062777 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-zmpfr"] Mar 09 18:39:16 crc kubenswrapper[4821]: I0309 18:39:16.182648 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c9cd2b98-2171-4c11-abb5-a0e3db0a69d5-bound-sa-token\") pod \"cert-manager-545d4d4674-zmpfr\" (UID: \"c9cd2b98-2171-4c11-abb5-a0e3db0a69d5\") " pod="cert-manager/cert-manager-545d4d4674-zmpfr" Mar 09 18:39:16 crc kubenswrapper[4821]: I0309 18:39:16.182775 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qp56\" (UniqueName: \"kubernetes.io/projected/c9cd2b98-2171-4c11-abb5-a0e3db0a69d5-kube-api-access-7qp56\") pod \"cert-manager-545d4d4674-zmpfr\" (UID: \"c9cd2b98-2171-4c11-abb5-a0e3db0a69d5\") " pod="cert-manager/cert-manager-545d4d4674-zmpfr" Mar 09 18:39:16 crc kubenswrapper[4821]: I0309 18:39:16.283592 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c9cd2b98-2171-4c11-abb5-a0e3db0a69d5-bound-sa-token\") pod \"cert-manager-545d4d4674-zmpfr\" (UID: \"c9cd2b98-2171-4c11-abb5-a0e3db0a69d5\") " pod="cert-manager/cert-manager-545d4d4674-zmpfr" Mar 09 18:39:16 crc kubenswrapper[4821]: I0309 18:39:16.283707 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qp56\" (UniqueName: \"kubernetes.io/projected/c9cd2b98-2171-4c11-abb5-a0e3db0a69d5-kube-api-access-7qp56\") pod \"cert-manager-545d4d4674-zmpfr\" (UID: 
\"c9cd2b98-2171-4c11-abb5-a0e3db0a69d5\") " pod="cert-manager/cert-manager-545d4d4674-zmpfr" Mar 09 18:39:16 crc kubenswrapper[4821]: I0309 18:39:16.301870 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qp56\" (UniqueName: \"kubernetes.io/projected/c9cd2b98-2171-4c11-abb5-a0e3db0a69d5-kube-api-access-7qp56\") pod \"cert-manager-545d4d4674-zmpfr\" (UID: \"c9cd2b98-2171-4c11-abb5-a0e3db0a69d5\") " pod="cert-manager/cert-manager-545d4d4674-zmpfr" Mar 09 18:39:16 crc kubenswrapper[4821]: I0309 18:39:16.302235 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c9cd2b98-2171-4c11-abb5-a0e3db0a69d5-bound-sa-token\") pod \"cert-manager-545d4d4674-zmpfr\" (UID: \"c9cd2b98-2171-4c11-abb5-a0e3db0a69d5\") " pod="cert-manager/cert-manager-545d4d4674-zmpfr" Mar 09 18:39:16 crc kubenswrapper[4821]: I0309 18:39:16.385332 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-zmpfr" Mar 09 18:39:16 crc kubenswrapper[4821]: I0309 18:39:16.796158 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-zmpfr"] Mar 09 18:39:17 crc kubenswrapper[4821]: I0309 18:39:17.458421 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-zmpfr" event={"ID":"c9cd2b98-2171-4c11-abb5-a0e3db0a69d5","Type":"ContainerStarted","Data":"1063f02448670cc5f6a070188e81ae1d037f417949ae317ab03b601c84b442d9"} Mar 09 18:39:17 crc kubenswrapper[4821]: I0309 18:39:17.458810 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-zmpfr" event={"ID":"c9cd2b98-2171-4c11-abb5-a0e3db0a69d5","Type":"ContainerStarted","Data":"c1c0f8a713c2cd79d4f8b49ac3605bb26cf64ff4032dbb4c5e7e0786b6034d48"} Mar 09 18:39:17 crc kubenswrapper[4821]: I0309 18:39:17.489662 4821 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="cert-manager/cert-manager-545d4d4674-zmpfr" podStartSLOduration=1.48963561 podStartE2EDuration="1.48963561s" podCreationTimestamp="2026-03-09 18:39:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:39:17.483080331 +0000 UTC m=+894.644456227" watchObservedRunningTime="2026-03-09 18:39:17.48963561 +0000 UTC m=+894.651011486" Mar 09 18:39:21 crc kubenswrapper[4821]: I0309 18:39:21.685987 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-sslnq"] Mar 09 18:39:21 crc kubenswrapper[4821]: I0309 18:39:21.687243 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sslnq" Mar 09 18:39:21 crc kubenswrapper[4821]: I0309 18:39:21.690038 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-ffjb4" Mar 09 18:39:21 crc kubenswrapper[4821]: I0309 18:39:21.690845 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 09 18:39:21 crc kubenswrapper[4821]: I0309 18:39:21.691560 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 09 18:39:21 crc kubenswrapper[4821]: I0309 18:39:21.736142 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sslnq"] Mar 09 18:39:21 crc kubenswrapper[4821]: I0309 18:39:21.863531 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p5lr\" (UniqueName: \"kubernetes.io/projected/3de41d2d-b4c7-49a4-84d7-630b601a72dd-kube-api-access-4p5lr\") pod \"openstack-operator-index-sslnq\" (UID: \"3de41d2d-b4c7-49a4-84d7-630b601a72dd\") " pod="openstack-operators/openstack-operator-index-sslnq" Mar 09 18:39:21 crc 
kubenswrapper[4821]: I0309 18:39:21.965084 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p5lr\" (UniqueName: \"kubernetes.io/projected/3de41d2d-b4c7-49a4-84d7-630b601a72dd-kube-api-access-4p5lr\") pod \"openstack-operator-index-sslnq\" (UID: \"3de41d2d-b4c7-49a4-84d7-630b601a72dd\") " pod="openstack-operators/openstack-operator-index-sslnq" Mar 09 18:39:21 crc kubenswrapper[4821]: I0309 18:39:21.982833 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p5lr\" (UniqueName: \"kubernetes.io/projected/3de41d2d-b4c7-49a4-84d7-630b601a72dd-kube-api-access-4p5lr\") pod \"openstack-operator-index-sslnq\" (UID: \"3de41d2d-b4c7-49a4-84d7-630b601a72dd\") " pod="openstack-operators/openstack-operator-index-sslnq" Mar 09 18:39:22 crc kubenswrapper[4821]: I0309 18:39:22.033888 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sslnq" Mar 09 18:39:22 crc kubenswrapper[4821]: I0309 18:39:22.243452 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sslnq"] Mar 09 18:39:22 crc kubenswrapper[4821]: I0309 18:39:22.492724 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sslnq" event={"ID":"3de41d2d-b4c7-49a4-84d7-630b601a72dd","Type":"ContainerStarted","Data":"8ab6567bebaab0d4161e395983cca792cd905ed36a4048662f82a1b7cf67da63"} Mar 09 18:39:25 crc kubenswrapper[4821]: I0309 18:39:25.067192 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-sslnq"] Mar 09 18:39:25 crc kubenswrapper[4821]: I0309 18:39:25.519764 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sslnq" 
event={"ID":"3de41d2d-b4c7-49a4-84d7-630b601a72dd","Type":"ContainerStarted","Data":"060460be62db5911e8e2e0fdc523ca903fd3d85cd7e47a6f500cdad79cecc2db"} Mar 09 18:39:25 crc kubenswrapper[4821]: I0309 18:39:25.546847 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-sslnq" podStartSLOduration=2.028215415 podStartE2EDuration="4.54680949s" podCreationTimestamp="2026-03-09 18:39:21 +0000 UTC" firstStartedPulling="2026-03-09 18:39:22.251152034 +0000 UTC m=+899.412527890" lastFinishedPulling="2026-03-09 18:39:24.769746109 +0000 UTC m=+901.931121965" observedRunningTime="2026-03-09 18:39:25.539646606 +0000 UTC m=+902.701022502" watchObservedRunningTime="2026-03-09 18:39:25.54680949 +0000 UTC m=+902.708185386" Mar 09 18:39:25 crc kubenswrapper[4821]: I0309 18:39:25.675613 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-cjjbr"] Mar 09 18:39:25 crc kubenswrapper[4821]: I0309 18:39:25.676598 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-cjjbr" Mar 09 18:39:25 crc kubenswrapper[4821]: I0309 18:39:25.690289 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cjjbr"] Mar 09 18:39:25 crc kubenswrapper[4821]: I0309 18:39:25.727185 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tvbr\" (UniqueName: \"kubernetes.io/projected/43193e76-c853-4bc6-89e4-12ff09c8fbcb-kube-api-access-8tvbr\") pod \"openstack-operator-index-cjjbr\" (UID: \"43193e76-c853-4bc6-89e4-12ff09c8fbcb\") " pod="openstack-operators/openstack-operator-index-cjjbr" Mar 09 18:39:25 crc kubenswrapper[4821]: I0309 18:39:25.828447 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tvbr\" (UniqueName: \"kubernetes.io/projected/43193e76-c853-4bc6-89e4-12ff09c8fbcb-kube-api-access-8tvbr\") pod \"openstack-operator-index-cjjbr\" (UID: \"43193e76-c853-4bc6-89e4-12ff09c8fbcb\") " pod="openstack-operators/openstack-operator-index-cjjbr" Mar 09 18:39:25 crc kubenswrapper[4821]: I0309 18:39:25.851243 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tvbr\" (UniqueName: \"kubernetes.io/projected/43193e76-c853-4bc6-89e4-12ff09c8fbcb-kube-api-access-8tvbr\") pod \"openstack-operator-index-cjjbr\" (UID: \"43193e76-c853-4bc6-89e4-12ff09c8fbcb\") " pod="openstack-operators/openstack-operator-index-cjjbr" Mar 09 18:39:26 crc kubenswrapper[4821]: I0309 18:39:26.008489 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-cjjbr" Mar 09 18:39:26 crc kubenswrapper[4821]: I0309 18:39:26.488548 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cjjbr"] Mar 09 18:39:26 crc kubenswrapper[4821]: I0309 18:39:26.528791 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cjjbr" event={"ID":"43193e76-c853-4bc6-89e4-12ff09c8fbcb","Type":"ContainerStarted","Data":"fd7ddd68d182152b0687050950c3462b5d8ce461a603117ff3baa180cc3d4ba3"} Mar 09 18:39:26 crc kubenswrapper[4821]: I0309 18:39:26.528864 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-sslnq" podUID="3de41d2d-b4c7-49a4-84d7-630b601a72dd" containerName="registry-server" containerID="cri-o://060460be62db5911e8e2e0fdc523ca903fd3d85cd7e47a6f500cdad79cecc2db" gracePeriod=2 Mar 09 18:39:26 crc kubenswrapper[4821]: I0309 18:39:26.921796 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sslnq" Mar 09 18:39:26 crc kubenswrapper[4821]: I0309 18:39:26.969626 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4p5lr\" (UniqueName: \"kubernetes.io/projected/3de41d2d-b4c7-49a4-84d7-630b601a72dd-kube-api-access-4p5lr\") pod \"3de41d2d-b4c7-49a4-84d7-630b601a72dd\" (UID: \"3de41d2d-b4c7-49a4-84d7-630b601a72dd\") " Mar 09 18:39:26 crc kubenswrapper[4821]: I0309 18:39:26.977146 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3de41d2d-b4c7-49a4-84d7-630b601a72dd-kube-api-access-4p5lr" (OuterVolumeSpecName: "kube-api-access-4p5lr") pod "3de41d2d-b4c7-49a4-84d7-630b601a72dd" (UID: "3de41d2d-b4c7-49a4-84d7-630b601a72dd"). InnerVolumeSpecName "kube-api-access-4p5lr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:39:27 crc kubenswrapper[4821]: I0309 18:39:27.071449 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4p5lr\" (UniqueName: \"kubernetes.io/projected/3de41d2d-b4c7-49a4-84d7-630b601a72dd-kube-api-access-4p5lr\") on node \"crc\" DevicePath \"\"" Mar 09 18:39:27 crc kubenswrapper[4821]: I0309 18:39:27.555045 4821 generic.go:334] "Generic (PLEG): container finished" podID="3de41d2d-b4c7-49a4-84d7-630b601a72dd" containerID="060460be62db5911e8e2e0fdc523ca903fd3d85cd7e47a6f500cdad79cecc2db" exitCode=0 Mar 09 18:39:27 crc kubenswrapper[4821]: I0309 18:39:27.555230 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sslnq" Mar 09 18:39:27 crc kubenswrapper[4821]: I0309 18:39:27.570587 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sslnq" event={"ID":"3de41d2d-b4c7-49a4-84d7-630b601a72dd","Type":"ContainerDied","Data":"060460be62db5911e8e2e0fdc523ca903fd3d85cd7e47a6f500cdad79cecc2db"} Mar 09 18:39:27 crc kubenswrapper[4821]: I0309 18:39:27.570659 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sslnq" event={"ID":"3de41d2d-b4c7-49a4-84d7-630b601a72dd","Type":"ContainerDied","Data":"8ab6567bebaab0d4161e395983cca792cd905ed36a4048662f82a1b7cf67da63"} Mar 09 18:39:27 crc kubenswrapper[4821]: I0309 18:39:27.570692 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cjjbr" event={"ID":"43193e76-c853-4bc6-89e4-12ff09c8fbcb","Type":"ContainerStarted","Data":"254d4ec5cf24f338c2ed9f06df87fb0a98773c48edce52691dbdd2d3fb37c64b"} Mar 09 18:39:27 crc kubenswrapper[4821]: I0309 18:39:27.570746 4821 scope.go:117] "RemoveContainer" containerID="060460be62db5911e8e2e0fdc523ca903fd3d85cd7e47a6f500cdad79cecc2db" Mar 09 18:39:27 crc kubenswrapper[4821]: I0309 
18:39:27.632760 4821 scope.go:117] "RemoveContainer" containerID="060460be62db5911e8e2e0fdc523ca903fd3d85cd7e47a6f500cdad79cecc2db" Mar 09 18:39:27 crc kubenswrapper[4821]: E0309 18:39:27.636094 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"060460be62db5911e8e2e0fdc523ca903fd3d85cd7e47a6f500cdad79cecc2db\": container with ID starting with 060460be62db5911e8e2e0fdc523ca903fd3d85cd7e47a6f500cdad79cecc2db not found: ID does not exist" containerID="060460be62db5911e8e2e0fdc523ca903fd3d85cd7e47a6f500cdad79cecc2db" Mar 09 18:39:27 crc kubenswrapper[4821]: I0309 18:39:27.636171 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"060460be62db5911e8e2e0fdc523ca903fd3d85cd7e47a6f500cdad79cecc2db"} err="failed to get container status \"060460be62db5911e8e2e0fdc523ca903fd3d85cd7e47a6f500cdad79cecc2db\": rpc error: code = NotFound desc = could not find container \"060460be62db5911e8e2e0fdc523ca903fd3d85cd7e47a6f500cdad79cecc2db\": container with ID starting with 060460be62db5911e8e2e0fdc523ca903fd3d85cd7e47a6f500cdad79cecc2db not found: ID does not exist" Mar 09 18:39:29 crc kubenswrapper[4821]: I0309 18:39:29.914179 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 09 18:39:29 crc kubenswrapper[4821]: I0309 18:39:29.914597 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 09 18:39:36 crc kubenswrapper[4821]: I0309 18:39:36.009141 
4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-cjjbr" Mar 09 18:39:36 crc kubenswrapper[4821]: I0309 18:39:36.009576 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-cjjbr" Mar 09 18:39:36 crc kubenswrapper[4821]: I0309 18:39:36.036727 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-cjjbr" Mar 09 18:39:36 crc kubenswrapper[4821]: I0309 18:39:36.053519 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-cjjbr" podStartSLOduration=10.999737939 podStartE2EDuration="11.053495834s" podCreationTimestamp="2026-03-09 18:39:25 +0000 UTC" firstStartedPulling="2026-03-09 18:39:26.501806743 +0000 UTC m=+903.663182599" lastFinishedPulling="2026-03-09 18:39:26.555564598 +0000 UTC m=+903.716940494" observedRunningTime="2026-03-09 18:39:27.58720265 +0000 UTC m=+904.748578586" watchObservedRunningTime="2026-03-09 18:39:36.053495834 +0000 UTC m=+913.214871710" Mar 09 18:39:36 crc kubenswrapper[4821]: I0309 18:39:36.674468 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-cjjbr" Mar 09 18:39:38 crc kubenswrapper[4821]: I0309 18:39:38.345051 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm"] Mar 09 18:39:38 crc kubenswrapper[4821]: E0309 18:39:38.345765 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3de41d2d-b4c7-49a4-84d7-630b601a72dd" containerName="registry-server" Mar 09 18:39:38 crc kubenswrapper[4821]: I0309 18:39:38.345784 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="3de41d2d-b4c7-49a4-84d7-630b601a72dd" containerName="registry-server" Mar 09 18:39:38 crc kubenswrapper[4821]: I0309 
18:39:38.345973 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="3de41d2d-b4c7-49a4-84d7-630b601a72dd" containerName="registry-server" Mar 09 18:39:38 crc kubenswrapper[4821]: I0309 18:39:38.347413 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm" Mar 09 18:39:38 crc kubenswrapper[4821]: I0309 18:39:38.352452 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm"] Mar 09 18:39:38 crc kubenswrapper[4821]: I0309 18:39:38.353504 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-thvms" Mar 09 18:39:38 crc kubenswrapper[4821]: I0309 18:39:38.447677 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg6l4\" (UniqueName: \"kubernetes.io/projected/7022fc4e-6faf-4abb-9677-963728a8d91d-kube-api-access-dg6l4\") pod \"5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm\" (UID: \"7022fc4e-6faf-4abb-9677-963728a8d91d\") " pod="openstack-operators/5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm" Mar 09 18:39:38 crc kubenswrapper[4821]: I0309 18:39:38.447744 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7022fc4e-6faf-4abb-9677-963728a8d91d-util\") pod \"5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm\" (UID: \"7022fc4e-6faf-4abb-9677-963728a8d91d\") " pod="openstack-operators/5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm" Mar 09 18:39:38 crc kubenswrapper[4821]: I0309 18:39:38.447771 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/7022fc4e-6faf-4abb-9677-963728a8d91d-bundle\") pod \"5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm\" (UID: \"7022fc4e-6faf-4abb-9677-963728a8d91d\") " pod="openstack-operators/5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm" Mar 09 18:39:38 crc kubenswrapper[4821]: I0309 18:39:38.548985 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg6l4\" (UniqueName: \"kubernetes.io/projected/7022fc4e-6faf-4abb-9677-963728a8d91d-kube-api-access-dg6l4\") pod \"5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm\" (UID: \"7022fc4e-6faf-4abb-9677-963728a8d91d\") " pod="openstack-operators/5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm" Mar 09 18:39:38 crc kubenswrapper[4821]: I0309 18:39:38.549091 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7022fc4e-6faf-4abb-9677-963728a8d91d-util\") pod \"5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm\" (UID: \"7022fc4e-6faf-4abb-9677-963728a8d91d\") " pod="openstack-operators/5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm" Mar 09 18:39:38 crc kubenswrapper[4821]: I0309 18:39:38.549127 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7022fc4e-6faf-4abb-9677-963728a8d91d-bundle\") pod \"5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm\" (UID: \"7022fc4e-6faf-4abb-9677-963728a8d91d\") " pod="openstack-operators/5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm" Mar 09 18:39:38 crc kubenswrapper[4821]: I0309 18:39:38.549594 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7022fc4e-6faf-4abb-9677-963728a8d91d-util\") pod \"5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm\" (UID: 
\"7022fc4e-6faf-4abb-9677-963728a8d91d\") " pod="openstack-operators/5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm" Mar 09 18:39:38 crc kubenswrapper[4821]: I0309 18:39:38.549677 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7022fc4e-6faf-4abb-9677-963728a8d91d-bundle\") pod \"5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm\" (UID: \"7022fc4e-6faf-4abb-9677-963728a8d91d\") " pod="openstack-operators/5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm" Mar 09 18:39:38 crc kubenswrapper[4821]: I0309 18:39:38.581601 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg6l4\" (UniqueName: \"kubernetes.io/projected/7022fc4e-6faf-4abb-9677-963728a8d91d-kube-api-access-dg6l4\") pod \"5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm\" (UID: \"7022fc4e-6faf-4abb-9677-963728a8d91d\") " pod="openstack-operators/5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm" Mar 09 18:39:38 crc kubenswrapper[4821]: I0309 18:39:38.675532 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm" Mar 09 18:39:38 crc kubenswrapper[4821]: I0309 18:39:38.964313 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm"] Mar 09 18:39:39 crc kubenswrapper[4821]: I0309 18:39:39.653877 4821 generic.go:334] "Generic (PLEG): container finished" podID="7022fc4e-6faf-4abb-9677-963728a8d91d" containerID="bdeb504fb2df8892773f0ed06afcdafee7dd459b65b6d3b573e7c4dd9f4e8d3b" exitCode=0 Mar 09 18:39:39 crc kubenswrapper[4821]: I0309 18:39:39.653992 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm" event={"ID":"7022fc4e-6faf-4abb-9677-963728a8d91d","Type":"ContainerDied","Data":"bdeb504fb2df8892773f0ed06afcdafee7dd459b65b6d3b573e7c4dd9f4e8d3b"} Mar 09 18:39:39 crc kubenswrapper[4821]: I0309 18:39:39.654221 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm" event={"ID":"7022fc4e-6faf-4abb-9677-963728a8d91d","Type":"ContainerStarted","Data":"49f2ab9a41ac2d28721ca871ddcaf42aaba20c66054991c9b9daeaa74be8c930"} Mar 09 18:39:40 crc kubenswrapper[4821]: I0309 18:39:40.665679 4821 generic.go:334] "Generic (PLEG): container finished" podID="7022fc4e-6faf-4abb-9677-963728a8d91d" containerID="fad6ae44c98d48ced2c95c9865c232cb50a5d98238288f74ef89d929b541acd0" exitCode=0 Mar 09 18:39:40 crc kubenswrapper[4821]: I0309 18:39:40.665777 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm" event={"ID":"7022fc4e-6faf-4abb-9677-963728a8d91d","Type":"ContainerDied","Data":"fad6ae44c98d48ced2c95c9865c232cb50a5d98238288f74ef89d929b541acd0"} Mar 09 18:39:41 crc kubenswrapper[4821]: I0309 18:39:41.675745 4821 generic.go:334] 
"Generic (PLEG): container finished" podID="7022fc4e-6faf-4abb-9677-963728a8d91d" containerID="dd5c4f8e0e6db4b3cfc45d4688400aa8178c46b330c2bb9dc44aeaf60712785c" exitCode=0 Mar 09 18:39:41 crc kubenswrapper[4821]: I0309 18:39:41.675824 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm" event={"ID":"7022fc4e-6faf-4abb-9677-963728a8d91d","Type":"ContainerDied","Data":"dd5c4f8e0e6db4b3cfc45d4688400aa8178c46b330c2bb9dc44aeaf60712785c"} Mar 09 18:39:42 crc kubenswrapper[4821]: I0309 18:39:42.974283 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm" Mar 09 18:39:43 crc kubenswrapper[4821]: I0309 18:39:43.112503 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7022fc4e-6faf-4abb-9677-963728a8d91d-bundle\") pod \"7022fc4e-6faf-4abb-9677-963728a8d91d\" (UID: \"7022fc4e-6faf-4abb-9677-963728a8d91d\") " Mar 09 18:39:43 crc kubenswrapper[4821]: I0309 18:39:43.112587 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7022fc4e-6faf-4abb-9677-963728a8d91d-util\") pod \"7022fc4e-6faf-4abb-9677-963728a8d91d\" (UID: \"7022fc4e-6faf-4abb-9677-963728a8d91d\") " Mar 09 18:39:43 crc kubenswrapper[4821]: I0309 18:39:43.112615 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dg6l4\" (UniqueName: \"kubernetes.io/projected/7022fc4e-6faf-4abb-9677-963728a8d91d-kube-api-access-dg6l4\") pod \"7022fc4e-6faf-4abb-9677-963728a8d91d\" (UID: \"7022fc4e-6faf-4abb-9677-963728a8d91d\") " Mar 09 18:39:43 crc kubenswrapper[4821]: I0309 18:39:43.113149 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7022fc4e-6faf-4abb-9677-963728a8d91d-bundle" (OuterVolumeSpecName: "bundle") pod "7022fc4e-6faf-4abb-9677-963728a8d91d" (UID: "7022fc4e-6faf-4abb-9677-963728a8d91d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:39:43 crc kubenswrapper[4821]: I0309 18:39:43.118227 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7022fc4e-6faf-4abb-9677-963728a8d91d-kube-api-access-dg6l4" (OuterVolumeSpecName: "kube-api-access-dg6l4") pod "7022fc4e-6faf-4abb-9677-963728a8d91d" (UID: "7022fc4e-6faf-4abb-9677-963728a8d91d"). InnerVolumeSpecName "kube-api-access-dg6l4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:39:43 crc kubenswrapper[4821]: I0309 18:39:43.127596 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7022fc4e-6faf-4abb-9677-963728a8d91d-util" (OuterVolumeSpecName: "util") pod "7022fc4e-6faf-4abb-9677-963728a8d91d" (UID: "7022fc4e-6faf-4abb-9677-963728a8d91d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:39:43 crc kubenswrapper[4821]: I0309 18:39:43.213872 4821 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7022fc4e-6faf-4abb-9677-963728a8d91d-util\") on node \"crc\" DevicePath \"\"" Mar 09 18:39:43 crc kubenswrapper[4821]: I0309 18:39:43.213904 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dg6l4\" (UniqueName: \"kubernetes.io/projected/7022fc4e-6faf-4abb-9677-963728a8d91d-kube-api-access-dg6l4\") on node \"crc\" DevicePath \"\"" Mar 09 18:39:43 crc kubenswrapper[4821]: I0309 18:39:43.213914 4821 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7022fc4e-6faf-4abb-9677-963728a8d91d-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 18:39:43 crc kubenswrapper[4821]: I0309 18:39:43.692034 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm" event={"ID":"7022fc4e-6faf-4abb-9677-963728a8d91d","Type":"ContainerDied","Data":"49f2ab9a41ac2d28721ca871ddcaf42aaba20c66054991c9b9daeaa74be8c930"} Mar 09 18:39:43 crc kubenswrapper[4821]: I0309 18:39:43.692080 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49f2ab9a41ac2d28721ca871ddcaf42aaba20c66054991c9b9daeaa74be8c930" Mar 09 18:39:43 crc kubenswrapper[4821]: I0309 18:39:43.692138 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm" Mar 09 18:39:44 crc kubenswrapper[4821]: I0309 18:39:44.079090 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-88wqz"] Mar 09 18:39:44 crc kubenswrapper[4821]: E0309 18:39:44.079515 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7022fc4e-6faf-4abb-9677-963728a8d91d" containerName="extract" Mar 09 18:39:44 crc kubenswrapper[4821]: I0309 18:39:44.079534 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="7022fc4e-6faf-4abb-9677-963728a8d91d" containerName="extract" Mar 09 18:39:44 crc kubenswrapper[4821]: E0309 18:39:44.079556 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7022fc4e-6faf-4abb-9677-963728a8d91d" containerName="util" Mar 09 18:39:44 crc kubenswrapper[4821]: I0309 18:39:44.079562 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="7022fc4e-6faf-4abb-9677-963728a8d91d" containerName="util" Mar 09 18:39:44 crc kubenswrapper[4821]: E0309 18:39:44.079663 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7022fc4e-6faf-4abb-9677-963728a8d91d" containerName="pull" Mar 09 18:39:44 crc kubenswrapper[4821]: I0309 18:39:44.079672 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="7022fc4e-6faf-4abb-9677-963728a8d91d" containerName="pull" Mar 09 18:39:44 crc kubenswrapper[4821]: I0309 18:39:44.080142 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="7022fc4e-6faf-4abb-9677-963728a8d91d" containerName="extract" Mar 09 18:39:44 crc kubenswrapper[4821]: I0309 18:39:44.081643 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-88wqz" Mar 09 18:39:44 crc kubenswrapper[4821]: I0309 18:39:44.086792 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-88wqz"] Mar 09 18:39:44 crc kubenswrapper[4821]: I0309 18:39:44.126131 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp7h9\" (UniqueName: \"kubernetes.io/projected/55705a65-06ec-4560-b065-bed4712f0f77-kube-api-access-sp7h9\") pod \"community-operators-88wqz\" (UID: \"55705a65-06ec-4560-b065-bed4712f0f77\") " pod="openshift-marketplace/community-operators-88wqz" Mar 09 18:39:44 crc kubenswrapper[4821]: I0309 18:39:44.126507 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55705a65-06ec-4560-b065-bed4712f0f77-catalog-content\") pod \"community-operators-88wqz\" (UID: \"55705a65-06ec-4560-b065-bed4712f0f77\") " pod="openshift-marketplace/community-operators-88wqz" Mar 09 18:39:44 crc kubenswrapper[4821]: I0309 18:39:44.126685 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55705a65-06ec-4560-b065-bed4712f0f77-utilities\") pod \"community-operators-88wqz\" (UID: \"55705a65-06ec-4560-b065-bed4712f0f77\") " pod="openshift-marketplace/community-operators-88wqz" Mar 09 18:39:44 crc kubenswrapper[4821]: I0309 18:39:44.228565 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55705a65-06ec-4560-b065-bed4712f0f77-catalog-content\") pod \"community-operators-88wqz\" (UID: \"55705a65-06ec-4560-b065-bed4712f0f77\") " pod="openshift-marketplace/community-operators-88wqz" Mar 09 18:39:44 crc kubenswrapper[4821]: I0309 18:39:44.228614 4821 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55705a65-06ec-4560-b065-bed4712f0f77-utilities\") pod \"community-operators-88wqz\" (UID: \"55705a65-06ec-4560-b065-bed4712f0f77\") " pod="openshift-marketplace/community-operators-88wqz" Mar 09 18:39:44 crc kubenswrapper[4821]: I0309 18:39:44.228655 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp7h9\" (UniqueName: \"kubernetes.io/projected/55705a65-06ec-4560-b065-bed4712f0f77-kube-api-access-sp7h9\") pod \"community-operators-88wqz\" (UID: \"55705a65-06ec-4560-b065-bed4712f0f77\") " pod="openshift-marketplace/community-operators-88wqz" Mar 09 18:39:44 crc kubenswrapper[4821]: I0309 18:39:44.229826 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55705a65-06ec-4560-b065-bed4712f0f77-catalog-content\") pod \"community-operators-88wqz\" (UID: \"55705a65-06ec-4560-b065-bed4712f0f77\") " pod="openshift-marketplace/community-operators-88wqz" Mar 09 18:39:44 crc kubenswrapper[4821]: I0309 18:39:44.229985 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55705a65-06ec-4560-b065-bed4712f0f77-utilities\") pod \"community-operators-88wqz\" (UID: \"55705a65-06ec-4560-b065-bed4712f0f77\") " pod="openshift-marketplace/community-operators-88wqz" Mar 09 18:39:44 crc kubenswrapper[4821]: I0309 18:39:44.251055 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp7h9\" (UniqueName: \"kubernetes.io/projected/55705a65-06ec-4560-b065-bed4712f0f77-kube-api-access-sp7h9\") pod \"community-operators-88wqz\" (UID: \"55705a65-06ec-4560-b065-bed4712f0f77\") " pod="openshift-marketplace/community-operators-88wqz" Mar 09 18:39:44 crc kubenswrapper[4821]: I0309 18:39:44.411465 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-88wqz" Mar 09 18:39:44 crc kubenswrapper[4821]: I0309 18:39:44.888396 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-88wqz"] Mar 09 18:39:44 crc kubenswrapper[4821]: W0309 18:39:44.893752 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55705a65_06ec_4560_b065_bed4712f0f77.slice/crio-c855da351d88b8985d45f14e238aef689b8b1e7a68110f656e5edcc257e3c9d6 WatchSource:0}: Error finding container c855da351d88b8985d45f14e238aef689b8b1e7a68110f656e5edcc257e3c9d6: Status 404 returned error can't find the container with id c855da351d88b8985d45f14e238aef689b8b1e7a68110f656e5edcc257e3c9d6 Mar 09 18:39:45 crc kubenswrapper[4821]: I0309 18:39:45.705724 4821 generic.go:334] "Generic (PLEG): container finished" podID="55705a65-06ec-4560-b065-bed4712f0f77" containerID="31764b40a1dc7addf08ed50438b5ae95cb50f0c8fc1e7804e290984d35be528a" exitCode=0 Mar 09 18:39:45 crc kubenswrapper[4821]: I0309 18:39:45.705812 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88wqz" event={"ID":"55705a65-06ec-4560-b065-bed4712f0f77","Type":"ContainerDied","Data":"31764b40a1dc7addf08ed50438b5ae95cb50f0c8fc1e7804e290984d35be528a"} Mar 09 18:39:45 crc kubenswrapper[4821]: I0309 18:39:45.705998 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88wqz" event={"ID":"55705a65-06ec-4560-b065-bed4712f0f77","Type":"ContainerStarted","Data":"c855da351d88b8985d45f14e238aef689b8b1e7a68110f656e5edcc257e3c9d6"} Mar 09 18:39:46 crc kubenswrapper[4821]: I0309 18:39:46.716230 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88wqz" 
event={"ID":"55705a65-06ec-4560-b065-bed4712f0f77","Type":"ContainerStarted","Data":"62d4aa249968da95642af83f40eb6ba37f7379a81ae7d2012b62cc08881e52c6"} Mar 09 18:39:47 crc kubenswrapper[4821]: I0309 18:39:47.728016 4821 generic.go:334] "Generic (PLEG): container finished" podID="55705a65-06ec-4560-b065-bed4712f0f77" containerID="62d4aa249968da95642af83f40eb6ba37f7379a81ae7d2012b62cc08881e52c6" exitCode=0 Mar 09 18:39:47 crc kubenswrapper[4821]: I0309 18:39:47.728080 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88wqz" event={"ID":"55705a65-06ec-4560-b065-bed4712f0f77","Type":"ContainerDied","Data":"62d4aa249968da95642af83f40eb6ba37f7379a81ae7d2012b62cc08881e52c6"} Mar 09 18:39:48 crc kubenswrapper[4821]: I0309 18:39:48.737961 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88wqz" event={"ID":"55705a65-06ec-4560-b065-bed4712f0f77","Type":"ContainerStarted","Data":"61583abd6a236ce0ffd4037626a19320e73acedb639022e4785a73a3c7bf7dac"} Mar 09 18:39:48 crc kubenswrapper[4821]: I0309 18:39:48.763855 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-88wqz" podStartSLOduration=2.35864403 podStartE2EDuration="4.763818683s" podCreationTimestamp="2026-03-09 18:39:44 +0000 UTC" firstStartedPulling="2026-03-09 18:39:45.708299672 +0000 UTC m=+922.869675558" lastFinishedPulling="2026-03-09 18:39:48.113474345 +0000 UTC m=+925.274850211" observedRunningTime="2026-03-09 18:39:48.758614211 +0000 UTC m=+925.919990077" watchObservedRunningTime="2026-03-09 18:39:48.763818683 +0000 UTC m=+925.925194539" Mar 09 18:39:49 crc kubenswrapper[4821]: I0309 18:39:49.265212 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-787cf98cf6-2h56n"] Mar 09 18:39:49 crc kubenswrapper[4821]: I0309 18:39:49.266012 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-787cf98cf6-2h56n"
Mar 09 18:39:49 crc kubenswrapper[4821]: I0309 18:39:49.268516 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-wms79"
Mar 09 18:39:49 crc kubenswrapper[4821]: I0309 18:39:49.286454 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-787cf98cf6-2h56n"]
Mar 09 18:39:49 crc kubenswrapper[4821]: I0309 18:39:49.410471 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtdbn\" (UniqueName: \"kubernetes.io/projected/788c9cbd-c8f4-4384-945d-991234c151fd-kube-api-access-dtdbn\") pod \"openstack-operator-controller-init-787cf98cf6-2h56n\" (UID: \"788c9cbd-c8f4-4384-945d-991234c151fd\") " pod="openstack-operators/openstack-operator-controller-init-787cf98cf6-2h56n"
Mar 09 18:39:49 crc kubenswrapper[4821]: I0309 18:39:49.511660 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtdbn\" (UniqueName: \"kubernetes.io/projected/788c9cbd-c8f4-4384-945d-991234c151fd-kube-api-access-dtdbn\") pod \"openstack-operator-controller-init-787cf98cf6-2h56n\" (UID: \"788c9cbd-c8f4-4384-945d-991234c151fd\") " pod="openstack-operators/openstack-operator-controller-init-787cf98cf6-2h56n"
Mar 09 18:39:49 crc kubenswrapper[4821]: I0309 18:39:49.533563 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtdbn\" (UniqueName: \"kubernetes.io/projected/788c9cbd-c8f4-4384-945d-991234c151fd-kube-api-access-dtdbn\") pod \"openstack-operator-controller-init-787cf98cf6-2h56n\" (UID: \"788c9cbd-c8f4-4384-945d-991234c151fd\") " pod="openstack-operators/openstack-operator-controller-init-787cf98cf6-2h56n"
Mar 09 18:39:49 crc kubenswrapper[4821]: I0309 18:39:49.583336 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-787cf98cf6-2h56n"
Mar 09 18:39:49 crc kubenswrapper[4821]: I0309 18:39:49.912083 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-787cf98cf6-2h56n"]
Mar 09 18:39:50 crc kubenswrapper[4821]: I0309 18:39:50.780043 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-787cf98cf6-2h56n" event={"ID":"788c9cbd-c8f4-4384-945d-991234c151fd","Type":"ContainerStarted","Data":"ddde7de4558166c8c8ff53c8ef007706cfd433b39e5cd8fdcc0e9c6d60b57f9f"}
Mar 09 18:39:54 crc kubenswrapper[4821]: I0309 18:39:54.412658 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-88wqz"
Mar 09 18:39:54 crc kubenswrapper[4821]: I0309 18:39:54.413454 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-88wqz"
Mar 09 18:39:54 crc kubenswrapper[4821]: I0309 18:39:54.488265 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-88wqz"
Mar 09 18:39:54 crc kubenswrapper[4821]: I0309 18:39:54.813297 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-787cf98cf6-2h56n" event={"ID":"788c9cbd-c8f4-4384-945d-991234c151fd","Type":"ContainerStarted","Data":"fd276bd5f449eb5bd7e7c1a28d97ee5f2d43b7bf25291d630b3675c9b832b9eb"}
Mar 09 18:39:54 crc kubenswrapper[4821]: I0309 18:39:54.813522 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-787cf98cf6-2h56n"
Mar 09 18:39:54 crc kubenswrapper[4821]: I0309 18:39:54.865120 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-787cf98cf6-2h56n" podStartSLOduration=1.899686475 podStartE2EDuration="5.865102818s" podCreationTimestamp="2026-03-09 18:39:49 +0000 UTC" firstStartedPulling="2026-03-09 18:39:49.931126193 +0000 UTC m=+927.092502049" lastFinishedPulling="2026-03-09 18:39:53.896542536 +0000 UTC m=+931.057918392" observedRunningTime="2026-03-09 18:39:54.86259071 +0000 UTC m=+932.023966586" watchObservedRunningTime="2026-03-09 18:39:54.865102818 +0000 UTC m=+932.026478674"
Mar 09 18:39:54 crc kubenswrapper[4821]: I0309 18:39:54.894393 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-88wqz"
Mar 09 18:39:56 crc kubenswrapper[4821]: I0309 18:39:56.870678 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-88wqz"]
Mar 09 18:39:56 crc kubenswrapper[4821]: I0309 18:39:56.871368 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-88wqz" podUID="55705a65-06ec-4560-b065-bed4712f0f77" containerName="registry-server" containerID="cri-o://61583abd6a236ce0ffd4037626a19320e73acedb639022e4785a73a3c7bf7dac" gracePeriod=2
Mar 09 18:39:57 crc kubenswrapper[4821]: I0309 18:39:57.308307 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-88wqz"
Mar 09 18:39:57 crc kubenswrapper[4821]: I0309 18:39:57.342590 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55705a65-06ec-4560-b065-bed4712f0f77-catalog-content\") pod \"55705a65-06ec-4560-b065-bed4712f0f77\" (UID: \"55705a65-06ec-4560-b065-bed4712f0f77\") "
Mar 09 18:39:57 crc kubenswrapper[4821]: I0309 18:39:57.342715 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55705a65-06ec-4560-b065-bed4712f0f77-utilities\") pod \"55705a65-06ec-4560-b065-bed4712f0f77\" (UID: \"55705a65-06ec-4560-b065-bed4712f0f77\") "
Mar 09 18:39:57 crc kubenswrapper[4821]: I0309 18:39:57.342898 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp7h9\" (UniqueName: \"kubernetes.io/projected/55705a65-06ec-4560-b065-bed4712f0f77-kube-api-access-sp7h9\") pod \"55705a65-06ec-4560-b065-bed4712f0f77\" (UID: \"55705a65-06ec-4560-b065-bed4712f0f77\") "
Mar 09 18:39:57 crc kubenswrapper[4821]: I0309 18:39:57.345887 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55705a65-06ec-4560-b065-bed4712f0f77-utilities" (OuterVolumeSpecName: "utilities") pod "55705a65-06ec-4560-b065-bed4712f0f77" (UID: "55705a65-06ec-4560-b065-bed4712f0f77"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 18:39:57 crc kubenswrapper[4821]: I0309 18:39:57.350974 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55705a65-06ec-4560-b065-bed4712f0f77-kube-api-access-sp7h9" (OuterVolumeSpecName: "kube-api-access-sp7h9") pod "55705a65-06ec-4560-b065-bed4712f0f77" (UID: "55705a65-06ec-4560-b065-bed4712f0f77"). InnerVolumeSpecName "kube-api-access-sp7h9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:39:57 crc kubenswrapper[4821]: I0309 18:39:57.418305 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55705a65-06ec-4560-b065-bed4712f0f77-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55705a65-06ec-4560-b065-bed4712f0f77" (UID: "55705a65-06ec-4560-b065-bed4712f0f77"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 18:39:57 crc kubenswrapper[4821]: I0309 18:39:57.444719 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp7h9\" (UniqueName: \"kubernetes.io/projected/55705a65-06ec-4560-b065-bed4712f0f77-kube-api-access-sp7h9\") on node \"crc\" DevicePath \"\""
Mar 09 18:39:57 crc kubenswrapper[4821]: I0309 18:39:57.444757 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55705a65-06ec-4560-b065-bed4712f0f77-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 09 18:39:57 crc kubenswrapper[4821]: I0309 18:39:57.444769 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55705a65-06ec-4560-b065-bed4712f0f77-utilities\") on node \"crc\" DevicePath \"\""
Mar 09 18:39:57 crc kubenswrapper[4821]: I0309 18:39:57.613825 4821 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod3de41d2d-b4c7-49a4-84d7-630b601a72dd"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod3de41d2d-b4c7-49a4-84d7-630b601a72dd] : Timed out while waiting for systemd to remove kubepods-burstable-pod3de41d2d_b4c7_49a4_84d7_630b601a72dd.slice"
Mar 09 18:39:57 crc kubenswrapper[4821]: E0309 18:39:57.613884 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable pod3de41d2d-b4c7-49a4-84d7-630b601a72dd] : unable to destroy cgroup paths for cgroup [kubepods burstable pod3de41d2d-b4c7-49a4-84d7-630b601a72dd] : Timed out while waiting for systemd to remove kubepods-burstable-pod3de41d2d_b4c7_49a4_84d7_630b601a72dd.slice" pod="openstack-operators/openstack-operator-index-sslnq" podUID="3de41d2d-b4c7-49a4-84d7-630b601a72dd"
Mar 09 18:39:57 crc kubenswrapper[4821]: I0309 18:39:57.836209 4821 generic.go:334] "Generic (PLEG): container finished" podID="55705a65-06ec-4560-b065-bed4712f0f77" containerID="61583abd6a236ce0ffd4037626a19320e73acedb639022e4785a73a3c7bf7dac" exitCode=0
Mar 09 18:39:57 crc kubenswrapper[4821]: I0309 18:39:57.836266 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sslnq"
Mar 09 18:39:57 crc kubenswrapper[4821]: I0309 18:39:57.836620 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-88wqz"
Mar 09 18:39:57 crc kubenswrapper[4821]: I0309 18:39:57.836934 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88wqz" event={"ID":"55705a65-06ec-4560-b065-bed4712f0f77","Type":"ContainerDied","Data":"61583abd6a236ce0ffd4037626a19320e73acedb639022e4785a73a3c7bf7dac"}
Mar 09 18:39:57 crc kubenswrapper[4821]: I0309 18:39:57.836962 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88wqz" event={"ID":"55705a65-06ec-4560-b065-bed4712f0f77","Type":"ContainerDied","Data":"c855da351d88b8985d45f14e238aef689b8b1e7a68110f656e5edcc257e3c9d6"}
Mar 09 18:39:57 crc kubenswrapper[4821]: I0309 18:39:57.836978 4821 scope.go:117] "RemoveContainer" containerID="61583abd6a236ce0ffd4037626a19320e73acedb639022e4785a73a3c7bf7dac"
Mar 09 18:39:57 crc kubenswrapper[4821]: I0309 18:39:57.871102 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-88wqz"]
Mar 09 18:39:57 crc kubenswrapper[4821]: I0309 18:39:57.876193 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-88wqz"]
Mar 09 18:39:57 crc kubenswrapper[4821]: I0309 18:39:57.881184 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-sslnq"]
Mar 09 18:39:57 crc kubenswrapper[4821]: I0309 18:39:57.885271 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-sslnq"]
Mar 09 18:39:58 crc kubenswrapper[4821]: I0309 18:39:58.579102 4821 scope.go:117] "RemoveContainer" containerID="62d4aa249968da95642af83f40eb6ba37f7379a81ae7d2012b62cc08881e52c6"
Mar 09 18:39:58 crc kubenswrapper[4821]: I0309 18:39:58.613814 4821 scope.go:117] "RemoveContainer" containerID="31764b40a1dc7addf08ed50438b5ae95cb50f0c8fc1e7804e290984d35be528a"
Mar 09 18:39:58 crc kubenswrapper[4821]: I0309 18:39:58.645219 4821 scope.go:117] "RemoveContainer" containerID="61583abd6a236ce0ffd4037626a19320e73acedb639022e4785a73a3c7bf7dac"
Mar 09 18:39:58 crc kubenswrapper[4821]: E0309 18:39:58.646778 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61583abd6a236ce0ffd4037626a19320e73acedb639022e4785a73a3c7bf7dac\": container with ID starting with 61583abd6a236ce0ffd4037626a19320e73acedb639022e4785a73a3c7bf7dac not found: ID does not exist" containerID="61583abd6a236ce0ffd4037626a19320e73acedb639022e4785a73a3c7bf7dac"
Mar 09 18:39:58 crc kubenswrapper[4821]: I0309 18:39:58.646875 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61583abd6a236ce0ffd4037626a19320e73acedb639022e4785a73a3c7bf7dac"} err="failed to get container status \"61583abd6a236ce0ffd4037626a19320e73acedb639022e4785a73a3c7bf7dac\": rpc error: code = NotFound desc = could not find container \"61583abd6a236ce0ffd4037626a19320e73acedb639022e4785a73a3c7bf7dac\": container with ID starting with 61583abd6a236ce0ffd4037626a19320e73acedb639022e4785a73a3c7bf7dac not found: ID does not exist"
Mar 09 18:39:58 crc kubenswrapper[4821]: I0309 18:39:58.646921 4821 scope.go:117] "RemoveContainer" containerID="62d4aa249968da95642af83f40eb6ba37f7379a81ae7d2012b62cc08881e52c6"
Mar 09 18:39:58 crc kubenswrapper[4821]: E0309 18:39:58.647765 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62d4aa249968da95642af83f40eb6ba37f7379a81ae7d2012b62cc08881e52c6\": container with ID starting with 62d4aa249968da95642af83f40eb6ba37f7379a81ae7d2012b62cc08881e52c6 not found: ID does not exist" containerID="62d4aa249968da95642af83f40eb6ba37f7379a81ae7d2012b62cc08881e52c6"
Mar 09 18:39:58 crc kubenswrapper[4821]: I0309 18:39:58.647811 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62d4aa249968da95642af83f40eb6ba37f7379a81ae7d2012b62cc08881e52c6"} err="failed to get container status \"62d4aa249968da95642af83f40eb6ba37f7379a81ae7d2012b62cc08881e52c6\": rpc error: code = NotFound desc = could not find container \"62d4aa249968da95642af83f40eb6ba37f7379a81ae7d2012b62cc08881e52c6\": container with ID starting with 62d4aa249968da95642af83f40eb6ba37f7379a81ae7d2012b62cc08881e52c6 not found: ID does not exist"
Mar 09 18:39:58 crc kubenswrapper[4821]: I0309 18:39:58.647838 4821 scope.go:117] "RemoveContainer" containerID="31764b40a1dc7addf08ed50438b5ae95cb50f0c8fc1e7804e290984d35be528a"
Mar 09 18:39:58 crc kubenswrapper[4821]: E0309 18:39:58.648602 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31764b40a1dc7addf08ed50438b5ae95cb50f0c8fc1e7804e290984d35be528a\": container with ID starting with 31764b40a1dc7addf08ed50438b5ae95cb50f0c8fc1e7804e290984d35be528a not found: ID does not exist" containerID="31764b40a1dc7addf08ed50438b5ae95cb50f0c8fc1e7804e290984d35be528a"
Mar 09 18:39:58 crc kubenswrapper[4821]: I0309 18:39:58.648633 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31764b40a1dc7addf08ed50438b5ae95cb50f0c8fc1e7804e290984d35be528a"} err="failed to get container status \"31764b40a1dc7addf08ed50438b5ae95cb50f0c8fc1e7804e290984d35be528a\": rpc error: code = NotFound desc = could not find container \"31764b40a1dc7addf08ed50438b5ae95cb50f0c8fc1e7804e290984d35be528a\": container with ID starting with 31764b40a1dc7addf08ed50438b5ae95cb50f0c8fc1e7804e290984d35be528a not found: ID does not exist"
Mar 09 18:39:59 crc kubenswrapper[4821]: I0309 18:39:59.564304 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3de41d2d-b4c7-49a4-84d7-630b601a72dd" path="/var/lib/kubelet/pods/3de41d2d-b4c7-49a4-84d7-630b601a72dd/volumes"
Mar 09 18:39:59 crc kubenswrapper[4821]: I0309 18:39:59.565668 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55705a65-06ec-4560-b065-bed4712f0f77" path="/var/lib/kubelet/pods/55705a65-06ec-4560-b065-bed4712f0f77/volumes"
Mar 09 18:39:59 crc kubenswrapper[4821]: I0309 18:39:59.586786 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-787cf98cf6-2h56n"
Mar 09 18:39:59 crc kubenswrapper[4821]: I0309 18:39:59.914134 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 09 18:39:59 crc kubenswrapper[4821]: I0309 18:39:59.914193 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 09 18:40:00 crc kubenswrapper[4821]: I0309 18:40:00.139046 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29551360-xr5rh"]
Mar 09 18:40:00 crc kubenswrapper[4821]: E0309 18:40:00.139451 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55705a65-06ec-4560-b065-bed4712f0f77" containerName="extract-utilities"
Mar 09 18:40:00 crc kubenswrapper[4821]: I0309 18:40:00.139474 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="55705a65-06ec-4560-b065-bed4712f0f77" containerName="extract-utilities"
Mar 09 18:40:00 crc kubenswrapper[4821]: E0309 18:40:00.139500 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55705a65-06ec-4560-b065-bed4712f0f77" containerName="extract-content"
Mar 09 18:40:00 crc kubenswrapper[4821]: I0309 18:40:00.139514 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="55705a65-06ec-4560-b065-bed4712f0f77" containerName="extract-content"
Mar 09 18:40:00 crc kubenswrapper[4821]: E0309 18:40:00.139538 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55705a65-06ec-4560-b065-bed4712f0f77" containerName="registry-server"
Mar 09 18:40:00 crc kubenswrapper[4821]: I0309 18:40:00.139549 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="55705a65-06ec-4560-b065-bed4712f0f77" containerName="registry-server"
Mar 09 18:40:00 crc kubenswrapper[4821]: I0309 18:40:00.139757 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="55705a65-06ec-4560-b065-bed4712f0f77" containerName="registry-server"
Mar 09 18:40:00 crc kubenswrapper[4821]: I0309 18:40:00.140467 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551360-xr5rh"
Mar 09 18:40:00 crc kubenswrapper[4821]: I0309 18:40:00.142838 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-mxq7c"
Mar 09 18:40:00 crc kubenswrapper[4821]: I0309 18:40:00.142974 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 09 18:40:00 crc kubenswrapper[4821]: I0309 18:40:00.144820 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 09 18:40:00 crc kubenswrapper[4821]: I0309 18:40:00.146966 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551360-xr5rh"]
Mar 09 18:40:00 crc kubenswrapper[4821]: I0309 18:40:00.202702 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f52j\" (UniqueName: \"kubernetes.io/projected/85c2b643-431a-412d-9386-384fa8ccd6e9-kube-api-access-6f52j\") pod \"auto-csr-approver-29551360-xr5rh\" (UID: \"85c2b643-431a-412d-9386-384fa8ccd6e9\") " pod="openshift-infra/auto-csr-approver-29551360-xr5rh"
Mar 09 18:40:00 crc kubenswrapper[4821]: I0309 18:40:00.306643 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f52j\" (UniqueName: \"kubernetes.io/projected/85c2b643-431a-412d-9386-384fa8ccd6e9-kube-api-access-6f52j\") pod \"auto-csr-approver-29551360-xr5rh\" (UID: \"85c2b643-431a-412d-9386-384fa8ccd6e9\") " pod="openshift-infra/auto-csr-approver-29551360-xr5rh"
Mar 09 18:40:00 crc kubenswrapper[4821]: I0309 18:40:00.343642 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f52j\" (UniqueName: \"kubernetes.io/projected/85c2b643-431a-412d-9386-384fa8ccd6e9-kube-api-access-6f52j\") pod \"auto-csr-approver-29551360-xr5rh\" (UID: \"85c2b643-431a-412d-9386-384fa8ccd6e9\") " pod="openshift-infra/auto-csr-approver-29551360-xr5rh"
Mar 09 18:40:00 crc kubenswrapper[4821]: I0309 18:40:00.455130 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551360-xr5rh"
Mar 09 18:40:00 crc kubenswrapper[4821]: I0309 18:40:00.979524 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551360-xr5rh"]
Mar 09 18:40:00 crc kubenswrapper[4821]: W0309 18:40:00.989303 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85c2b643_431a_412d_9386_384fa8ccd6e9.slice/crio-2cb8dafbd433c18fd8eb3979347cdcf4201094004b131615bf24a17ee366676c WatchSource:0}: Error finding container 2cb8dafbd433c18fd8eb3979347cdcf4201094004b131615bf24a17ee366676c: Status 404 returned error can't find the container with id 2cb8dafbd433c18fd8eb3979347cdcf4201094004b131615bf24a17ee366676c
Mar 09 18:40:01 crc kubenswrapper[4821]: I0309 18:40:01.874854 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551360-xr5rh" event={"ID":"85c2b643-431a-412d-9386-384fa8ccd6e9","Type":"ContainerStarted","Data":"2cb8dafbd433c18fd8eb3979347cdcf4201094004b131615bf24a17ee366676c"}
Mar 09 18:40:02 crc kubenswrapper[4821]: I0309 18:40:02.885691 4821 generic.go:334] "Generic (PLEG): container finished" podID="85c2b643-431a-412d-9386-384fa8ccd6e9" containerID="601e2f61c235d09dd59cfe8d70f0a79bdc357ece3132a5c45ea484312327a91d" exitCode=0
Mar 09 18:40:02 crc kubenswrapper[4821]: I0309 18:40:02.885776 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551360-xr5rh" event={"ID":"85c2b643-431a-412d-9386-384fa8ccd6e9","Type":"ContainerDied","Data":"601e2f61c235d09dd59cfe8d70f0a79bdc357ece3132a5c45ea484312327a91d"}
Mar 09 18:40:04 crc kubenswrapper[4821]: I0309 18:40:04.169871 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551360-xr5rh"
Mar 09 18:40:04 crc kubenswrapper[4821]: I0309 18:40:04.272459 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f52j\" (UniqueName: \"kubernetes.io/projected/85c2b643-431a-412d-9386-384fa8ccd6e9-kube-api-access-6f52j\") pod \"85c2b643-431a-412d-9386-384fa8ccd6e9\" (UID: \"85c2b643-431a-412d-9386-384fa8ccd6e9\") "
Mar 09 18:40:04 crc kubenswrapper[4821]: I0309 18:40:04.278557 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85c2b643-431a-412d-9386-384fa8ccd6e9-kube-api-access-6f52j" (OuterVolumeSpecName: "kube-api-access-6f52j") pod "85c2b643-431a-412d-9386-384fa8ccd6e9" (UID: "85c2b643-431a-412d-9386-384fa8ccd6e9"). InnerVolumeSpecName "kube-api-access-6f52j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:40:04 crc kubenswrapper[4821]: I0309 18:40:04.373795 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f52j\" (UniqueName: \"kubernetes.io/projected/85c2b643-431a-412d-9386-384fa8ccd6e9-kube-api-access-6f52j\") on node \"crc\" DevicePath \"\""
Mar 09 18:40:04 crc kubenswrapper[4821]: I0309 18:40:04.902870 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551360-xr5rh" event={"ID":"85c2b643-431a-412d-9386-384fa8ccd6e9","Type":"ContainerDied","Data":"2cb8dafbd433c18fd8eb3979347cdcf4201094004b131615bf24a17ee366676c"}
Mar 09 18:40:04 crc kubenswrapper[4821]: I0309 18:40:04.902929 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cb8dafbd433c18fd8eb3979347cdcf4201094004b131615bf24a17ee366676c"
Mar 09 18:40:04 crc kubenswrapper[4821]: I0309 18:40:04.902935 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551360-xr5rh"
Mar 09 18:40:05 crc kubenswrapper[4821]: I0309 18:40:05.225573 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29551354-vvfrb"]
Mar 09 18:40:05 crc kubenswrapper[4821]: I0309 18:40:05.229811 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29551354-vvfrb"]
Mar 09 18:40:05 crc kubenswrapper[4821]: I0309 18:40:05.563217 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1240e366-1e5e-4d5e-9a11-fc281f0fd93b" path="/var/lib/kubelet/pods/1240e366-1e5e-4d5e-9a11-fc281f0fd93b/volumes"
Mar 09 18:40:06 crc kubenswrapper[4821]: I0309 18:40:06.421362 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x557z"]
Mar 09 18:40:06 crc kubenswrapper[4821]: E0309 18:40:06.421815 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85c2b643-431a-412d-9386-384fa8ccd6e9" containerName="oc"
Mar 09 18:40:06 crc kubenswrapper[4821]: I0309 18:40:06.421826 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="85c2b643-431a-412d-9386-384fa8ccd6e9" containerName="oc"
Mar 09 18:40:06 crc kubenswrapper[4821]: I0309 18:40:06.421929 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="85c2b643-431a-412d-9386-384fa8ccd6e9" containerName="oc"
Mar 09 18:40:06 crc kubenswrapper[4821]: I0309 18:40:06.422701 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x557z"
Mar 09 18:40:06 crc kubenswrapper[4821]: I0309 18:40:06.440743 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x557z"]
Mar 09 18:40:06 crc kubenswrapper[4821]: I0309 18:40:06.500133 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l8r7\" (UniqueName: \"kubernetes.io/projected/2f95ff3a-6cad-4a3f-9b22-f7e265ec269c-kube-api-access-6l8r7\") pod \"certified-operators-x557z\" (UID: \"2f95ff3a-6cad-4a3f-9b22-f7e265ec269c\") " pod="openshift-marketplace/certified-operators-x557z"
Mar 09 18:40:06 crc kubenswrapper[4821]: I0309 18:40:06.500351 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f95ff3a-6cad-4a3f-9b22-f7e265ec269c-catalog-content\") pod \"certified-operators-x557z\" (UID: \"2f95ff3a-6cad-4a3f-9b22-f7e265ec269c\") " pod="openshift-marketplace/certified-operators-x557z"
Mar 09 18:40:06 crc kubenswrapper[4821]: I0309 18:40:06.500454 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f95ff3a-6cad-4a3f-9b22-f7e265ec269c-utilities\") pod \"certified-operators-x557z\" (UID: \"2f95ff3a-6cad-4a3f-9b22-f7e265ec269c\") " pod="openshift-marketplace/certified-operators-x557z"
Mar 09 18:40:06 crc kubenswrapper[4821]: I0309 18:40:06.601765 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l8r7\" (UniqueName: \"kubernetes.io/projected/2f95ff3a-6cad-4a3f-9b22-f7e265ec269c-kube-api-access-6l8r7\") pod \"certified-operators-x557z\" (UID: \"2f95ff3a-6cad-4a3f-9b22-f7e265ec269c\") " pod="openshift-marketplace/certified-operators-x557z"
Mar 09 18:40:06 crc kubenswrapper[4821]: I0309 18:40:06.601829 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f95ff3a-6cad-4a3f-9b22-f7e265ec269c-catalog-content\") pod \"certified-operators-x557z\" (UID: \"2f95ff3a-6cad-4a3f-9b22-f7e265ec269c\") " pod="openshift-marketplace/certified-operators-x557z"
Mar 09 18:40:06 crc kubenswrapper[4821]: I0309 18:40:06.601859 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f95ff3a-6cad-4a3f-9b22-f7e265ec269c-utilities\") pod \"certified-operators-x557z\" (UID: \"2f95ff3a-6cad-4a3f-9b22-f7e265ec269c\") " pod="openshift-marketplace/certified-operators-x557z"
Mar 09 18:40:06 crc kubenswrapper[4821]: I0309 18:40:06.602595 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f95ff3a-6cad-4a3f-9b22-f7e265ec269c-catalog-content\") pod \"certified-operators-x557z\" (UID: \"2f95ff3a-6cad-4a3f-9b22-f7e265ec269c\") " pod="openshift-marketplace/certified-operators-x557z"
Mar 09 18:40:06 crc kubenswrapper[4821]: I0309 18:40:06.602851 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f95ff3a-6cad-4a3f-9b22-f7e265ec269c-utilities\") pod \"certified-operators-x557z\" (UID: \"2f95ff3a-6cad-4a3f-9b22-f7e265ec269c\") " pod="openshift-marketplace/certified-operators-x557z"
Mar 09 18:40:06 crc kubenswrapper[4821]: I0309 18:40:06.634400 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l8r7\" (UniqueName: \"kubernetes.io/projected/2f95ff3a-6cad-4a3f-9b22-f7e265ec269c-kube-api-access-6l8r7\") pod \"certified-operators-x557z\" (UID: \"2f95ff3a-6cad-4a3f-9b22-f7e265ec269c\") " pod="openshift-marketplace/certified-operators-x557z"
Mar 09 18:40:06 crc kubenswrapper[4821]: I0309 18:40:06.736782 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x557z"
Mar 09 18:40:07 crc kubenswrapper[4821]: I0309 18:40:07.232369 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x557z"]
Mar 09 18:40:07 crc kubenswrapper[4821]: W0309 18:40:07.241284 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f95ff3a_6cad_4a3f_9b22_f7e265ec269c.slice/crio-e5c0cdb47da92769b4a07b285aeff8d4350f81e0d1d4b707cf6fd8288ef233d8 WatchSource:0}: Error finding container e5c0cdb47da92769b4a07b285aeff8d4350f81e0d1d4b707cf6fd8288ef233d8: Status 404 returned error can't find the container with id e5c0cdb47da92769b4a07b285aeff8d4350f81e0d1d4b707cf6fd8288ef233d8
Mar 09 18:40:07 crc kubenswrapper[4821]: I0309 18:40:07.923812 4821 generic.go:334] "Generic (PLEG): container finished" podID="2f95ff3a-6cad-4a3f-9b22-f7e265ec269c" containerID="5baefcd06f404c0f51ed21006ff4decce43fc1ff5bea166cdf3c4ad0e991df48" exitCode=0
Mar 09 18:40:07 crc kubenswrapper[4821]: I0309 18:40:07.923856 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x557z" event={"ID":"2f95ff3a-6cad-4a3f-9b22-f7e265ec269c","Type":"ContainerDied","Data":"5baefcd06f404c0f51ed21006ff4decce43fc1ff5bea166cdf3c4ad0e991df48"}
Mar 09 18:40:07 crc kubenswrapper[4821]: I0309 18:40:07.923912 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x557z" event={"ID":"2f95ff3a-6cad-4a3f-9b22-f7e265ec269c","Type":"ContainerStarted","Data":"e5c0cdb47da92769b4a07b285aeff8d4350f81e0d1d4b707cf6fd8288ef233d8"}
Mar 09 18:40:08 crc kubenswrapper[4821]: I0309 18:40:08.932253 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x557z" event={"ID":"2f95ff3a-6cad-4a3f-9b22-f7e265ec269c","Type":"ContainerStarted","Data":"6b30cdbc0850ceb0bf9bfe97a99e1df29879895d3310cf2e9301488516f07122"}
Mar 09 18:40:09 crc kubenswrapper[4821]: I0309 18:40:09.939900 4821 generic.go:334] "Generic (PLEG): container finished" podID="2f95ff3a-6cad-4a3f-9b22-f7e265ec269c" containerID="6b30cdbc0850ceb0bf9bfe97a99e1df29879895d3310cf2e9301488516f07122" exitCode=0
Mar 09 18:40:09 crc kubenswrapper[4821]: I0309 18:40:09.939944 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x557z" event={"ID":"2f95ff3a-6cad-4a3f-9b22-f7e265ec269c","Type":"ContainerDied","Data":"6b30cdbc0850ceb0bf9bfe97a99e1df29879895d3310cf2e9301488516f07122"}
Mar 09 18:40:10 crc kubenswrapper[4821]: I0309 18:40:10.950241 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x557z" event={"ID":"2f95ff3a-6cad-4a3f-9b22-f7e265ec269c","Type":"ContainerStarted","Data":"8de31724744317efb427df8d6ecd493bfdc1f409d5ea98e1d4fc2a870ef99ede"}
Mar 09 18:40:10 crc kubenswrapper[4821]: I0309 18:40:10.987026 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x557z" podStartSLOduration=2.586225722 podStartE2EDuration="4.987000925s" podCreationTimestamp="2026-03-09 18:40:06 +0000 UTC" firstStartedPulling="2026-03-09 18:40:07.926876019 +0000 UTC m=+945.088251865" lastFinishedPulling="2026-03-09 18:40:10.327651212 +0000 UTC m=+947.489027068" observedRunningTime="2026-03-09 18:40:10.970452974 +0000 UTC m=+948.131828830" watchObservedRunningTime="2026-03-09 18:40:10.987000925 +0000 UTC m=+948.148376821"
Mar 09 18:40:15 crc kubenswrapper[4821]: I0309 18:40:15.153058 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tshfs"]
Mar 09 18:40:15 crc kubenswrapper[4821]: I0309 18:40:15.154712 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tshfs"
Mar 09 18:40:15 crc kubenswrapper[4821]: I0309 18:40:15.179910 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tshfs"]
Mar 09 18:40:15 crc kubenswrapper[4821]: I0309 18:40:15.209460 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4893fed-97e2-4ce7-99c7-cef6709e7cb7-utilities\") pod \"redhat-marketplace-tshfs\" (UID: \"b4893fed-97e2-4ce7-99c7-cef6709e7cb7\") " pod="openshift-marketplace/redhat-marketplace-tshfs"
Mar 09 18:40:15 crc kubenswrapper[4821]: I0309 18:40:15.209533 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnpx8\" (UniqueName: \"kubernetes.io/projected/b4893fed-97e2-4ce7-99c7-cef6709e7cb7-kube-api-access-lnpx8\") pod \"redhat-marketplace-tshfs\" (UID: \"b4893fed-97e2-4ce7-99c7-cef6709e7cb7\") " pod="openshift-marketplace/redhat-marketplace-tshfs"
Mar 09 18:40:15 crc kubenswrapper[4821]: I0309 18:40:15.209628 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4893fed-97e2-4ce7-99c7-cef6709e7cb7-catalog-content\") pod \"redhat-marketplace-tshfs\" (UID: \"b4893fed-97e2-4ce7-99c7-cef6709e7cb7\") " pod="openshift-marketplace/redhat-marketplace-tshfs"
Mar 09 18:40:15 crc kubenswrapper[4821]: I0309 18:40:15.311295 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4893fed-97e2-4ce7-99c7-cef6709e7cb7-catalog-content\") pod \"redhat-marketplace-tshfs\" (UID: \"b4893fed-97e2-4ce7-99c7-cef6709e7cb7\") " pod="openshift-marketplace/redhat-marketplace-tshfs"
Mar 09 18:40:15 crc kubenswrapper[4821]: I0309 18:40:15.311367 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4893fed-97e2-4ce7-99c7-cef6709e7cb7-utilities\") pod \"redhat-marketplace-tshfs\" (UID: \"b4893fed-97e2-4ce7-99c7-cef6709e7cb7\") " pod="openshift-marketplace/redhat-marketplace-tshfs"
Mar 09 18:40:15 crc kubenswrapper[4821]: I0309 18:40:15.311399 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnpx8\" (UniqueName: \"kubernetes.io/projected/b4893fed-97e2-4ce7-99c7-cef6709e7cb7-kube-api-access-lnpx8\") pod \"redhat-marketplace-tshfs\" (UID: \"b4893fed-97e2-4ce7-99c7-cef6709e7cb7\") " pod="openshift-marketplace/redhat-marketplace-tshfs"
Mar 09 18:40:15 crc kubenswrapper[4821]: I0309 18:40:15.311772 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4893fed-97e2-4ce7-99c7-cef6709e7cb7-catalog-content\") pod \"redhat-marketplace-tshfs\" (UID: \"b4893fed-97e2-4ce7-99c7-cef6709e7cb7\") " pod="openshift-marketplace/redhat-marketplace-tshfs"
Mar 09 18:40:15 crc kubenswrapper[4821]: I0309 18:40:15.311959 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4893fed-97e2-4ce7-99c7-cef6709e7cb7-utilities\") pod \"redhat-marketplace-tshfs\" (UID: \"b4893fed-97e2-4ce7-99c7-cef6709e7cb7\") " pod="openshift-marketplace/redhat-marketplace-tshfs"
Mar 09 18:40:15 crc kubenswrapper[4821]: I0309 18:40:15.328911 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnpx8\" (UniqueName: \"kubernetes.io/projected/b4893fed-97e2-4ce7-99c7-cef6709e7cb7-kube-api-access-lnpx8\") pod \"redhat-marketplace-tshfs\" (UID: \"b4893fed-97e2-4ce7-99c7-cef6709e7cb7\") " pod="openshift-marketplace/redhat-marketplace-tshfs"
Mar 09 18:40:15 crc kubenswrapper[4821]: I0309 18:40:15.472172 4821 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tshfs" Mar 09 18:40:15 crc kubenswrapper[4821]: I0309 18:40:15.944345 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tshfs"] Mar 09 18:40:15 crc kubenswrapper[4821]: I0309 18:40:15.978620 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tshfs" event={"ID":"b4893fed-97e2-4ce7-99c7-cef6709e7cb7","Type":"ContainerStarted","Data":"042908ab760c9dfe451ec014c9a2b471ff1e55263b715ee52890f951395476f7"} Mar 09 18:40:16 crc kubenswrapper[4821]: I0309 18:40:16.737186 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x557z" Mar 09 18:40:16 crc kubenswrapper[4821]: I0309 18:40:16.738585 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x557z" Mar 09 18:40:16 crc kubenswrapper[4821]: I0309 18:40:16.776946 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x557z" Mar 09 18:40:16 crc kubenswrapper[4821]: I0309 18:40:16.986864 4821 generic.go:334] "Generic (PLEG): container finished" podID="b4893fed-97e2-4ce7-99c7-cef6709e7cb7" containerID="585d2cfaaf9819e9d0af982d346bd44c7b533f51b7174af198a0966a4b6759f8" exitCode=0 Mar 09 18:40:16 crc kubenswrapper[4821]: I0309 18:40:16.987135 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tshfs" event={"ID":"b4893fed-97e2-4ce7-99c7-cef6709e7cb7","Type":"ContainerDied","Data":"585d2cfaaf9819e9d0af982d346bd44c7b533f51b7174af198a0966a4b6759f8"} Mar 09 18:40:17 crc kubenswrapper[4821]: I0309 18:40:17.050856 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x557z" Mar 09 18:40:17 crc kubenswrapper[4821]: I0309 18:40:17.995999 4821 generic.go:334] 
"Generic (PLEG): container finished" podID="b4893fed-97e2-4ce7-99c7-cef6709e7cb7" containerID="4292b4a4299720203a2a8ca518986358218b6f4229c4990759d3bb99b044543c" exitCode=0 Mar 09 18:40:17 crc kubenswrapper[4821]: I0309 18:40:17.996089 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tshfs" event={"ID":"b4893fed-97e2-4ce7-99c7-cef6709e7cb7","Type":"ContainerDied","Data":"4292b4a4299720203a2a8ca518986358218b6f4229c4990759d3bb99b044543c"} Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.007110 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tshfs" event={"ID":"b4893fed-97e2-4ce7-99c7-cef6709e7cb7","Type":"ContainerStarted","Data":"5e22495735fb8d88aa7bf4b596456070d88d3c79053b520e8e3e8221f21235c7"} Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.049210 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tshfs" podStartSLOduration=2.6173839230000002 podStartE2EDuration="4.049190182s" podCreationTimestamp="2026-03-09 18:40:15 +0000 UTC" firstStartedPulling="2026-03-09 18:40:16.988832428 +0000 UTC m=+954.150208284" lastFinishedPulling="2026-03-09 18:40:18.420638687 +0000 UTC m=+955.582014543" observedRunningTime="2026-03-09 18:40:19.037002049 +0000 UTC m=+956.198377905" watchObservedRunningTime="2026-03-09 18:40:19.049190182 +0000 UTC m=+956.210566038" Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.168521 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x557z"] Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.735701 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-9vg4l"] Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.736726 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-9vg4l" Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.738405 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-98zst" Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.750469 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-9vg4l"] Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.760544 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-cjvgb"] Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.761550 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-cjvgb" Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.765679 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-sz7qt" Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.767805 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-5rbnb"] Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.768727 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-5rbnb" Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.771341 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-v62c8" Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.773068 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc82q\" (UniqueName: \"kubernetes.io/projected/9eb96ad1-a011-482f-bbdd-edfd673217b5-kube-api-access-kc82q\") pod \"barbican-operator-controller-manager-6db6876945-9vg4l\" (UID: \"9eb96ad1-a011-482f-bbdd-edfd673217b5\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-9vg4l" Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.798153 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-5rbnb"] Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.799227 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-hjztj"] Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.800312 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-hjztj" Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.803093 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-4czmr" Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.823296 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-hjztj"] Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.852025 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-cjvgb"] Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.866114 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-k4q8b"] Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.867011 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-k4q8b" Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.871186 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-bwkdn" Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.871611 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-vhljc"] Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.872288 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-vhljc" Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.873119 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-5x2td" Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.873810 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wwkf\" (UniqueName: \"kubernetes.io/projected/0a1af309-4a43-4d58-8912-abc1ed1e626a-kube-api-access-6wwkf\") pod \"cinder-operator-controller-manager-55d77d7b5c-cjvgb\" (UID: \"0a1af309-4a43-4d58-8912-abc1ed1e626a\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-cjvgb" Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.873861 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc82q\" (UniqueName: \"kubernetes.io/projected/9eb96ad1-a011-482f-bbdd-edfd673217b5-kube-api-access-kc82q\") pod \"barbican-operator-controller-manager-6db6876945-9vg4l\" (UID: \"9eb96ad1-a011-482f-bbdd-edfd673217b5\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-9vg4l" Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.873926 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxdb7\" (UniqueName: \"kubernetes.io/projected/7507717c-322f-43de-88ba-fc79b6a5a3f0-kube-api-access-bxdb7\") pod \"glance-operator-controller-manager-64db6967f8-hjztj\" (UID: \"7507717c-322f-43de-88ba-fc79b6a5a3f0\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-hjztj" Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.874085 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t767\" (UniqueName: 
\"kubernetes.io/projected/bb4823b7-c205-41c0-ba4d-d909ad9ff9cb-kube-api-access-7t767\") pod \"designate-operator-controller-manager-5d87c9d997-5rbnb\" (UID: \"bb4823b7-c205-41c0-ba4d-d909ad9ff9cb\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-5rbnb" Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.877907 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-f7fcc58b9-rldv2"] Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.878863 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-rldv2" Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.884441 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-vhljc"] Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.903858 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.906562 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-6tt58" Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.907043 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc82q\" (UniqueName: \"kubernetes.io/projected/9eb96ad1-a011-482f-bbdd-edfd673217b5-kube-api-access-kc82q\") pod \"barbican-operator-controller-manager-6db6876945-9vg4l\" (UID: \"9eb96ad1-a011-482f-bbdd-edfd673217b5\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-9vg4l" Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.909027 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-k4q8b"] Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.920967 4821 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-wzvf8"] Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.924862 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-wzvf8" Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.926476 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-jvvkc" Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.962913 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-f7fcc58b9-rldv2"] Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.977446 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-wzvf8"] Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.978441 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9d3c230-c74c-4cc4-af9f-f23fd5d9557c-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-rldv2\" (UID: \"c9d3c230-c74c-4cc4-af9f-f23fd5d9557c\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-rldv2" Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.978473 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wwkf\" (UniqueName: \"kubernetes.io/projected/0a1af309-4a43-4d58-8912-abc1ed1e626a-kube-api-access-6wwkf\") pod \"cinder-operator-controller-manager-55d77d7b5c-cjvgb\" (UID: \"0a1af309-4a43-4d58-8912-abc1ed1e626a\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-cjvgb" Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.978516 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-844ng\" (UniqueName: \"kubernetes.io/projected/c9d3c230-c74c-4cc4-af9f-f23fd5d9557c-kube-api-access-844ng\") pod \"infra-operator-controller-manager-f7fcc58b9-rldv2\" (UID: \"c9d3c230-c74c-4cc4-af9f-f23fd5d9557c\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-rldv2" Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.978552 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxdb7\" (UniqueName: \"kubernetes.io/projected/7507717c-322f-43de-88ba-fc79b6a5a3f0-kube-api-access-bxdb7\") pod \"glance-operator-controller-manager-64db6967f8-hjztj\" (UID: \"7507717c-322f-43de-88ba-fc79b6a5a3f0\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-hjztj" Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.978570 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ql86\" (UniqueName: \"kubernetes.io/projected/772511ff-89ac-4190-8142-3bf3e4ef8423-kube-api-access-6ql86\") pod \"ironic-operator-controller-manager-545456dc4-wzvf8\" (UID: \"772511ff-89ac-4190-8142-3bf3e4ef8423\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-wzvf8" Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.978587 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8gcl\" (UniqueName: \"kubernetes.io/projected/100889e4-2f00-4685-a5a7-6f9b73bb343f-kube-api-access-f8gcl\") pod \"horizon-operator-controller-manager-78bc7f9bd9-vhljc\" (UID: \"100889e4-2f00-4685-a5a7-6f9b73bb343f\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-vhljc" Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.978603 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkbpk\" (UniqueName: 
\"kubernetes.io/projected/0b492a45-c917-4c00-abef-13abf40e71d1-kube-api-access-lkbpk\") pod \"heat-operator-controller-manager-cf99c678f-k4q8b\" (UID: \"0b492a45-c917-4c00-abef-13abf40e71d1\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-k4q8b" Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.978639 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t767\" (UniqueName: \"kubernetes.io/projected/bb4823b7-c205-41c0-ba4d-d909ad9ff9cb-kube-api-access-7t767\") pod \"designate-operator-controller-manager-5d87c9d997-5rbnb\" (UID: \"bb4823b7-c205-41c0-ba4d-d909ad9ff9cb\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-5rbnb" Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.995550 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7c789f89c6-pc4fs"] Mar 09 18:40:19 crc kubenswrapper[4821]: I0309 18:40:19.997258 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-pc4fs" Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.005503 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxdb7\" (UniqueName: \"kubernetes.io/projected/7507717c-322f-43de-88ba-fc79b6a5a3f0-kube-api-access-bxdb7\") pod \"glance-operator-controller-manager-64db6967f8-hjztj\" (UID: \"7507717c-322f-43de-88ba-fc79b6a5a3f0\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-hjztj" Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.007801 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-xxqsz" Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.007980 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-dhq9j"] Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.008851 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-dhq9j" Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.010526 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-fbh5s" Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.011721 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t767\" (UniqueName: \"kubernetes.io/projected/bb4823b7-c205-41c0-ba4d-d909ad9ff9cb-kube-api-access-7t767\") pod \"designate-operator-controller-manager-5d87c9d997-5rbnb\" (UID: \"bb4823b7-c205-41c0-ba4d-d909ad9ff9cb\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-5rbnb" Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.018012 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x557z" podUID="2f95ff3a-6cad-4a3f-9b22-f7e265ec269c" containerName="registry-server" containerID="cri-o://8de31724744317efb427df8d6ecd493bfdc1f409d5ea98e1d4fc2a870ef99ede" gracePeriod=2 Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.027252 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wwkf\" (UniqueName: \"kubernetes.io/projected/0a1af309-4a43-4d58-8912-abc1ed1e626a-kube-api-access-6wwkf\") pod \"cinder-operator-controller-manager-55d77d7b5c-cjvgb\" (UID: \"0a1af309-4a43-4d58-8912-abc1ed1e626a\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-cjvgb" Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.040227 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7c789f89c6-pc4fs"] Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.048003 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-dhq9j"] Mar 09 18:40:20 
crc kubenswrapper[4821]: I0309 18:40:20.053049 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-9vg4l" Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.080014 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-844ng\" (UniqueName: \"kubernetes.io/projected/c9d3c230-c74c-4cc4-af9f-f23fd5d9557c-kube-api-access-844ng\") pod \"infra-operator-controller-manager-f7fcc58b9-rldv2\" (UID: \"c9d3c230-c74c-4cc4-af9f-f23fd5d9557c\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-rldv2" Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.080074 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gflv\" (UniqueName: \"kubernetes.io/projected/d878ceb7-5af9-4a91-82cb-ed03b73f1b1d-kube-api-access-9gflv\") pod \"manila-operator-controller-manager-67d996989d-dhq9j\" (UID: \"d878ceb7-5af9-4a91-82cb-ed03b73f1b1d\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-dhq9j" Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.080104 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ql86\" (UniqueName: \"kubernetes.io/projected/772511ff-89ac-4190-8142-3bf3e4ef8423-kube-api-access-6ql86\") pod \"ironic-operator-controller-manager-545456dc4-wzvf8\" (UID: \"772511ff-89ac-4190-8142-3bf3e4ef8423\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-wzvf8" Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.080124 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8gcl\" (UniqueName: \"kubernetes.io/projected/100889e4-2f00-4685-a5a7-6f9b73bb343f-kube-api-access-f8gcl\") pod \"horizon-operator-controller-manager-78bc7f9bd9-vhljc\" (UID: \"100889e4-2f00-4685-a5a7-6f9b73bb343f\") " 
pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-vhljc" Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.080142 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkbpk\" (UniqueName: \"kubernetes.io/projected/0b492a45-c917-4c00-abef-13abf40e71d1-kube-api-access-lkbpk\") pod \"heat-operator-controller-manager-cf99c678f-k4q8b\" (UID: \"0b492a45-c917-4c00-abef-13abf40e71d1\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-k4q8b" Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.080202 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w58sq\" (UniqueName: \"kubernetes.io/projected/71c62d87-8310-4ebd-8449-df18a56dc391-kube-api-access-w58sq\") pod \"keystone-operator-controller-manager-7c789f89c6-pc4fs\" (UID: \"71c62d87-8310-4ebd-8449-df18a56dc391\") " pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-pc4fs" Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.080220 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9d3c230-c74c-4cc4-af9f-f23fd5d9557c-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-rldv2\" (UID: \"c9d3c230-c74c-4cc4-af9f-f23fd5d9557c\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-rldv2" Mar 09 18:40:20 crc kubenswrapper[4821]: E0309 18:40:20.080357 4821 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 09 18:40:20 crc kubenswrapper[4821]: E0309 18:40:20.080406 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9d3c230-c74c-4cc4-af9f-f23fd5d9557c-cert podName:c9d3c230-c74c-4cc4-af9f-f23fd5d9557c nodeName:}" failed. 
No retries permitted until 2026-03-09 18:40:20.580390921 +0000 UTC m=+957.741766777 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c9d3c230-c74c-4cc4-af9f-f23fd5d9557c-cert") pod "infra-operator-controller-manager-f7fcc58b9-rldv2" (UID: "c9d3c230-c74c-4cc4-af9f-f23fd5d9557c") : secret "infra-operator-webhook-server-cert" not found Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.080769 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-cjvgb" Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.082010 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-9jwph"] Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.083049 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-9jwph" Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.085937 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-vmf8j" Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.091395 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-5rbnb"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.106309 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkbpk\" (UniqueName: \"kubernetes.io/projected/0b492a45-c917-4c00-abef-13abf40e71d1-kube-api-access-lkbpk\") pod \"heat-operator-controller-manager-cf99c678f-k4q8b\" (UID: \"0b492a45-c917-4c00-abef-13abf40e71d1\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-k4q8b"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.107610 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-844ng\" (UniqueName: \"kubernetes.io/projected/c9d3c230-c74c-4cc4-af9f-f23fd5d9557c-kube-api-access-844ng\") pod \"infra-operator-controller-manager-f7fcc58b9-rldv2\" (UID: \"c9d3c230-c74c-4cc4-af9f-f23fd5d9557c\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-rldv2"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.119055 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-hjztj"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.119120 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ql86\" (UniqueName: \"kubernetes.io/projected/772511ff-89ac-4190-8142-3bf3e4ef8423-kube-api-access-6ql86\") pod \"ironic-operator-controller-manager-545456dc4-wzvf8\" (UID: \"772511ff-89ac-4190-8142-3bf3e4ef8423\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-wzvf8"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.119062 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8gcl\" (UniqueName: \"kubernetes.io/projected/100889e4-2f00-4685-a5a7-6f9b73bb343f-kube-api-access-f8gcl\") pod \"horizon-operator-controller-manager-78bc7f9bd9-vhljc\" (UID: \"100889e4-2f00-4685-a5a7-6f9b73bb343f\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-vhljc"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.140434 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-9jwph"]
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.150903 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-974k8"]
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.152012 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54688575f-974k8"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.158702 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-974k8"]
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.164232 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-gcnm2"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.182138 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gflv\" (UniqueName: \"kubernetes.io/projected/d878ceb7-5af9-4a91-82cb-ed03b73f1b1d-kube-api-access-9gflv\") pod \"manila-operator-controller-manager-67d996989d-dhq9j\" (UID: \"d878ceb7-5af9-4a91-82cb-ed03b73f1b1d\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-dhq9j"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.182210 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55pw7\" (UniqueName: \"kubernetes.io/projected/b10f4933-a23d-4c0b-9834-40caa60b158c-kube-api-access-55pw7\") pod \"mariadb-operator-controller-manager-7b6bfb6475-9jwph\" (UID: \"b10f4933-a23d-4c0b-9834-40caa60b158c\") " pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-9jwph"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.182285 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w58sq\" (UniqueName: \"kubernetes.io/projected/71c62d87-8310-4ebd-8449-df18a56dc391-kube-api-access-w58sq\") pod \"keystone-operator-controller-manager-7c789f89c6-pc4fs\" (UID: \"71c62d87-8310-4ebd-8449-df18a56dc391\") " pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-pc4fs"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.182353 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksn97\" (UniqueName: \"kubernetes.io/projected/89a79a12-ce90-47f7-b0c4-c0976d7a4b1f-kube-api-access-ksn97\") pod \"neutron-operator-controller-manager-54688575f-974k8\" (UID: \"89a79a12-ce90-47f7-b0c4-c0976d7a4b1f\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-974k8"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.183618 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-qz976"]
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.185638 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-qz976"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.191090 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-lw4qn"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.198071 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-k4q8b"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.206798 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-qz976"]
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.209118 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gflv\" (UniqueName: \"kubernetes.io/projected/d878ceb7-5af9-4a91-82cb-ed03b73f1b1d-kube-api-access-9gflv\") pod \"manila-operator-controller-manager-67d996989d-dhq9j\" (UID: \"d878ceb7-5af9-4a91-82cb-ed03b73f1b1d\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-dhq9j"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.210818 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w58sq\" (UniqueName: \"kubernetes.io/projected/71c62d87-8310-4ebd-8449-df18a56dc391-kube-api-access-w58sq\") pod \"keystone-operator-controller-manager-7c789f89c6-pc4fs\" (UID: \"71c62d87-8310-4ebd-8449-df18a56dc391\") " pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-pc4fs"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.222888 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-6864w"]
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.224510 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-6864w"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.237290 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-2drtk"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.263193 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-vhljc"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.278616 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-6864w"]
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.284156 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47ksv\" (UniqueName: \"kubernetes.io/projected/d1eba3e1-a741-4ca6-a97e-c42565f64d2b-kube-api-access-47ksv\") pod \"octavia-operator-controller-manager-5d86c7ddb7-6864w\" (UID: \"d1eba3e1-a741-4ca6-a97e-c42565f64d2b\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-6864w"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.284227 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55pw7\" (UniqueName: \"kubernetes.io/projected/b10f4933-a23d-4c0b-9834-40caa60b158c-kube-api-access-55pw7\") pod \"mariadb-operator-controller-manager-7b6bfb6475-9jwph\" (UID: \"b10f4933-a23d-4c0b-9834-40caa60b158c\") " pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-9jwph"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.284268 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs4gm\" (UniqueName: \"kubernetes.io/projected/d6f3f569-2d6b-4c06-a814-de946397de51-kube-api-access-xs4gm\") pod \"nova-operator-controller-manager-74b6b5dc96-qz976\" (UID: \"d6f3f569-2d6b-4c06-a814-de946397de51\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-qz976"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.284339 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksn97\" (UniqueName: \"kubernetes.io/projected/89a79a12-ce90-47f7-b0c4-c0976d7a4b1f-kube-api-access-ksn97\") pod \"neutron-operator-controller-manager-54688575f-974k8\" (UID: \"89a79a12-ce90-47f7-b0c4-c0976d7a4b1f\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-974k8"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.286661 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq"]
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.288433 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.291769 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.291979 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-2l7xh"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.308743 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-w9h2t"]
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.309690 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-w9h2t"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.320760 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-wbjvw"]
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.321591 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-w9h2t"]
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.321684 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-wbjvw"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.326408 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-t9fq7"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.327394 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq"]
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.330118 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-8klmp"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.333409 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55pw7\" (UniqueName: \"kubernetes.io/projected/b10f4933-a23d-4c0b-9834-40caa60b158c-kube-api-access-55pw7\") pod \"mariadb-operator-controller-manager-7b6bfb6475-9jwph\" (UID: \"b10f4933-a23d-4c0b-9834-40caa60b158c\") " pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-9jwph"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.338621 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-wbjvw"]
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.341340 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksn97\" (UniqueName: \"kubernetes.io/projected/89a79a12-ce90-47f7-b0c4-c0976d7a4b1f-kube-api-access-ksn97\") pod \"neutron-operator-controller-manager-54688575f-974k8\" (UID: \"89a79a12-ce90-47f7-b0c4-c0976d7a4b1f\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-974k8"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.355454 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k9s84"]
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.356255 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k9s84"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.364455 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-zgn62"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.373766 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-wzvf8"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.381575 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fdb694969-zffr5"]
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.382489 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-zffr5"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.388666 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-lmg5v"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.398484 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/212b84ba-bcda-4820-8388-7d2ef286b7a1-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq\" (UID: \"212b84ba-bcda-4820-8388-7d2ef286b7a1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.398653 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47ksv\" (UniqueName: \"kubernetes.io/projected/d1eba3e1-a741-4ca6-a97e-c42565f64d2b-kube-api-access-47ksv\") pod \"octavia-operator-controller-manager-5d86c7ddb7-6864w\" (UID: \"d1eba3e1-a741-4ca6-a97e-c42565f64d2b\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-6864w"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.398695 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj9qc\" (UniqueName: \"kubernetes.io/projected/d498150e-134b-4359-92c6-300b8fbe3b1a-kube-api-access-pj9qc\") pod \"placement-operator-controller-manager-648564c9fc-wbjvw\" (UID: \"d498150e-134b-4359-92c6-300b8fbe3b1a\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-wbjvw"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.398896 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv6nw\" (UniqueName: \"kubernetes.io/projected/6bc651e4-1359-43b1-bc53-1a561195cf4a-kube-api-access-hv6nw\") pod \"ovn-operator-controller-manager-75684d597f-w9h2t\" (UID: \"6bc651e4-1359-43b1-bc53-1a561195cf4a\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-w9h2t"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.398924 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs4gm\" (UniqueName: \"kubernetes.io/projected/d6f3f569-2d6b-4c06-a814-de946397de51-kube-api-access-xs4gm\") pod \"nova-operator-controller-manager-74b6b5dc96-qz976\" (UID: \"d6f3f569-2d6b-4c06-a814-de946397de51\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-qz976"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.398971 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm5m5\" (UniqueName: \"kubernetes.io/projected/528fcc81-e85c-4764-9413-3957ba8c6fd2-kube-api-access-sm5m5\") pod \"swift-operator-controller-manager-9b9ff9f4d-k9s84\" (UID: \"528fcc81-e85c-4764-9413-3957ba8c6fd2\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k9s84"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.399051 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj964\" (UniqueName: \"kubernetes.io/projected/212b84ba-bcda-4820-8388-7d2ef286b7a1-kube-api-access-fj964\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq\" (UID: \"212b84ba-bcda-4820-8388-7d2ef286b7a1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.401922 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fdb694969-zffr5"]
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.403777 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-pc4fs"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.406137 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k9s84"]
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.416518 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs4gm\" (UniqueName: \"kubernetes.io/projected/d6f3f569-2d6b-4c06-a814-de946397de51-kube-api-access-xs4gm\") pod \"nova-operator-controller-manager-74b6b5dc96-qz976\" (UID: \"d6f3f569-2d6b-4c06-a814-de946397de51\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-qz976"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.423112 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47ksv\" (UniqueName: \"kubernetes.io/projected/d1eba3e1-a741-4ca6-a97e-c42565f64d2b-kube-api-access-47ksv\") pod \"octavia-operator-controller-manager-5d86c7ddb7-6864w\" (UID: \"d1eba3e1-a741-4ca6-a97e-c42565f64d2b\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-6864w"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.489945 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-z5d4k"]
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.501989 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj964\" (UniqueName: \"kubernetes.io/projected/212b84ba-bcda-4820-8388-7d2ef286b7a1-kube-api-access-fj964\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq\" (UID: \"212b84ba-bcda-4820-8388-7d2ef286b7a1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.502054 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmmgh\" (UniqueName: \"kubernetes.io/projected/28a07a44-f359-40b3-a2d4-850cb3822cb4-kube-api-access-nmmgh\") pod \"telemetry-operator-controller-manager-5fdb694969-zffr5\" (UID: \"28a07a44-f359-40b3-a2d4-850cb3822cb4\") " pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-zffr5"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.502095 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/212b84ba-bcda-4820-8388-7d2ef286b7a1-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq\" (UID: \"212b84ba-bcda-4820-8388-7d2ef286b7a1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.502116 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj9qc\" (UniqueName: \"kubernetes.io/projected/d498150e-134b-4359-92c6-300b8fbe3b1a-kube-api-access-pj9qc\") pod \"placement-operator-controller-manager-648564c9fc-wbjvw\" (UID: \"d498150e-134b-4359-92c6-300b8fbe3b1a\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-wbjvw"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.502165 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv6nw\" (UniqueName: \"kubernetes.io/projected/6bc651e4-1359-43b1-bc53-1a561195cf4a-kube-api-access-hv6nw\") pod \"ovn-operator-controller-manager-75684d597f-w9h2t\" (UID: \"6bc651e4-1359-43b1-bc53-1a561195cf4a\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-w9h2t"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.502193 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm5m5\" (UniqueName: \"kubernetes.io/projected/528fcc81-e85c-4764-9413-3957ba8c6fd2-kube-api-access-sm5m5\") pod \"swift-operator-controller-manager-9b9ff9f4d-k9s84\" (UID: \"528fcc81-e85c-4764-9413-3957ba8c6fd2\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k9s84"
Mar 09 18:40:20 crc kubenswrapper[4821]: E0309 18:40:20.502704 4821 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Mar 09 18:40:20 crc kubenswrapper[4821]: E0309 18:40:20.502748 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/212b84ba-bcda-4820-8388-7d2ef286b7a1-cert podName:212b84ba-bcda-4820-8388-7d2ef286b7a1 nodeName:}" failed. No retries permitted until 2026-03-09 18:40:21.002736144 +0000 UTC m=+958.164112000 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/212b84ba-bcda-4820-8388-7d2ef286b7a1-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq" (UID: "212b84ba-bcda-4820-8388-7d2ef286b7a1") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.503724 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-z5d4k"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.508536 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-dhq9j"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.515769 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-z5d4k"]
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.519809 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-9jwph"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.521811 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-j4hpz"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.528262 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj9qc\" (UniqueName: \"kubernetes.io/projected/d498150e-134b-4359-92c6-300b8fbe3b1a-kube-api-access-pj9qc\") pod \"placement-operator-controller-manager-648564c9fc-wbjvw\" (UID: \"d498150e-134b-4359-92c6-300b8fbe3b1a\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-wbjvw"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.534107 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv6nw\" (UniqueName: \"kubernetes.io/projected/6bc651e4-1359-43b1-bc53-1a561195cf4a-kube-api-access-hv6nw\") pod \"ovn-operator-controller-manager-75684d597f-w9h2t\" (UID: \"6bc651e4-1359-43b1-bc53-1a561195cf4a\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-w9h2t"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.535789 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm5m5\" (UniqueName: \"kubernetes.io/projected/528fcc81-e85c-4764-9413-3957ba8c6fd2-kube-api-access-sm5m5\") pod \"swift-operator-controller-manager-9b9ff9f4d-k9s84\" (UID: \"528fcc81-e85c-4764-9413-3957ba8c6fd2\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k9s84"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.536137 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj964\" (UniqueName: \"kubernetes.io/projected/212b84ba-bcda-4820-8388-7d2ef286b7a1-kube-api-access-fj964\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq\" (UID: \"212b84ba-bcda-4820-8388-7d2ef286b7a1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.536435 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54688575f-974k8"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.553294 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-668c5c65dc-657jr"]
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.554695 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-668c5c65dc-657jr"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.561696 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-p8ld9"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.568575 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-668c5c65dc-657jr"]
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.571820 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-qz976"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.586839 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-6864w"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.596777 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-64797568c9-7qbhc"]
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.598247 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-64797568c9-7qbhc"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.599984 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.601576 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-kcbtt"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.601753 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.603519 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2tvh\" (UniqueName: \"kubernetes.io/projected/fddf64c4-c050-4195-9f07-bbd872ec8d48-kube-api-access-r2tvh\") pod \"watcher-operator-controller-manager-668c5c65dc-657jr\" (UID: \"fddf64c4-c050-4195-9f07-bbd872ec8d48\") " pod="openstack-operators/watcher-operator-controller-manager-668c5c65dc-657jr"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.603606 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9d3c230-c74c-4cc4-af9f-f23fd5d9557c-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-rldv2\" (UID: \"c9d3c230-c74c-4cc4-af9f-f23fd5d9557c\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-rldv2"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.603918 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p925v\" (UniqueName: \"kubernetes.io/projected/2e967d7a-a1cf-44b9-ae66-62c4c5c81b55-kube-api-access-p925v\") pod \"test-operator-controller-manager-55b5ff4dbb-z5d4k\" (UID: \"2e967d7a-a1cf-44b9-ae66-62c4c5c81b55\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-z5d4k"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.603955 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmmgh\" (UniqueName: \"kubernetes.io/projected/28a07a44-f359-40b3-a2d4-850cb3822cb4-kube-api-access-nmmgh\") pod \"telemetry-operator-controller-manager-5fdb694969-zffr5\" (UID: \"28a07a44-f359-40b3-a2d4-850cb3822cb4\") " pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-zffr5"
Mar 09 18:40:20 crc kubenswrapper[4821]: E0309 18:40:20.605072 4821 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Mar 09 18:40:20 crc kubenswrapper[4821]: E0309 18:40:20.605116 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9d3c230-c74c-4cc4-af9f-f23fd5d9557c-cert podName:c9d3c230-c74c-4cc4-af9f-f23fd5d9557c nodeName:}" failed. No retries permitted until 2026-03-09 18:40:21.605101474 +0000 UTC m=+958.766477330 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c9d3c230-c74c-4cc4-af9f-f23fd5d9557c-cert") pod "infra-operator-controller-manager-f7fcc58b9-rldv2" (UID: "c9d3c230-c74c-4cc4-af9f-f23fd5d9557c") : secret "infra-operator-webhook-server-cert" not found
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.611930 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-64797568c9-7qbhc"]
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.628853 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xzdll"]
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.630223 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xzdll"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.638779 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-55pgj"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.645429 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmmgh\" (UniqueName: \"kubernetes.io/projected/28a07a44-f359-40b3-a2d4-850cb3822cb4-kube-api-access-nmmgh\") pod \"telemetry-operator-controller-manager-5fdb694969-zffr5\" (UID: \"28a07a44-f359-40b3-a2d4-850cb3822cb4\") " pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-zffr5"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.648656 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xzdll"]
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.669819 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-w9h2t"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.701099 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-wbjvw"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.706297 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-metrics-certs\") pod \"openstack-operator-controller-manager-64797568c9-7qbhc\" (UID: \"9162d85f-f6f9-4a12-8511-d11676a6398a\") " pod="openstack-operators/openstack-operator-controller-manager-64797568c9-7qbhc"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.706371 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fghjm\" (UniqueName: \"kubernetes.io/projected/9162d85f-f6f9-4a12-8511-d11676a6398a-kube-api-access-fghjm\") pod \"openstack-operator-controller-manager-64797568c9-7qbhc\" (UID: \"9162d85f-f6f9-4a12-8511-d11676a6398a\") " pod="openstack-operators/openstack-operator-controller-manager-64797568c9-7qbhc"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.706421 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-webhook-certs\") pod \"openstack-operator-controller-manager-64797568c9-7qbhc\" (UID: \"9162d85f-f6f9-4a12-8511-d11676a6398a\") " pod="openstack-operators/openstack-operator-controller-manager-64797568c9-7qbhc"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.706531 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p925v\" (UniqueName: \"kubernetes.io/projected/2e967d7a-a1cf-44b9-ae66-62c4c5c81b55-kube-api-access-p925v\") pod \"test-operator-controller-manager-55b5ff4dbb-z5d4k\" (UID: \"2e967d7a-a1cf-44b9-ae66-62c4c5c81b55\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-z5d4k"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.707121 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2tvh\" (UniqueName: \"kubernetes.io/projected/fddf64c4-c050-4195-9f07-bbd872ec8d48-kube-api-access-r2tvh\") pod \"watcher-operator-controller-manager-668c5c65dc-657jr\" (UID: \"fddf64c4-c050-4195-9f07-bbd872ec8d48\") " pod="openstack-operators/watcher-operator-controller-manager-668c5c65dc-657jr"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.707553 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnfjs\" (UniqueName: \"kubernetes.io/projected/172ecee8-2a7b-4e13-b095-ca2a442932d2-kube-api-access-dnfjs\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xzdll\" (UID: \"172ecee8-2a7b-4e13-b095-ca2a442932d2\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xzdll"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.723123 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k9s84"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.737480 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2tvh\" (UniqueName: \"kubernetes.io/projected/fddf64c4-c050-4195-9f07-bbd872ec8d48-kube-api-access-r2tvh\") pod \"watcher-operator-controller-manager-668c5c65dc-657jr\" (UID: \"fddf64c4-c050-4195-9f07-bbd872ec8d48\") " pod="openstack-operators/watcher-operator-controller-manager-668c5c65dc-657jr"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.737566 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p925v\" (UniqueName: \"kubernetes.io/projected/2e967d7a-a1cf-44b9-ae66-62c4c5c81b55-kube-api-access-p925v\") pod \"test-operator-controller-manager-55b5ff4dbb-z5d4k\" (UID: \"2e967d7a-a1cf-44b9-ae66-62c4c5c81b55\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-z5d4k"
Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.743021 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-zffr5" Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.776364 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-9vg4l"] Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.795743 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-hjztj"] Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.809235 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnfjs\" (UniqueName: \"kubernetes.io/projected/172ecee8-2a7b-4e13-b095-ca2a442932d2-kube-api-access-dnfjs\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xzdll\" (UID: \"172ecee8-2a7b-4e13-b095-ca2a442932d2\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xzdll" Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.813767 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-metrics-certs\") pod \"openstack-operator-controller-manager-64797568c9-7qbhc\" (UID: \"9162d85f-f6f9-4a12-8511-d11676a6398a\") " pod="openstack-operators/openstack-operator-controller-manager-64797568c9-7qbhc" Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.815288 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fghjm\" (UniqueName: \"kubernetes.io/projected/9162d85f-f6f9-4a12-8511-d11676a6398a-kube-api-access-fghjm\") pod \"openstack-operator-controller-manager-64797568c9-7qbhc\" (UID: \"9162d85f-f6f9-4a12-8511-d11676a6398a\") " pod="openstack-operators/openstack-operator-controller-manager-64797568c9-7qbhc" Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.817032 4821 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-webhook-certs\") pod \"openstack-operator-controller-manager-64797568c9-7qbhc\" (UID: \"9162d85f-f6f9-4a12-8511-d11676a6398a\") " pod="openstack-operators/openstack-operator-controller-manager-64797568c9-7qbhc" Mar 09 18:40:20 crc kubenswrapper[4821]: E0309 18:40:20.815880 4821 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 09 18:40:20 crc kubenswrapper[4821]: E0309 18:40:20.821074 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-metrics-certs podName:9162d85f-f6f9-4a12-8511-d11676a6398a nodeName:}" failed. No retries permitted until 2026-03-09 18:40:21.321020109 +0000 UTC m=+958.482395965 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-metrics-certs") pod "openstack-operator-controller-manager-64797568c9-7qbhc" (UID: "9162d85f-f6f9-4a12-8511-d11676a6398a") : secret "metrics-server-cert" not found Mar 09 18:40:20 crc kubenswrapper[4821]: E0309 18:40:20.817269 4821 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 09 18:40:20 crc kubenswrapper[4821]: E0309 18:40:20.821370 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-webhook-certs podName:9162d85f-f6f9-4a12-8511-d11676a6398a nodeName:}" failed. No retries permitted until 2026-03-09 18:40:21.321305467 +0000 UTC m=+958.482681403 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-webhook-certs") pod "openstack-operator-controller-manager-64797568c9-7qbhc" (UID: "9162d85f-f6f9-4a12-8511-d11676a6398a") : secret "webhook-server-cert" not found Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.829760 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnfjs\" (UniqueName: \"kubernetes.io/projected/172ecee8-2a7b-4e13-b095-ca2a442932d2-kube-api-access-dnfjs\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xzdll\" (UID: \"172ecee8-2a7b-4e13-b095-ca2a442932d2\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xzdll" Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.832406 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fghjm\" (UniqueName: \"kubernetes.io/projected/9162d85f-f6f9-4a12-8511-d11676a6398a-kube-api-access-fghjm\") pod \"openstack-operator-controller-manager-64797568c9-7qbhc\" (UID: \"9162d85f-f6f9-4a12-8511-d11676a6398a\") " pod="openstack-operators/openstack-operator-controller-manager-64797568c9-7qbhc" Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.840184 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-z5d4k" Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.941227 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-cjvgb"] Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.945604 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-668c5c65dc-657jr" Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.973294 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-k4q8b"] Mar 09 18:40:20 crc kubenswrapper[4821]: I0309 18:40:20.979611 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-5rbnb"] Mar 09 18:40:21 crc kubenswrapper[4821]: I0309 18:40:21.006640 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xzdll" Mar 09 18:40:21 crc kubenswrapper[4821]: I0309 18:40:21.021132 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/212b84ba-bcda-4820-8388-7d2ef286b7a1-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq\" (UID: \"212b84ba-bcda-4820-8388-7d2ef286b7a1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq" Mar 09 18:40:21 crc kubenswrapper[4821]: E0309 18:40:21.021292 4821 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 09 18:40:21 crc kubenswrapper[4821]: E0309 18:40:21.021350 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/212b84ba-bcda-4820-8388-7d2ef286b7a1-cert podName:212b84ba-bcda-4820-8388-7d2ef286b7a1 nodeName:}" failed. No retries permitted until 2026-03-09 18:40:22.02133689 +0000 UTC m=+959.182712746 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/212b84ba-bcda-4820-8388-7d2ef286b7a1-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq" (UID: "212b84ba-bcda-4820-8388-7d2ef286b7a1") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 09 18:40:21 crc kubenswrapper[4821]: I0309 18:40:21.042390 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-hjztj" event={"ID":"7507717c-322f-43de-88ba-fc79b6a5a3f0","Type":"ContainerStarted","Data":"f36c61a4b7aa30233b71478c51be8adebcbebb086a1b9ca729f498a96b6259e1"} Mar 09 18:40:21 crc kubenswrapper[4821]: I0309 18:40:21.044112 4821 generic.go:334] "Generic (PLEG): container finished" podID="2f95ff3a-6cad-4a3f-9b22-f7e265ec269c" containerID="8de31724744317efb427df8d6ecd493bfdc1f409d5ea98e1d4fc2a870ef99ede" exitCode=0 Mar 09 18:40:21 crc kubenswrapper[4821]: I0309 18:40:21.044157 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x557z" event={"ID":"2f95ff3a-6cad-4a3f-9b22-f7e265ec269c","Type":"ContainerDied","Data":"8de31724744317efb427df8d6ecd493bfdc1f409d5ea98e1d4fc2a870ef99ede"} Mar 09 18:40:21 crc kubenswrapper[4821]: I0309 18:40:21.044925 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-9vg4l" event={"ID":"9eb96ad1-a011-482f-bbdd-edfd673217b5","Type":"ContainerStarted","Data":"fe7cad8340c44073027712f5d177059b0f0f99ffab829f4f5d265364856b7607"} Mar 09 18:40:21 crc kubenswrapper[4821]: I0309 18:40:21.180100 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-vhljc"] Mar 09 18:40:21 crc kubenswrapper[4821]: W0309 18:40:21.219816 4821 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a1af309_4a43_4d58_8912_abc1ed1e626a.slice/crio-18fcd6845a57e2939420c31287a8c0e7f64d106cefa21c4391c1607ad0611b88 WatchSource:0}: Error finding container 18fcd6845a57e2939420c31287a8c0e7f64d106cefa21c4391c1607ad0611b88: Status 404 returned error can't find the container with id 18fcd6845a57e2939420c31287a8c0e7f64d106cefa21c4391c1607ad0611b88 Mar 09 18:40:21 crc kubenswrapper[4821]: I0309 18:40:21.326625 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-metrics-certs\") pod \"openstack-operator-controller-manager-64797568c9-7qbhc\" (UID: \"9162d85f-f6f9-4a12-8511-d11676a6398a\") " pod="openstack-operators/openstack-operator-controller-manager-64797568c9-7qbhc" Mar 09 18:40:21 crc kubenswrapper[4821]: I0309 18:40:21.326699 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-webhook-certs\") pod \"openstack-operator-controller-manager-64797568c9-7qbhc\" (UID: \"9162d85f-f6f9-4a12-8511-d11676a6398a\") " pod="openstack-operators/openstack-operator-controller-manager-64797568c9-7qbhc" Mar 09 18:40:21 crc kubenswrapper[4821]: E0309 18:40:21.326864 4821 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 09 18:40:21 crc kubenswrapper[4821]: E0309 18:40:21.326911 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-webhook-certs podName:9162d85f-f6f9-4a12-8511-d11676a6398a nodeName:}" failed. No retries permitted until 2026-03-09 18:40:22.3268974 +0000 UTC m=+959.488273256 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-webhook-certs") pod "openstack-operator-controller-manager-64797568c9-7qbhc" (UID: "9162d85f-f6f9-4a12-8511-d11676a6398a") : secret "webhook-server-cert" not found Mar 09 18:40:21 crc kubenswrapper[4821]: E0309 18:40:21.327184 4821 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 09 18:40:21 crc kubenswrapper[4821]: E0309 18:40:21.327289 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-metrics-certs podName:9162d85f-f6f9-4a12-8511-d11676a6398a nodeName:}" failed. No retries permitted until 2026-03-09 18:40:22.327258029 +0000 UTC m=+959.488633985 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-metrics-certs") pod "openstack-operator-controller-manager-64797568c9-7qbhc" (UID: "9162d85f-f6f9-4a12-8511-d11676a6398a") : secret "metrics-server-cert" not found Mar 09 18:40:21 crc kubenswrapper[4821]: I0309 18:40:21.631047 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9d3c230-c74c-4cc4-af9f-f23fd5d9557c-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-rldv2\" (UID: \"c9d3c230-c74c-4cc4-af9f-f23fd5d9557c\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-rldv2" Mar 09 18:40:21 crc kubenswrapper[4821]: E0309 18:40:21.632832 4821 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 09 18:40:21 crc kubenswrapper[4821]: E0309 18:40:21.632895 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9d3c230-c74c-4cc4-af9f-f23fd5d9557c-cert 
podName:c9d3c230-c74c-4cc4-af9f-f23fd5d9557c nodeName:}" failed. No retries permitted until 2026-03-09 18:40:23.63287585 +0000 UTC m=+960.794251706 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c9d3c230-c74c-4cc4-af9f-f23fd5d9557c-cert") pod "infra-operator-controller-manager-f7fcc58b9-rldv2" (UID: "c9d3c230-c74c-4cc4-af9f-f23fd5d9557c") : secret "infra-operator-webhook-server-cert" not found Mar 09 18:40:21 crc kubenswrapper[4821]: I0309 18:40:21.676368 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x557z" Mar 09 18:40:21 crc kubenswrapper[4821]: I0309 18:40:21.717466 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7c789f89c6-pc4fs"] Mar 09 18:40:21 crc kubenswrapper[4821]: I0309 18:40:21.736270 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-9jwph"] Mar 09 18:40:21 crc kubenswrapper[4821]: I0309 18:40:21.736285 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f95ff3a-6cad-4a3f-9b22-f7e265ec269c-catalog-content\") pod \"2f95ff3a-6cad-4a3f-9b22-f7e265ec269c\" (UID: \"2f95ff3a-6cad-4a3f-9b22-f7e265ec269c\") " Mar 09 18:40:21 crc kubenswrapper[4821]: I0309 18:40:21.736482 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6l8r7\" (UniqueName: \"kubernetes.io/projected/2f95ff3a-6cad-4a3f-9b22-f7e265ec269c-kube-api-access-6l8r7\") pod \"2f95ff3a-6cad-4a3f-9b22-f7e265ec269c\" (UID: \"2f95ff3a-6cad-4a3f-9b22-f7e265ec269c\") " Mar 09 18:40:21 crc kubenswrapper[4821]: I0309 18:40:21.736596 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2f95ff3a-6cad-4a3f-9b22-f7e265ec269c-utilities\") pod \"2f95ff3a-6cad-4a3f-9b22-f7e265ec269c\" (UID: \"2f95ff3a-6cad-4a3f-9b22-f7e265ec269c\") " Mar 09 18:40:21 crc kubenswrapper[4821]: I0309 18:40:21.738263 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f95ff3a-6cad-4a3f-9b22-f7e265ec269c-utilities" (OuterVolumeSpecName: "utilities") pod "2f95ff3a-6cad-4a3f-9b22-f7e265ec269c" (UID: "2f95ff3a-6cad-4a3f-9b22-f7e265ec269c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:40:21 crc kubenswrapper[4821]: I0309 18:40:21.745394 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f95ff3a-6cad-4a3f-9b22-f7e265ec269c-kube-api-access-6l8r7" (OuterVolumeSpecName: "kube-api-access-6l8r7") pod "2f95ff3a-6cad-4a3f-9b22-f7e265ec269c" (UID: "2f95ff3a-6cad-4a3f-9b22-f7e265ec269c"). InnerVolumeSpecName "kube-api-access-6l8r7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:40:21 crc kubenswrapper[4821]: W0309 18:40:21.769292 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb10f4933_a23d_4c0b_9834_40caa60b158c.slice/crio-be164357fc53f59010713b6bedd652d482c345d80c25ff11eab9fe6b7b20caf8 WatchSource:0}: Error finding container be164357fc53f59010713b6bedd652d482c345d80c25ff11eab9fe6b7b20caf8: Status 404 returned error can't find the container with id be164357fc53f59010713b6bedd652d482c345d80c25ff11eab9fe6b7b20caf8 Mar 09 18:40:21 crc kubenswrapper[4821]: I0309 18:40:21.781534 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-dhq9j"] Mar 09 18:40:21 crc kubenswrapper[4821]: W0309 18:40:21.788651 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod772511ff_89ac_4190_8142_3bf3e4ef8423.slice/crio-4670a0aa52f2fcb152aa6e5d1e680d2f43a11305963145a999d01b641165fa32 WatchSource:0}: Error finding container 4670a0aa52f2fcb152aa6e5d1e680d2f43a11305963145a999d01b641165fa32: Status 404 returned error can't find the container with id 4670a0aa52f2fcb152aa6e5d1e680d2f43a11305963145a999d01b641165fa32 Mar 09 18:40:21 crc kubenswrapper[4821]: I0309 18:40:21.789629 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-wzvf8"] Mar 09 18:40:21 crc kubenswrapper[4821]: I0309 18:40:21.839184 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f95ff3a-6cad-4a3f-9b22-f7e265ec269c-utilities\") on node \"crc\" DevicePath \"\"" Mar 09 18:40:21 crc kubenswrapper[4821]: I0309 18:40:21.839215 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6l8r7\" (UniqueName: 
\"kubernetes.io/projected/2f95ff3a-6cad-4a3f-9b22-f7e265ec269c-kube-api-access-6l8r7\") on node \"crc\" DevicePath \"\"" Mar 09 18:40:21 crc kubenswrapper[4821]: I0309 18:40:21.868612 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f95ff3a-6cad-4a3f-9b22-f7e265ec269c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f95ff3a-6cad-4a3f-9b22-f7e265ec269c" (UID: "2f95ff3a-6cad-4a3f-9b22-f7e265ec269c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:40:21 crc kubenswrapper[4821]: I0309 18:40:21.941242 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f95ff3a-6cad-4a3f-9b22-f7e265ec269c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 09 18:40:22 crc kubenswrapper[4821]: I0309 18:40:22.046062 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/212b84ba-bcda-4820-8388-7d2ef286b7a1-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq\" (UID: \"212b84ba-bcda-4820-8388-7d2ef286b7a1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq" Mar 09 18:40:22 crc kubenswrapper[4821]: E0309 18:40:22.046524 4821 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 09 18:40:22 crc kubenswrapper[4821]: I0309 18:40:22.057818 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-qz976"] Mar 09 18:40:22 crc kubenswrapper[4821]: E0309 18:40:22.060954 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/212b84ba-bcda-4820-8388-7d2ef286b7a1-cert podName:212b84ba-bcda-4820-8388-7d2ef286b7a1 nodeName:}" failed. 
No retries permitted until 2026-03-09 18:40:24.060920338 +0000 UTC m=+961.222296194 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/212b84ba-bcda-4820-8388-7d2ef286b7a1-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq" (UID: "212b84ba-bcda-4820-8388-7d2ef286b7a1") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 09 18:40:22 crc kubenswrapper[4821]: I0309 18:40:22.060986 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-vhljc" event={"ID":"100889e4-2f00-4685-a5a7-6f9b73bb343f","Type":"ContainerStarted","Data":"7c366292121124c30136cb413e54e34324f77430904aa509d7095216cd91293f"} Mar 09 18:40:22 crc kubenswrapper[4821]: I0309 18:40:22.061020 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-5rbnb" event={"ID":"bb4823b7-c205-41c0-ba4d-d909ad9ff9cb","Type":"ContainerStarted","Data":"840569ab40dcc9bb9e1f977058e546af0d08177d632c65839fb78ad4f81debcf"} Mar 09 18:40:22 crc kubenswrapper[4821]: I0309 18:40:22.061577 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-k4q8b" event={"ID":"0b492a45-c917-4c00-abef-13abf40e71d1","Type":"ContainerStarted","Data":"29eda6758d1bf21f99ba704e27ea9598570b839e14e676d41524f5f0a1376dee"} Mar 09 18:40:22 crc kubenswrapper[4821]: I0309 18:40:22.069855 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x557z" event={"ID":"2f95ff3a-6cad-4a3f-9b22-f7e265ec269c","Type":"ContainerDied","Data":"e5c0cdb47da92769b4a07b285aeff8d4350f81e0d1d4b707cf6fd8288ef233d8"} Mar 09 18:40:22 crc kubenswrapper[4821]: I0309 18:40:22.069909 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x557z" Mar 09 18:40:22 crc kubenswrapper[4821]: I0309 18:40:22.069921 4821 scope.go:117] "RemoveContainer" containerID="8de31724744317efb427df8d6ecd493bfdc1f409d5ea98e1d4fc2a870ef99ede" Mar 09 18:40:22 crc kubenswrapper[4821]: I0309 18:40:22.071193 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-6864w"] Mar 09 18:40:22 crc kubenswrapper[4821]: I0309 18:40:22.083464 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k9s84"] Mar 09 18:40:22 crc kubenswrapper[4821]: I0309 18:40:22.088118 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-dhq9j" event={"ID":"d878ceb7-5af9-4a91-82cb-ed03b73f1b1d","Type":"ContainerStarted","Data":"dc67a2a95b74da9e7276923d4d0edf9c2d3ada70cdfd36bbbf5a0ace86b9a289"} Mar 09 18:40:22 crc kubenswrapper[4821]: W0309 18:40:22.097027 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod528fcc81_e85c_4764_9413_3957ba8c6fd2.slice/crio-8b6617722272abb64818e0b5bf4f267f3bcb63b740fcf260f0b57d40540fc0f9 WatchSource:0}: Error finding container 8b6617722272abb64818e0b5bf4f267f3bcb63b740fcf260f0b57d40540fc0f9: Status 404 returned error can't find the container with id 8b6617722272abb64818e0b5bf4f267f3bcb63b740fcf260f0b57d40540fc0f9 Mar 09 18:40:22 crc kubenswrapper[4821]: I0309 18:40:22.097118 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-9jwph" event={"ID":"b10f4933-a23d-4c0b-9834-40caa60b158c","Type":"ContainerStarted","Data":"be164357fc53f59010713b6bedd652d482c345d80c25ff11eab9fe6b7b20caf8"} Mar 09 18:40:22 crc kubenswrapper[4821]: I0309 18:40:22.097223 4821 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-974k8"] Mar 09 18:40:22 crc kubenswrapper[4821]: I0309 18:40:22.101208 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-wzvf8" event={"ID":"772511ff-89ac-4190-8142-3bf3e4ef8423","Type":"ContainerStarted","Data":"4670a0aa52f2fcb152aa6e5d1e680d2f43a11305963145a999d01b641165fa32"} Mar 09 18:40:22 crc kubenswrapper[4821]: I0309 18:40:22.114847 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-cjvgb" event={"ID":"0a1af309-4a43-4d58-8912-abc1ed1e626a","Type":"ContainerStarted","Data":"18fcd6845a57e2939420c31287a8c0e7f64d106cefa21c4391c1607ad0611b88"} Mar 09 18:40:22 crc kubenswrapper[4821]: I0309 18:40:22.135573 4821 scope.go:117] "RemoveContainer" containerID="6b30cdbc0850ceb0bf9bfe97a99e1df29879895d3310cf2e9301488516f07122" Mar 09 18:40:22 crc kubenswrapper[4821]: I0309 18:40:22.137959 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-pc4fs" event={"ID":"71c62d87-8310-4ebd-8449-df18a56dc391","Type":"ContainerStarted","Data":"515e951b63c70b9ba5c4e0b6529be11ac28ebb435b67dfdd609ae56809647a19"} Mar 09 18:40:22 crc kubenswrapper[4821]: I0309 18:40:22.140346 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fdb694969-zffr5"] Mar 09 18:40:22 crc kubenswrapper[4821]: I0309 18:40:22.159914 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-wbjvw"] Mar 09 18:40:22 crc kubenswrapper[4821]: I0309 18:40:22.166461 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-668c5c65dc-657jr"] Mar 09 18:40:22 crc kubenswrapper[4821]: W0309 18:40:22.167229 4821 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfddf64c4_c050_4195_9f07_bbd872ec8d48.slice/crio-91c63186504e7db9cf7858570a47f6813167fef3a181d5e313061a83d614c674 WatchSource:0}: Error finding container 91c63186504e7db9cf7858570a47f6813167fef3a181d5e313061a83d614c674: Status 404 returned error can't find the container with id 91c63186504e7db9cf7858570a47f6813167fef3a181d5e313061a83d614c674 Mar 09 18:40:22 crc kubenswrapper[4821]: E0309 18:40:22.169111 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.110:5001/openstack-k8s-operators/watcher-operator:2e035aad6e396aeb72cc6aec8684c43e59f8b674,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r2tvh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-668c5c65dc-657jr_openstack-operators(fddf64c4-c050-4195-9f07-bbd872ec8d48): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 09 18:40:22 crc kubenswrapper[4821]: E0309 18:40:22.171400 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-668c5c65dc-657jr" podUID="fddf64c4-c050-4195-9f07-bbd872ec8d48" Mar 09 18:40:22 crc kubenswrapper[4821]: I0309 18:40:22.176919 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-w9h2t"] Mar 09 18:40:22 crc kubenswrapper[4821]: W0309 18:40:22.177098 4821 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e967d7a_a1cf_44b9_ae66_62c4c5c81b55.slice/crio-b62c6e9941d81a55a0ad05d54a7f440e186c031bcb242f4bedb610fff016c343 WatchSource:0}: Error finding container b62c6e9941d81a55a0ad05d54a7f440e186c031bcb242f4bedb610fff016c343: Status 404 returned error can't find the container with id b62c6e9941d81a55a0ad05d54a7f440e186c031bcb242f4bedb610fff016c343 Mar 09 18:40:22 crc kubenswrapper[4821]: I0309 18:40:22.184284 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xzdll"] Mar 09 18:40:22 crc kubenswrapper[4821]: I0309 18:40:22.189564 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-z5d4k"] Mar 09 18:40:22 crc kubenswrapper[4821]: E0309 18:40:22.190623 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p925v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-55b5ff4dbb-z5d4k_openstack-operators(2e967d7a-a1cf-44b9-ae66-62c4c5c81b55): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 09 18:40:22 crc kubenswrapper[4821]: E0309 18:40:22.191919 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-z5d4k" podUID="2e967d7a-a1cf-44b9-ae66-62c4c5c81b55" Mar 09 18:40:22 crc 
kubenswrapper[4821]: I0309 18:40:22.196109 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x557z"] Mar 09 18:40:22 crc kubenswrapper[4821]: W0309 18:40:22.198283 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd498150e_134b_4359_92c6_300b8fbe3b1a.slice/crio-66d07b2baf87157f3887708c03f3c3e3fb24a2b41c5f65381102899fc4e9d165 WatchSource:0}: Error finding container 66d07b2baf87157f3887708c03f3c3e3fb24a2b41c5f65381102899fc4e9d165: Status 404 returned error can't find the container with id 66d07b2baf87157f3887708c03f3c3e3fb24a2b41c5f65381102899fc4e9d165 Mar 09 18:40:22 crc kubenswrapper[4821]: I0309 18:40:22.201703 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x557z"] Mar 09 18:40:22 crc kubenswrapper[4821]: W0309 18:40:22.205434 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bc651e4_1359_43b1_bc53_1a561195cf4a.slice/crio-48c2486fc5d03061dc260e57620301744614a65c5c515394dbd0678d846f4468 WatchSource:0}: Error finding container 48c2486fc5d03061dc260e57620301744614a65c5c515394dbd0678d846f4468: Status 404 returned error can't find the container with id 48c2486fc5d03061dc260e57620301744614a65c5c515394dbd0678d846f4468 Mar 09 18:40:22 crc kubenswrapper[4821]: E0309 18:40:22.208590 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:bb939885bd04593ad03af901adb77ee2a2d18529b328c23288c7cc7a2ba5282e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pj9qc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-648564c9fc-wbjvw_openstack-operators(d498150e-134b-4359-92c6-300b8fbe3b1a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 09 18:40:22 crc kubenswrapper[4821]: E0309 18:40:22.210694 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-wbjvw" podUID="d498150e-134b-4359-92c6-300b8fbe3b1a" Mar 09 18:40:22 crc kubenswrapper[4821]: I0309 18:40:22.224775 4821 scope.go:117] "RemoveContainer" containerID="5baefcd06f404c0f51ed21006ff4decce43fc1ff5bea166cdf3c4ad0e991df48" Mar 09 18:40:22 crc kubenswrapper[4821]: E0309 18:40:22.225182 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:9f73c84a9581b5739d8da333c7b64403d7b7ca284b22c624d0effe07f3d2819c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hv6nw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-75684d597f-w9h2t_openstack-operators(6bc651e4-1359-43b1-bc53-1a561195cf4a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 09 18:40:22 crc kubenswrapper[4821]: E0309 18:40:22.226288 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-w9h2t" podUID="6bc651e4-1359-43b1-bc53-1a561195cf4a" Mar 09 18:40:22 crc kubenswrapper[4821]: I0309 18:40:22.364551 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-metrics-certs\") pod \"openstack-operator-controller-manager-64797568c9-7qbhc\" (UID: \"9162d85f-f6f9-4a12-8511-d11676a6398a\") " pod="openstack-operators/openstack-operator-controller-manager-64797568c9-7qbhc" Mar 09 18:40:22 crc kubenswrapper[4821]: I0309 18:40:22.364624 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-webhook-certs\") pod \"openstack-operator-controller-manager-64797568c9-7qbhc\" (UID: \"9162d85f-f6f9-4a12-8511-d11676a6398a\") " pod="openstack-operators/openstack-operator-controller-manager-64797568c9-7qbhc" Mar 09 18:40:22 crc kubenswrapper[4821]: E0309 18:40:22.364806 4821 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 09 18:40:22 crc kubenswrapper[4821]: E0309 18:40:22.364858 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-webhook-certs podName:9162d85f-f6f9-4a12-8511-d11676a6398a nodeName:}" failed. No retries permitted until 2026-03-09 18:40:24.364842763 +0000 UTC m=+961.526218619 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-webhook-certs") pod "openstack-operator-controller-manager-64797568c9-7qbhc" (UID: "9162d85f-f6f9-4a12-8511-d11676a6398a") : secret "webhook-server-cert" not found Mar 09 18:40:22 crc kubenswrapper[4821]: E0309 18:40:22.365144 4821 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 09 18:40:22 crc kubenswrapper[4821]: E0309 18:40:22.365176 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-metrics-certs podName:9162d85f-f6f9-4a12-8511-d11676a6398a nodeName:}" failed. No retries permitted until 2026-03-09 18:40:24.365166181 +0000 UTC m=+961.526542037 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-metrics-certs") pod "openstack-operator-controller-manager-64797568c9-7qbhc" (UID: "9162d85f-f6f9-4a12-8511-d11676a6398a") : secret "metrics-server-cert" not found Mar 09 18:40:23 crc kubenswrapper[4821]: I0309 18:40:23.155551 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-668c5c65dc-657jr" event={"ID":"fddf64c4-c050-4195-9f07-bbd872ec8d48","Type":"ContainerStarted","Data":"91c63186504e7db9cf7858570a47f6813167fef3a181d5e313061a83d614c674"} Mar 09 18:40:23 crc kubenswrapper[4821]: E0309 18:40:23.157195 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.110:5001/openstack-k8s-operators/watcher-operator:2e035aad6e396aeb72cc6aec8684c43e59f8b674\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-668c5c65dc-657jr" podUID="fddf64c4-c050-4195-9f07-bbd872ec8d48" Mar 09 18:40:23 crc kubenswrapper[4821]: I0309 18:40:23.161139 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-w9h2t" event={"ID":"6bc651e4-1359-43b1-bc53-1a561195cf4a","Type":"ContainerStarted","Data":"48c2486fc5d03061dc260e57620301744614a65c5c515394dbd0678d846f4468"} Mar 09 18:40:23 crc kubenswrapper[4821]: E0309 18:40:23.162312 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:9f73c84a9581b5739d8da333c7b64403d7b7ca284b22c624d0effe07f3d2819c\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-w9h2t" podUID="6bc651e4-1359-43b1-bc53-1a561195cf4a" Mar 09 18:40:23 crc kubenswrapper[4821]: I0309 18:40:23.167872 4821 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xzdll" event={"ID":"172ecee8-2a7b-4e13-b095-ca2a442932d2","Type":"ContainerStarted","Data":"5902a9790b35029f3a7242402eb7d1ebed898c2bd4c4813e442e5935b52edb91"} Mar 09 18:40:23 crc kubenswrapper[4821]: I0309 18:40:23.178749 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-6864w" event={"ID":"d1eba3e1-a741-4ca6-a97e-c42565f64d2b","Type":"ContainerStarted","Data":"b2ee1bd4119b51c9ed84a234429d2952f8fbe0fe395d9c8232a4eabc948a8cd6"} Mar 09 18:40:23 crc kubenswrapper[4821]: I0309 18:40:23.185779 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-qz976" event={"ID":"d6f3f569-2d6b-4c06-a814-de946397de51","Type":"ContainerStarted","Data":"2840640b813fa2bb090c3569f605c3ae08618993a14aa792ea4a8e672f54c265"} Mar 09 18:40:23 crc kubenswrapper[4821]: I0309 18:40:23.189141 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-wbjvw" event={"ID":"d498150e-134b-4359-92c6-300b8fbe3b1a","Type":"ContainerStarted","Data":"66d07b2baf87157f3887708c03f3c3e3fb24a2b41c5f65381102899fc4e9d165"} Mar 09 18:40:23 crc kubenswrapper[4821]: I0309 18:40:23.190719 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-z5d4k" event={"ID":"2e967d7a-a1cf-44b9-ae66-62c4c5c81b55","Type":"ContainerStarted","Data":"b62c6e9941d81a55a0ad05d54a7f440e186c031bcb242f4bedb610fff016c343"} Mar 09 18:40:23 crc kubenswrapper[4821]: E0309 18:40:23.193195 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/placement-operator@sha256:bb939885bd04593ad03af901adb77ee2a2d18529b328c23288c7cc7a2ba5282e\\\"\"" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-wbjvw" podUID="d498150e-134b-4359-92c6-300b8fbe3b1a" Mar 09 18:40:23 crc kubenswrapper[4821]: E0309 18:40:23.193257 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968\\\"\"" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-z5d4k" podUID="2e967d7a-a1cf-44b9-ae66-62c4c5c81b55" Mar 09 18:40:23 crc kubenswrapper[4821]: I0309 18:40:23.194044 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-zffr5" event={"ID":"28a07a44-f359-40b3-a2d4-850cb3822cb4","Type":"ContainerStarted","Data":"60ba6c999e4d7e1ad51c9e4658e3fd3251727f03d4f894019f8f1013c982268d"} Mar 09 18:40:23 crc kubenswrapper[4821]: I0309 18:40:23.195662 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k9s84" event={"ID":"528fcc81-e85c-4764-9413-3957ba8c6fd2","Type":"ContainerStarted","Data":"8b6617722272abb64818e0b5bf4f267f3bcb63b740fcf260f0b57d40540fc0f9"} Mar 09 18:40:23 crc kubenswrapper[4821]: I0309 18:40:23.196681 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54688575f-974k8" event={"ID":"89a79a12-ce90-47f7-b0c4-c0976d7a4b1f","Type":"ContainerStarted","Data":"7c48688fd68fa4d8be7e476367c2e0b4b7ba0e27098b5333daa1c948dc3d9dc6"} Mar 09 18:40:23 crc kubenswrapper[4821]: I0309 18:40:23.592429 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f95ff3a-6cad-4a3f-9b22-f7e265ec269c" 
path="/var/lib/kubelet/pods/2f95ff3a-6cad-4a3f-9b22-f7e265ec269c/volumes" Mar 09 18:40:23 crc kubenswrapper[4821]: I0309 18:40:23.688364 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9d3c230-c74c-4cc4-af9f-f23fd5d9557c-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-rldv2\" (UID: \"c9d3c230-c74c-4cc4-af9f-f23fd5d9557c\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-rldv2" Mar 09 18:40:23 crc kubenswrapper[4821]: E0309 18:40:23.688563 4821 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 09 18:40:23 crc kubenswrapper[4821]: E0309 18:40:23.688660 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9d3c230-c74c-4cc4-af9f-f23fd5d9557c-cert podName:c9d3c230-c74c-4cc4-af9f-f23fd5d9557c nodeName:}" failed. No retries permitted until 2026-03-09 18:40:27.688624908 +0000 UTC m=+964.850000764 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c9d3c230-c74c-4cc4-af9f-f23fd5d9557c-cert") pod "infra-operator-controller-manager-f7fcc58b9-rldv2" (UID: "c9d3c230-c74c-4cc4-af9f-f23fd5d9557c") : secret "infra-operator-webhook-server-cert" not found Mar 09 18:40:24 crc kubenswrapper[4821]: I0309 18:40:24.097496 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/212b84ba-bcda-4820-8388-7d2ef286b7a1-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq\" (UID: \"212b84ba-bcda-4820-8388-7d2ef286b7a1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq" Mar 09 18:40:24 crc kubenswrapper[4821]: E0309 18:40:24.097709 4821 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 09 18:40:24 crc kubenswrapper[4821]: E0309 18:40:24.097789 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/212b84ba-bcda-4820-8388-7d2ef286b7a1-cert podName:212b84ba-bcda-4820-8388-7d2ef286b7a1 nodeName:}" failed. No retries permitted until 2026-03-09 18:40:28.097770751 +0000 UTC m=+965.259146597 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/212b84ba-bcda-4820-8388-7d2ef286b7a1-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq" (UID: "212b84ba-bcda-4820-8388-7d2ef286b7a1") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 09 18:40:24 crc kubenswrapper[4821]: E0309 18:40:24.212666 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:9f73c84a9581b5739d8da333c7b64403d7b7ca284b22c624d0effe07f3d2819c\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-w9h2t" podUID="6bc651e4-1359-43b1-bc53-1a561195cf4a" Mar 09 18:40:24 crc kubenswrapper[4821]: E0309 18:40:24.214486 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968\\\"\"" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-z5d4k" podUID="2e967d7a-a1cf-44b9-ae66-62c4c5c81b55" Mar 09 18:40:24 crc kubenswrapper[4821]: E0309 18:40:24.214580 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:bb939885bd04593ad03af901adb77ee2a2d18529b328c23288c7cc7a2ba5282e\\\"\"" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-wbjvw" podUID="d498150e-134b-4359-92c6-300b8fbe3b1a" Mar 09 18:40:24 crc kubenswrapper[4821]: E0309 18:40:24.214648 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"38.102.83.110:5001/openstack-k8s-operators/watcher-operator:2e035aad6e396aeb72cc6aec8684c43e59f8b674\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-668c5c65dc-657jr" podUID="fddf64c4-c050-4195-9f07-bbd872ec8d48" Mar 09 18:40:24 crc kubenswrapper[4821]: I0309 18:40:24.412586 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-metrics-certs\") pod \"openstack-operator-controller-manager-64797568c9-7qbhc\" (UID: \"9162d85f-f6f9-4a12-8511-d11676a6398a\") " pod="openstack-operators/openstack-operator-controller-manager-64797568c9-7qbhc" Mar 09 18:40:24 crc kubenswrapper[4821]: I0309 18:40:24.412646 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-webhook-certs\") pod \"openstack-operator-controller-manager-64797568c9-7qbhc\" (UID: \"9162d85f-f6f9-4a12-8511-d11676a6398a\") " pod="openstack-operators/openstack-operator-controller-manager-64797568c9-7qbhc" Mar 09 18:40:24 crc kubenswrapper[4821]: E0309 18:40:24.412752 4821 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 09 18:40:24 crc kubenswrapper[4821]: E0309 18:40:24.412800 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-webhook-certs podName:9162d85f-f6f9-4a12-8511-d11676a6398a nodeName:}" failed. No retries permitted until 2026-03-09 18:40:28.412784797 +0000 UTC m=+965.574160653 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-webhook-certs") pod "openstack-operator-controller-manager-64797568c9-7qbhc" (UID: "9162d85f-f6f9-4a12-8511-d11676a6398a") : secret "webhook-server-cert" not found Mar 09 18:40:24 crc kubenswrapper[4821]: E0309 18:40:24.413088 4821 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 09 18:40:24 crc kubenswrapper[4821]: E0309 18:40:24.413119 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-metrics-certs podName:9162d85f-f6f9-4a12-8511-d11676a6398a nodeName:}" failed. No retries permitted until 2026-03-09 18:40:28.413110487 +0000 UTC m=+965.574486343 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-metrics-certs") pod "openstack-operator-controller-manager-64797568c9-7qbhc" (UID: "9162d85f-f6f9-4a12-8511-d11676a6398a") : secret "metrics-server-cert" not found Mar 09 18:40:25 crc kubenswrapper[4821]: I0309 18:40:25.472524 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tshfs" Mar 09 18:40:25 crc kubenswrapper[4821]: I0309 18:40:25.472579 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tshfs" Mar 09 18:40:25 crc kubenswrapper[4821]: I0309 18:40:25.524545 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tshfs" Mar 09 18:40:26 crc kubenswrapper[4821]: I0309 18:40:26.282232 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tshfs" Mar 09 18:40:26 crc kubenswrapper[4821]: I0309 18:40:26.376547 4821 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tshfs"] Mar 09 18:40:27 crc kubenswrapper[4821]: I0309 18:40:27.761423 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9d3c230-c74c-4cc4-af9f-f23fd5d9557c-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-rldv2\" (UID: \"c9d3c230-c74c-4cc4-af9f-f23fd5d9557c\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-rldv2" Mar 09 18:40:27 crc kubenswrapper[4821]: E0309 18:40:27.761731 4821 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 09 18:40:27 crc kubenswrapper[4821]: E0309 18:40:27.761787 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9d3c230-c74c-4cc4-af9f-f23fd5d9557c-cert podName:c9d3c230-c74c-4cc4-af9f-f23fd5d9557c nodeName:}" failed. No retries permitted until 2026-03-09 18:40:35.761768528 +0000 UTC m=+972.923144384 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c9d3c230-c74c-4cc4-af9f-f23fd5d9557c-cert") pod "infra-operator-controller-manager-f7fcc58b9-rldv2" (UID: "c9d3c230-c74c-4cc4-af9f-f23fd5d9557c") : secret "infra-operator-webhook-server-cert" not found Mar 09 18:40:28 crc kubenswrapper[4821]: I0309 18:40:28.172124 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/212b84ba-bcda-4820-8388-7d2ef286b7a1-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq\" (UID: \"212b84ba-bcda-4820-8388-7d2ef286b7a1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq" Mar 09 18:40:28 crc kubenswrapper[4821]: E0309 18:40:28.172450 4821 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 09 18:40:28 crc kubenswrapper[4821]: E0309 18:40:28.172545 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/212b84ba-bcda-4820-8388-7d2ef286b7a1-cert podName:212b84ba-bcda-4820-8388-7d2ef286b7a1 nodeName:}" failed. No retries permitted until 2026-03-09 18:40:36.172520925 +0000 UTC m=+973.333896821 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/212b84ba-bcda-4820-8388-7d2ef286b7a1-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq" (UID: "212b84ba-bcda-4820-8388-7d2ef286b7a1") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 09 18:40:28 crc kubenswrapper[4821]: I0309 18:40:28.244356 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tshfs" podUID="b4893fed-97e2-4ce7-99c7-cef6709e7cb7" containerName="registry-server" containerID="cri-o://5e22495735fb8d88aa7bf4b596456070d88d3c79053b520e8e3e8221f21235c7" gracePeriod=2 Mar 09 18:40:28 crc kubenswrapper[4821]: I0309 18:40:28.477182 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-metrics-certs\") pod \"openstack-operator-controller-manager-64797568c9-7qbhc\" (UID: \"9162d85f-f6f9-4a12-8511-d11676a6398a\") " pod="openstack-operators/openstack-operator-controller-manager-64797568c9-7qbhc" Mar 09 18:40:28 crc kubenswrapper[4821]: I0309 18:40:28.477704 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-webhook-certs\") pod \"openstack-operator-controller-manager-64797568c9-7qbhc\" (UID: \"9162d85f-f6f9-4a12-8511-d11676a6398a\") " pod="openstack-operators/openstack-operator-controller-manager-64797568c9-7qbhc" Mar 09 18:40:28 crc kubenswrapper[4821]: E0309 18:40:28.477468 4821 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 09 18:40:28 crc kubenswrapper[4821]: E0309 18:40:28.478170 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-metrics-certs 
podName:9162d85f-f6f9-4a12-8511-d11676a6398a nodeName:}" failed. No retries permitted until 2026-03-09 18:40:36.478137306 +0000 UTC m=+973.639513192 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-metrics-certs") pod "openstack-operator-controller-manager-64797568c9-7qbhc" (UID: "9162d85f-f6f9-4a12-8511-d11676a6398a") : secret "metrics-server-cert" not found Mar 09 18:40:28 crc kubenswrapper[4821]: E0309 18:40:28.479466 4821 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 09 18:40:28 crc kubenswrapper[4821]: E0309 18:40:28.480594 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-webhook-certs podName:9162d85f-f6f9-4a12-8511-d11676a6398a nodeName:}" failed. No retries permitted until 2026-03-09 18:40:36.480564341 +0000 UTC m=+973.641940227 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-webhook-certs") pod "openstack-operator-controller-manager-64797568c9-7qbhc" (UID: "9162d85f-f6f9-4a12-8511-d11676a6398a") : secret "webhook-server-cert" not found Mar 09 18:40:29 crc kubenswrapper[4821]: I0309 18:40:29.256270 4821 generic.go:334] "Generic (PLEG): container finished" podID="b4893fed-97e2-4ce7-99c7-cef6709e7cb7" containerID="5e22495735fb8d88aa7bf4b596456070d88d3c79053b520e8e3e8221f21235c7" exitCode=0 Mar 09 18:40:29 crc kubenswrapper[4821]: I0309 18:40:29.256345 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tshfs" event={"ID":"b4893fed-97e2-4ce7-99c7-cef6709e7cb7","Type":"ContainerDied","Data":"5e22495735fb8d88aa7bf4b596456070d88d3c79053b520e8e3e8221f21235c7"} Mar 09 18:40:29 crc kubenswrapper[4821]: I0309 18:40:29.913958 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 09 18:40:29 crc kubenswrapper[4821]: I0309 18:40:29.914043 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 09 18:40:29 crc kubenswrapper[4821]: I0309 18:40:29.914103 4821 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" Mar 09 18:40:29 crc kubenswrapper[4821]: I0309 18:40:29.914845 4821 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"c46da8911503c236934f3f2a2bf1a46aa040191100207d7942fc6bf2c08ce6de"} pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 09 18:40:29 crc kubenswrapper[4821]: I0309 18:40:29.914922 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" containerID="cri-o://c46da8911503c236934f3f2a2bf1a46aa040191100207d7942fc6bf2c08ce6de" gracePeriod=600 Mar 09 18:40:30 crc kubenswrapper[4821]: I0309 18:40:30.266977 4821 generic.go:334] "Generic (PLEG): container finished" podID="3270571a-a484-4e66-8035-f43509b58add" containerID="c46da8911503c236934f3f2a2bf1a46aa040191100207d7942fc6bf2c08ce6de" exitCode=0 Mar 09 18:40:30 crc kubenswrapper[4821]: I0309 18:40:30.267067 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" event={"ID":"3270571a-a484-4e66-8035-f43509b58add","Type":"ContainerDied","Data":"c46da8911503c236934f3f2a2bf1a46aa040191100207d7942fc6bf2c08ce6de"} Mar 09 18:40:30 crc kubenswrapper[4821]: I0309 18:40:30.267140 4821 scope.go:117] "RemoveContainer" containerID="de40e97a09448ee0292ed23dff4aa5fe956489128d71db5d125451ab26a025aa" Mar 09 18:40:35 crc kubenswrapper[4821]: E0309 18:40:35.212519 4821 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:7961c67cfc87de69055f8330771af625f73d857426c4bb17ebb888ead843fff3" Mar 09 18:40:35 crc kubenswrapper[4821]: E0309 18:40:35.213102 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:7961c67cfc87de69055f8330771af625f73d857426c4bb17ebb888ead843fff3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6wwkf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-55d77d7b5c-cjvgb_openstack-operators(0a1af309-4a43-4d58-8912-abc1ed1e626a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 09 18:40:35 crc kubenswrapper[4821]: E0309 18:40:35.214421 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-cjvgb" podUID="0a1af309-4a43-4d58-8912-abc1ed1e626a" Mar 09 18:40:35 crc kubenswrapper[4821]: E0309 18:40:35.323285 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:7961c67cfc87de69055f8330771af625f73d857426c4bb17ebb888ead843fff3\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-cjvgb" podUID="0a1af309-4a43-4d58-8912-abc1ed1e626a" Mar 09 18:40:35 crc kubenswrapper[4821]: E0309 18:40:35.473467 4821 log.go:32] "ExecSync cmd from runtime service 
failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5e22495735fb8d88aa7bf4b596456070d88d3c79053b520e8e3e8221f21235c7 is running failed: container process not found" containerID="5e22495735fb8d88aa7bf4b596456070d88d3c79053b520e8e3e8221f21235c7" cmd=["grpc_health_probe","-addr=:50051"] Mar 09 18:40:35 crc kubenswrapper[4821]: E0309 18:40:35.474063 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5e22495735fb8d88aa7bf4b596456070d88d3c79053b520e8e3e8221f21235c7 is running failed: container process not found" containerID="5e22495735fb8d88aa7bf4b596456070d88d3c79053b520e8e3e8221f21235c7" cmd=["grpc_health_probe","-addr=:50051"] Mar 09 18:40:35 crc kubenswrapper[4821]: E0309 18:40:35.474333 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5e22495735fb8d88aa7bf4b596456070d88d3c79053b520e8e3e8221f21235c7 is running failed: container process not found" containerID="5e22495735fb8d88aa7bf4b596456070d88d3c79053b520e8e3e8221f21235c7" cmd=["grpc_health_probe","-addr=:50051"] Mar 09 18:40:35 crc kubenswrapper[4821]: E0309 18:40:35.474372 4821 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5e22495735fb8d88aa7bf4b596456070d88d3c79053b520e8e3e8221f21235c7 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-tshfs" podUID="b4893fed-97e2-4ce7-99c7-cef6709e7cb7" containerName="registry-server" Mar 09 18:40:35 crc kubenswrapper[4821]: I0309 18:40:35.782749 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9d3c230-c74c-4cc4-af9f-f23fd5d9557c-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-rldv2\" (UID: 
\"c9d3c230-c74c-4cc4-af9f-f23fd5d9557c\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-rldv2" Mar 09 18:40:35 crc kubenswrapper[4821]: E0309 18:40:35.782946 4821 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 09 18:40:35 crc kubenswrapper[4821]: E0309 18:40:35.783020 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9d3c230-c74c-4cc4-af9f-f23fd5d9557c-cert podName:c9d3c230-c74c-4cc4-af9f-f23fd5d9557c nodeName:}" failed. No retries permitted until 2026-03-09 18:40:51.783001875 +0000 UTC m=+988.944377731 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c9d3c230-c74c-4cc4-af9f-f23fd5d9557c-cert") pod "infra-operator-controller-manager-f7fcc58b9-rldv2" (UID: "c9d3c230-c74c-4cc4-af9f-f23fd5d9557c") : secret "infra-operator-webhook-server-cert" not found Mar 09 18:40:35 crc kubenswrapper[4821]: E0309 18:40:35.849806 4821 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26" Mar 09 18:40:35 crc kubenswrapper[4821]: E0309 18:40:35.850021 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9gflv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-67d996989d-dhq9j_openstack-operators(d878ceb7-5af9-4a91-82cb-ed03b73f1b1d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 09 18:40:35 crc kubenswrapper[4821]: E0309 18:40:35.851294 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-67d996989d-dhq9j" podUID="d878ceb7-5af9-4a91-82cb-ed03b73f1b1d" Mar 09 18:40:36 crc kubenswrapper[4821]: I0309 18:40:36.189549 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/212b84ba-bcda-4820-8388-7d2ef286b7a1-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq\" (UID: \"212b84ba-bcda-4820-8388-7d2ef286b7a1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq" Mar 09 18:40:36 crc kubenswrapper[4821]: E0309 18:40:36.189794 4821 secret.go:188] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 09 18:40:36 crc kubenswrapper[4821]: E0309 18:40:36.189955 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/212b84ba-bcda-4820-8388-7d2ef286b7a1-cert podName:212b84ba-bcda-4820-8388-7d2ef286b7a1 nodeName:}" failed. No retries permitted until 2026-03-09 18:40:52.18990467 +0000 UTC m=+989.351280616 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/212b84ba-bcda-4820-8388-7d2ef286b7a1-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq" (UID: "212b84ba-bcda-4820-8388-7d2ef286b7a1") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 09 18:40:36 crc kubenswrapper[4821]: E0309 18:40:36.330410 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26\\\"\"" pod="openstack-operators/manila-operator-controller-manager-67d996989d-dhq9j" podUID="d878ceb7-5af9-4a91-82cb-ed03b73f1b1d" Mar 09 18:40:36 crc kubenswrapper[4821]: I0309 18:40:36.496044 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-metrics-certs\") pod \"openstack-operator-controller-manager-64797568c9-7qbhc\" (UID: \"9162d85f-f6f9-4a12-8511-d11676a6398a\") " pod="openstack-operators/openstack-operator-controller-manager-64797568c9-7qbhc" Mar 09 18:40:36 crc kubenswrapper[4821]: I0309 18:40:36.496161 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-webhook-certs\") pod 
\"openstack-operator-controller-manager-64797568c9-7qbhc\" (UID: \"9162d85f-f6f9-4a12-8511-d11676a6398a\") " pod="openstack-operators/openstack-operator-controller-manager-64797568c9-7qbhc" Mar 09 18:40:36 crc kubenswrapper[4821]: E0309 18:40:36.496374 4821 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 09 18:40:36 crc kubenswrapper[4821]: E0309 18:40:36.496394 4821 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 09 18:40:36 crc kubenswrapper[4821]: E0309 18:40:36.496467 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-webhook-certs podName:9162d85f-f6f9-4a12-8511-d11676a6398a nodeName:}" failed. No retries permitted until 2026-03-09 18:40:52.496442487 +0000 UTC m=+989.657818383 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-webhook-certs") pod "openstack-operator-controller-manager-64797568c9-7qbhc" (UID: "9162d85f-f6f9-4a12-8511-d11676a6398a") : secret "webhook-server-cert" not found Mar 09 18:40:36 crc kubenswrapper[4821]: E0309 18:40:36.496496 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-metrics-certs podName:9162d85f-f6f9-4a12-8511-d11676a6398a nodeName:}" failed. No retries permitted until 2026-03-09 18:40:52.496483708 +0000 UTC m=+989.657859604 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-metrics-certs") pod "openstack-operator-controller-manager-64797568c9-7qbhc" (UID: "9162d85f-f6f9-4a12-8511-d11676a6398a") : secret "metrics-server-cert" not found Mar 09 18:40:37 crc kubenswrapper[4821]: E0309 18:40:37.526030 4821 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:172f24bd4603ac3498536a8a2c8fffb07cf9113dd52bc132778ea0aa275c6b84" Mar 09 18:40:37 crc kubenswrapper[4821]: E0309 18:40:37.526484 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:172f24bd4603ac3498536a8a2c8fffb07cf9113dd52bc132778ea0aa275c6b84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xs4gm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-74b6b5dc96-qz976_openstack-operators(d6f3f569-2d6b-4c06-a814-de946397de51): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 09 18:40:37 crc kubenswrapper[4821]: E0309 18:40:37.527647 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-qz976" podUID="d6f3f569-2d6b-4c06-a814-de946397de51" Mar 09 18:40:37 crc kubenswrapper[4821]: E0309 18:40:37.992775 4821 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/keystone-operator@sha256:9d723ab33964ee44704eed3223b64e828349d45dee04695434a6fcf4b6807d4c" Mar 09 18:40:37 crc kubenswrapper[4821]: E0309 18:40:37.992934 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:9d723ab33964ee44704eed3223b64e828349d45dee04695434a6fcf4b6807d4c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w58sq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7c789f89c6-pc4fs_openstack-operators(71c62d87-8310-4ebd-8449-df18a56dc391): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 09 18:40:37 crc kubenswrapper[4821]: E0309 18:40:37.994161 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-pc4fs" podUID="71c62d87-8310-4ebd-8449-df18a56dc391" Mar 09 18:40:38 crc kubenswrapper[4821]: E0309 18:40:38.344452 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:172f24bd4603ac3498536a8a2c8fffb07cf9113dd52bc132778ea0aa275c6b84\\\"\"" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-qz976" podUID="d6f3f569-2d6b-4c06-a814-de946397de51" Mar 09 18:40:38 crc kubenswrapper[4821]: E0309 18:40:38.346503 4821 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:9d723ab33964ee44704eed3223b64e828349d45dee04695434a6fcf4b6807d4c\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-pc4fs" podUID="71c62d87-8310-4ebd-8449-df18a56dc391" Mar 09 18:40:38 crc kubenswrapper[4821]: E0309 18:40:38.371058 4821 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Mar 09 18:40:38 crc kubenswrapper[4821]: E0309 18:40:38.371269 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dnfjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-xzdll_openstack-operators(172ecee8-2a7b-4e13-b095-ca2a442932d2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 09 18:40:38 crc kubenswrapper[4821]: E0309 18:40:38.372814 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xzdll" podUID="172ecee8-2a7b-4e13-b095-ca2a442932d2" Mar 09 18:40:38 crc kubenswrapper[4821]: I0309 18:40:38.418791 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tshfs" Mar 09 18:40:38 crc kubenswrapper[4821]: I0309 18:40:38.524313 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4893fed-97e2-4ce7-99c7-cef6709e7cb7-utilities\") pod \"b4893fed-97e2-4ce7-99c7-cef6709e7cb7\" (UID: \"b4893fed-97e2-4ce7-99c7-cef6709e7cb7\") " Mar 09 18:40:38 crc kubenswrapper[4821]: I0309 18:40:38.524423 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnpx8\" (UniqueName: \"kubernetes.io/projected/b4893fed-97e2-4ce7-99c7-cef6709e7cb7-kube-api-access-lnpx8\") pod \"b4893fed-97e2-4ce7-99c7-cef6709e7cb7\" (UID: \"b4893fed-97e2-4ce7-99c7-cef6709e7cb7\") " Mar 09 18:40:38 crc kubenswrapper[4821]: I0309 18:40:38.524456 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4893fed-97e2-4ce7-99c7-cef6709e7cb7-catalog-content\") pod \"b4893fed-97e2-4ce7-99c7-cef6709e7cb7\" (UID: \"b4893fed-97e2-4ce7-99c7-cef6709e7cb7\") " Mar 09 18:40:38 crc kubenswrapper[4821]: I0309 18:40:38.525155 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4893fed-97e2-4ce7-99c7-cef6709e7cb7-utilities" (OuterVolumeSpecName: "utilities") pod "b4893fed-97e2-4ce7-99c7-cef6709e7cb7" (UID: "b4893fed-97e2-4ce7-99c7-cef6709e7cb7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:40:38 crc kubenswrapper[4821]: I0309 18:40:38.531944 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4893fed-97e2-4ce7-99c7-cef6709e7cb7-utilities\") on node \"crc\" DevicePath \"\"" Mar 09 18:40:38 crc kubenswrapper[4821]: I0309 18:40:38.547781 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4893fed-97e2-4ce7-99c7-cef6709e7cb7-kube-api-access-lnpx8" (OuterVolumeSpecName: "kube-api-access-lnpx8") pod "b4893fed-97e2-4ce7-99c7-cef6709e7cb7" (UID: "b4893fed-97e2-4ce7-99c7-cef6709e7cb7"). InnerVolumeSpecName "kube-api-access-lnpx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:40:38 crc kubenswrapper[4821]: I0309 18:40:38.548052 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4893fed-97e2-4ce7-99c7-cef6709e7cb7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4893fed-97e2-4ce7-99c7-cef6709e7cb7" (UID: "b4893fed-97e2-4ce7-99c7-cef6709e7cb7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:40:38 crc kubenswrapper[4821]: I0309 18:40:38.633481 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnpx8\" (UniqueName: \"kubernetes.io/projected/b4893fed-97e2-4ce7-99c7-cef6709e7cb7-kube-api-access-lnpx8\") on node \"crc\" DevicePath \"\"" Mar 09 18:40:38 crc kubenswrapper[4821]: I0309 18:40:38.633510 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4893fed-97e2-4ce7-99c7-cef6709e7cb7-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 09 18:40:39 crc kubenswrapper[4821]: I0309 18:40:39.352952 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tshfs" event={"ID":"b4893fed-97e2-4ce7-99c7-cef6709e7cb7","Type":"ContainerDied","Data":"042908ab760c9dfe451ec014c9a2b471ff1e55263b715ee52890f951395476f7"} Mar 09 18:40:39 crc kubenswrapper[4821]: I0309 18:40:39.352975 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tshfs" Mar 09 18:40:39 crc kubenswrapper[4821]: E0309 18:40:39.355268 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xzdll" podUID="172ecee8-2a7b-4e13-b095-ca2a442932d2" Mar 09 18:40:39 crc kubenswrapper[4821]: I0309 18:40:39.392760 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tshfs"] Mar 09 18:40:39 crc kubenswrapper[4821]: I0309 18:40:39.400040 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tshfs"] Mar 09 18:40:39 crc kubenswrapper[4821]: I0309 18:40:39.563365 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4893fed-97e2-4ce7-99c7-cef6709e7cb7" path="/var/lib/kubelet/pods/b4893fed-97e2-4ce7-99c7-cef6709e7cb7/volumes" Mar 09 18:40:39 crc kubenswrapper[4821]: I0309 18:40:39.895133 4821 scope.go:117] "RemoveContainer" containerID="5e22495735fb8d88aa7bf4b596456070d88d3c79053b520e8e3e8221f21235c7" Mar 09 18:40:40 crc kubenswrapper[4821]: I0309 18:40:40.585228 4821 scope.go:117] "RemoveContainer" containerID="4292b4a4299720203a2a8ca518986358218b6f4229c4990759d3bb99b044543c" Mar 09 18:40:40 crc kubenswrapper[4821]: I0309 18:40:40.678304 4821 scope.go:117] "RemoveContainer" containerID="585d2cfaaf9819e9d0af982d346bd44c7b533f51b7174af198a0966a4b6759f8" Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.358724 4821 scope.go:117] "RemoveContainer" containerID="52a88d7631b887c0b250aa189cc34d8b10ac13a902d0f37eab607b8efd014210" Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.376755 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-9jwph" event={"ID":"b10f4933-a23d-4c0b-9834-40caa60b158c","Type":"ContainerStarted","Data":"6ed15cec4da67915a72378decbc7786614eb4e57a6846581a9c71c484dc51070"} Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.376860 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-9jwph" Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.388141 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-668c5c65dc-657jr" event={"ID":"fddf64c4-c050-4195-9f07-bbd872ec8d48","Type":"ContainerStarted","Data":"b122d014c4241d8c35ad69961f137cf1594e28435e37a845177d149c7b747022"} Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.388675 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-668c5c65dc-657jr" Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.424460 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-zffr5" event={"ID":"28a07a44-f359-40b3-a2d4-850cb3822cb4","Type":"ContainerStarted","Data":"0bba6e05a57a845e7bf731c7b284c15bf4e428b2ac59af5e709767784f3933bb"} Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.425122 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-zffr5" Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.439171 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-9jwph" podStartSLOduration=5.874589621 podStartE2EDuration="22.439139037s" podCreationTimestamp="2026-03-09 18:40:19 +0000 UTC" firstStartedPulling="2026-03-09 18:40:21.781614895 +0000 UTC m=+958.942990761" 
lastFinishedPulling="2026-03-09 18:40:38.346164321 +0000 UTC m=+975.507540177" observedRunningTime="2026-03-09 18:40:41.408273985 +0000 UTC m=+978.569649861" watchObservedRunningTime="2026-03-09 18:40:41.439139037 +0000 UTC m=+978.600514893" Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.458911 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-w9h2t" event={"ID":"6bc651e4-1359-43b1-bc53-1a561195cf4a","Type":"ContainerStarted","Data":"9894edf46c26d6e481afed572c47904f9f79eb13a3f1856f7f19684121c95716"} Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.459624 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-w9h2t" Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.471641 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k9s84" event={"ID":"528fcc81-e85c-4764-9413-3957ba8c6fd2","Type":"ContainerStarted","Data":"cbe263b5610f57a403eb8d699d9bde2609ff399abdb2b478dd6e9241f9ef9826"} Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.472331 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k9s84" Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.475986 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-zffr5" podStartSLOduration=5.277062112 podStartE2EDuration="21.475973652s" podCreationTimestamp="2026-03-09 18:40:20 +0000 UTC" firstStartedPulling="2026-03-09 18:40:22.152266128 +0000 UTC m=+959.313641984" lastFinishedPulling="2026-03-09 18:40:38.351177668 +0000 UTC m=+975.512553524" observedRunningTime="2026-03-09 18:40:41.472431386 +0000 UTC m=+978.633807242" watchObservedRunningTime="2026-03-09 18:40:41.475973652 +0000 UTC 
m=+978.637349508" Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.477457 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-668c5c65dc-657jr" podStartSLOduration=2.9870072800000003 podStartE2EDuration="21.477440513s" podCreationTimestamp="2026-03-09 18:40:20 +0000 UTC" firstStartedPulling="2026-03-09 18:40:22.168739927 +0000 UTC m=+959.330115783" lastFinishedPulling="2026-03-09 18:40:40.65917315 +0000 UTC m=+977.820549016" observedRunningTime="2026-03-09 18:40:41.440011152 +0000 UTC m=+978.601387008" watchObservedRunningTime="2026-03-09 18:40:41.477440513 +0000 UTC m=+978.638816369" Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.491463 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-hjztj" event={"ID":"7507717c-322f-43de-88ba-fc79b6a5a3f0","Type":"ContainerStarted","Data":"58d6468c21cbf6eb5602bff2ff1a4fe5cd01197c710413decbed900bf5a9c479"} Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.492209 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-hjztj" Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.497907 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-wbjvw" event={"ID":"d498150e-134b-4359-92c6-300b8fbe3b1a","Type":"ContainerStarted","Data":"50bdfcff0fc0caf2d84acced12bb933d05412cd9aac69be02bbe706e90a789da"} Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.498605 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-wbjvw" Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.520651 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" 
event={"ID":"3270571a-a484-4e66-8035-f43509b58add","Type":"ContainerStarted","Data":"7d710c3d6413f5c12f3ff46fd212f945ef078be160c195d5feeac05d83b7fb9e"} Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.538461 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-wzvf8" event={"ID":"772511ff-89ac-4190-8142-3bf3e4ef8423","Type":"ContainerStarted","Data":"51729a8ed287bf5fd3faa0457582797cd537cd4cf8ba8a177d30e334ad6b8544"} Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.539093 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-wzvf8" Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.554764 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-vhljc" event={"ID":"100889e4-2f00-4685-a5a7-6f9b73bb343f","Type":"ContainerStarted","Data":"618b8f7f15979424de19e833f8d221a8db0e05f41013534836a6fac878f2e439"} Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.555448 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-vhljc" Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.574208 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-k4q8b" event={"ID":"0b492a45-c917-4c00-abef-13abf40e71d1","Type":"ContainerStarted","Data":"f4a9fe5b84d86f5570a371068f5c716b897dbf3d2c3225d7409d5a2de214209c"} Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.574944 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-k4q8b" Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.577553 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k9s84" podStartSLOduration=5.368355395 podStartE2EDuration="21.577535475s" podCreationTimestamp="2026-03-09 18:40:20 +0000 UTC" firstStartedPulling="2026-03-09 18:40:22.135639005 +0000 UTC m=+959.297014861" lastFinishedPulling="2026-03-09 18:40:38.344819085 +0000 UTC m=+975.506194941" observedRunningTime="2026-03-09 18:40:41.52458617 +0000 UTC m=+978.685962026" watchObservedRunningTime="2026-03-09 18:40:41.577535475 +0000 UTC m=+978.738911331" Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.583960 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-w9h2t" podStartSLOduration=3.203741863 podStartE2EDuration="21.58394354s" podCreationTimestamp="2026-03-09 18:40:20 +0000 UTC" firstStartedPulling="2026-03-09 18:40:22.225067502 +0000 UTC m=+959.386443358" lastFinishedPulling="2026-03-09 18:40:40.605269179 +0000 UTC m=+977.766645035" observedRunningTime="2026-03-09 18:40:41.579828207 +0000 UTC m=+978.741204063" watchObservedRunningTime="2026-03-09 18:40:41.58394354 +0000 UTC m=+978.745319396" Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.598206 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-9vg4l" event={"ID":"9eb96ad1-a011-482f-bbdd-edfd673217b5","Type":"ContainerStarted","Data":"fa1f7b98ae8197505384df9d0041f493e27c5b8f2dfb6691e94a5eb301b4811a"} Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.599002 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-9vg4l" Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.599586 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-z5d4k" 
event={"ID":"2e967d7a-a1cf-44b9-ae66-62c4c5c81b55","Type":"ContainerStarted","Data":"efe78a57fff64e5756c5491ef8ef896d7110b58bb223419f4ad7fc1de2d6d488"} Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.600103 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-z5d4k" Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.600948 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-5rbnb" event={"ID":"bb4823b7-c205-41c0-ba4d-d909ad9ff9cb","Type":"ContainerStarted","Data":"430792cdf2362a7de90e90308e1c53f1863d737d6fa1c2228306aee57385f844"} Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.601266 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-5rbnb" Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.602588 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-6864w" event={"ID":"d1eba3e1-a741-4ca6-a97e-c42565f64d2b","Type":"ContainerStarted","Data":"ef9b6144ce300997f24e7f2e1f5ff028512ac665edc23456c21d137eccece417"} Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.602910 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-6864w" Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.616197 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54688575f-974k8" event={"ID":"89a79a12-ce90-47f7-b0c4-c0976d7a4b1f","Type":"ContainerStarted","Data":"20001726ff1b1c97724e772651002c82d36ae525e6afd0ab5807857433d9bd0e"} Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.616863 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/neutron-operator-controller-manager-54688575f-974k8" Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.669881 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-wzvf8" podStartSLOduration=6.130042153 podStartE2EDuration="22.669864175s" podCreationTimestamp="2026-03-09 18:40:19 +0000 UTC" firstStartedPulling="2026-03-09 18:40:21.806564595 +0000 UTC m=+958.967940451" lastFinishedPulling="2026-03-09 18:40:38.346386617 +0000 UTC m=+975.507762473" observedRunningTime="2026-03-09 18:40:41.644576445 +0000 UTC m=+978.805952301" watchObservedRunningTime="2026-03-09 18:40:41.669864175 +0000 UTC m=+978.831240021" Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.700929 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-vhljc" podStartSLOduration=5.69300663 podStartE2EDuration="22.700913422s" podCreationTimestamp="2026-03-09 18:40:19 +0000 UTC" firstStartedPulling="2026-03-09 18:40:21.33716797 +0000 UTC m=+958.498543826" lastFinishedPulling="2026-03-09 18:40:38.345074762 +0000 UTC m=+975.506450618" observedRunningTime="2026-03-09 18:40:41.698172468 +0000 UTC m=+978.859548324" watchObservedRunningTime="2026-03-09 18:40:41.700913422 +0000 UTC m=+978.862289278" Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.766470 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-wbjvw" podStartSLOduration=3.34281324 podStartE2EDuration="21.766447661s" podCreationTimestamp="2026-03-09 18:40:20 +0000 UTC" firstStartedPulling="2026-03-09 18:40:22.207977907 +0000 UTC m=+959.369353763" lastFinishedPulling="2026-03-09 18:40:40.631612328 +0000 UTC m=+977.792988184" observedRunningTime="2026-03-09 18:40:41.733011898 +0000 UTC m=+978.894387754" watchObservedRunningTime="2026-03-09 
18:40:41.766447661 +0000 UTC m=+978.927823517" Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.799531 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-k4q8b" podStartSLOduration=5.7693438740000005 podStartE2EDuration="22.799516683s" podCreationTimestamp="2026-03-09 18:40:19 +0000 UTC" firstStartedPulling="2026-03-09 18:40:21.317616707 +0000 UTC m=+958.478992553" lastFinishedPulling="2026-03-09 18:40:38.347789506 +0000 UTC m=+975.509165362" observedRunningTime="2026-03-09 18:40:41.766714438 +0000 UTC m=+978.928090294" watchObservedRunningTime="2026-03-09 18:40:41.799516683 +0000 UTC m=+978.960892539" Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.800358 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-hjztj" podStartSLOduration=5.324404785 podStartE2EDuration="22.800354726s" podCreationTimestamp="2026-03-09 18:40:19 +0000 UTC" firstStartedPulling="2026-03-09 18:40:20.870384285 +0000 UTC m=+958.031760141" lastFinishedPulling="2026-03-09 18:40:38.346334226 +0000 UTC m=+975.507710082" observedRunningTime="2026-03-09 18:40:41.79607252 +0000 UTC m=+978.957448376" watchObservedRunningTime="2026-03-09 18:40:41.800354726 +0000 UTC m=+978.961730582" Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.823216 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-54688575f-974k8" podStartSLOduration=6.61296642 podStartE2EDuration="22.823197659s" podCreationTimestamp="2026-03-09 18:40:19 +0000 UTC" firstStartedPulling="2026-03-09 18:40:22.13548192 +0000 UTC m=+959.296857766" lastFinishedPulling="2026-03-09 18:40:38.345713149 +0000 UTC m=+975.507089005" observedRunningTime="2026-03-09 18:40:41.822261624 +0000 UTC m=+978.983637480" watchObservedRunningTime="2026-03-09 18:40:41.823197659 +0000 UTC 
m=+978.984573515" Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.895723 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-9vg4l" podStartSLOduration=5.325275292 podStartE2EDuration="22.895701778s" podCreationTimestamp="2026-03-09 18:40:19 +0000 UTC" firstStartedPulling="2026-03-09 18:40:20.77555798 +0000 UTC m=+957.936933836" lastFinishedPulling="2026-03-09 18:40:38.345984466 +0000 UTC m=+975.507360322" observedRunningTime="2026-03-09 18:40:41.876289748 +0000 UTC m=+979.037665604" watchObservedRunningTime="2026-03-09 18:40:41.895701778 +0000 UTC m=+979.057077634" Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.898207 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-5rbnb" podStartSLOduration=5.840176358 podStartE2EDuration="22.898201867s" podCreationTimestamp="2026-03-09 18:40:19 +0000 UTC" firstStartedPulling="2026-03-09 18:40:21.287564107 +0000 UTC m=+958.448939963" lastFinishedPulling="2026-03-09 18:40:38.345589606 +0000 UTC m=+975.506965472" observedRunningTime="2026-03-09 18:40:41.894978829 +0000 UTC m=+979.056354685" watchObservedRunningTime="2026-03-09 18:40:41.898201867 +0000 UTC m=+979.059577723" Mar 09 18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.918811 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-6864w" podStartSLOduration=6.709604808 podStartE2EDuration="22.918797488s" podCreationTimestamp="2026-03-09 18:40:19 +0000 UTC" firstStartedPulling="2026-03-09 18:40:22.135769909 +0000 UTC m=+959.297145755" lastFinishedPulling="2026-03-09 18:40:38.344962579 +0000 UTC m=+975.506338435" observedRunningTime="2026-03-09 18:40:41.914945364 +0000 UTC m=+979.076321220" watchObservedRunningTime="2026-03-09 18:40:41.918797488 +0000 UTC m=+979.080173344" Mar 09 
18:40:41 crc kubenswrapper[4821]: I0309 18:40:41.938648 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-z5d4k" podStartSLOduration=3.543142737 podStartE2EDuration="21.93862821s" podCreationTimestamp="2026-03-09 18:40:20 +0000 UTC" firstStartedPulling="2026-03-09 18:40:22.190461839 +0000 UTC m=+959.351837695" lastFinishedPulling="2026-03-09 18:40:40.585947312 +0000 UTC m=+977.747323168" observedRunningTime="2026-03-09 18:40:41.933225243 +0000 UTC m=+979.094601099" watchObservedRunningTime="2026-03-09 18:40:41.93862821 +0000 UTC m=+979.100004066" Mar 09 18:40:47 crc kubenswrapper[4821]: I0309 18:40:47.665016 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-dhq9j" event={"ID":"d878ceb7-5af9-4a91-82cb-ed03b73f1b1d","Type":"ContainerStarted","Data":"b4e62e18000e0f0eb4b7a9c5b9674ac3f48d5675a7b1344b91b4b8af4592ffd5"} Mar 09 18:40:47 crc kubenswrapper[4821]: I0309 18:40:47.665865 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-67d996989d-dhq9j" Mar 09 18:40:47 crc kubenswrapper[4821]: I0309 18:40:47.667137 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-cjvgb" event={"ID":"0a1af309-4a43-4d58-8912-abc1ed1e626a","Type":"ContainerStarted","Data":"025d571b8e211fe4906350863d87ac9bfa33b179ff5e69b78ad21c6cce813d5a"} Mar 09 18:40:47 crc kubenswrapper[4821]: I0309 18:40:47.667372 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-cjvgb" Mar 09 18:40:47 crc kubenswrapper[4821]: I0309 18:40:47.698460 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-67d996989d-dhq9j" 
podStartSLOduration=3.480523161 podStartE2EDuration="28.698435213s" podCreationTimestamp="2026-03-09 18:40:19 +0000 UTC" firstStartedPulling="2026-03-09 18:40:21.788679478 +0000 UTC m=+958.950055334" lastFinishedPulling="2026-03-09 18:40:47.00659153 +0000 UTC m=+984.167967386" observedRunningTime="2026-03-09 18:40:47.68955796 +0000 UTC m=+984.850933836" watchObservedRunningTime="2026-03-09 18:40:47.698435213 +0000 UTC m=+984.859811109" Mar 09 18:40:47 crc kubenswrapper[4821]: I0309 18:40:47.707646 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-cjvgb" podStartSLOduration=2.927983769 podStartE2EDuration="28.707619103s" podCreationTimestamp="2026-03-09 18:40:19 +0000 UTC" firstStartedPulling="2026-03-09 18:40:21.228031235 +0000 UTC m=+958.389407091" lastFinishedPulling="2026-03-09 18:40:47.007666569 +0000 UTC m=+984.169042425" observedRunningTime="2026-03-09 18:40:47.706626296 +0000 UTC m=+984.868002172" watchObservedRunningTime="2026-03-09 18:40:47.707619103 +0000 UTC m=+984.868994979" Mar 09 18:40:50 crc kubenswrapper[4821]: I0309 18:40:50.057308 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-9vg4l" Mar 09 18:40:50 crc kubenswrapper[4821]: I0309 18:40:50.099092 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-5rbnb" Mar 09 18:40:50 crc kubenswrapper[4821]: I0309 18:40:50.123789 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-hjztj" Mar 09 18:40:50 crc kubenswrapper[4821]: I0309 18:40:50.201032 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-k4q8b" Mar 09 18:40:50 crc kubenswrapper[4821]: 
I0309 18:40:50.266302 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-vhljc" Mar 09 18:40:50 crc kubenswrapper[4821]: I0309 18:40:50.377575 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-wzvf8" Mar 09 18:40:50 crc kubenswrapper[4821]: I0309 18:40:50.522648 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-9jwph" Mar 09 18:40:50 crc kubenswrapper[4821]: I0309 18:40:50.543056 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-54688575f-974k8" Mar 09 18:40:50 crc kubenswrapper[4821]: I0309 18:40:50.613434 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-6864w" Mar 09 18:40:50 crc kubenswrapper[4821]: I0309 18:40:50.672045 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-w9h2t" Mar 09 18:40:50 crc kubenswrapper[4821]: I0309 18:40:50.703791 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-wbjvw" Mar 09 18:40:50 crc kubenswrapper[4821]: I0309 18:40:50.733244 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k9s84" Mar 09 18:40:50 crc kubenswrapper[4821]: I0309 18:40:50.745459 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-zffr5" Mar 09 18:40:50 crc kubenswrapper[4821]: I0309 18:40:50.845100 4821 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-z5d4k" Mar 09 18:40:50 crc kubenswrapper[4821]: I0309 18:40:50.948857 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-668c5c65dc-657jr" Mar 09 18:40:51 crc kubenswrapper[4821]: I0309 18:40:51.860630 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9d3c230-c74c-4cc4-af9f-f23fd5d9557c-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-rldv2\" (UID: \"c9d3c230-c74c-4cc4-af9f-f23fd5d9557c\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-rldv2" Mar 09 18:40:51 crc kubenswrapper[4821]: I0309 18:40:51.870980 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c9d3c230-c74c-4cc4-af9f-f23fd5d9557c-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-rldv2\" (UID: \"c9d3c230-c74c-4cc4-af9f-f23fd5d9557c\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-rldv2" Mar 09 18:40:52 crc kubenswrapper[4821]: I0309 18:40:52.101453 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-6tt58" Mar 09 18:40:52 crc kubenswrapper[4821]: I0309 18:40:52.108565 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-rldv2" Mar 09 18:40:52 crc kubenswrapper[4821]: I0309 18:40:52.266753 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/212b84ba-bcda-4820-8388-7d2ef286b7a1-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq\" (UID: \"212b84ba-bcda-4820-8388-7d2ef286b7a1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq" Mar 09 18:40:52 crc kubenswrapper[4821]: I0309 18:40:52.278059 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/212b84ba-bcda-4820-8388-7d2ef286b7a1-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq\" (UID: \"212b84ba-bcda-4820-8388-7d2ef286b7a1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq" Mar 09 18:40:52 crc kubenswrapper[4821]: I0309 18:40:52.448729 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-2l7xh" Mar 09 18:40:52 crc kubenswrapper[4821]: I0309 18:40:52.456893 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq" Mar 09 18:40:52 crc kubenswrapper[4821]: I0309 18:40:52.587280 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-metrics-certs\") pod \"openstack-operator-controller-manager-64797568c9-7qbhc\" (UID: \"9162d85f-f6f9-4a12-8511-d11676a6398a\") " pod="openstack-operators/openstack-operator-controller-manager-64797568c9-7qbhc" Mar 09 18:40:52 crc kubenswrapper[4821]: I0309 18:40:52.587372 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-webhook-certs\") pod \"openstack-operator-controller-manager-64797568c9-7qbhc\" (UID: \"9162d85f-f6f9-4a12-8511-d11676a6398a\") " pod="openstack-operators/openstack-operator-controller-manager-64797568c9-7qbhc" Mar 09 18:40:52 crc kubenswrapper[4821]: I0309 18:40:52.593602 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-metrics-certs\") pod \"openstack-operator-controller-manager-64797568c9-7qbhc\" (UID: \"9162d85f-f6f9-4a12-8511-d11676a6398a\") " pod="openstack-operators/openstack-operator-controller-manager-64797568c9-7qbhc" Mar 09 18:40:52 crc kubenswrapper[4821]: I0309 18:40:52.594207 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9162d85f-f6f9-4a12-8511-d11676a6398a-webhook-certs\") pod \"openstack-operator-controller-manager-64797568c9-7qbhc\" (UID: \"9162d85f-f6f9-4a12-8511-d11676a6398a\") " pod="openstack-operators/openstack-operator-controller-manager-64797568c9-7qbhc" Mar 09 18:40:52 crc kubenswrapper[4821]: I0309 18:40:52.634150 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/infra-operator-controller-manager-f7fcc58b9-rldv2"] Mar 09 18:40:52 crc kubenswrapper[4821]: W0309 18:40:52.642847 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9d3c230_c74c_4cc4_af9f_f23fd5d9557c.slice/crio-651df6418ee9671fc748ea6b886f2e5f1b0b99e62c4b97e093e4625410469899 WatchSource:0}: Error finding container 651df6418ee9671fc748ea6b886f2e5f1b0b99e62c4b97e093e4625410469899: Status 404 returned error can't find the container with id 651df6418ee9671fc748ea6b886f2e5f1b0b99e62c4b97e093e4625410469899 Mar 09 18:40:52 crc kubenswrapper[4821]: I0309 18:40:52.724930 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-rldv2" event={"ID":"c9d3c230-c74c-4cc4-af9f-f23fd5d9557c","Type":"ContainerStarted","Data":"651df6418ee9671fc748ea6b886f2e5f1b0b99e62c4b97e093e4625410469899"} Mar 09 18:40:52 crc kubenswrapper[4821]: I0309 18:40:52.771981 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-kcbtt" Mar 09 18:40:52 crc kubenswrapper[4821]: I0309 18:40:52.780529 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-64797568c9-7qbhc" Mar 09 18:40:52 crc kubenswrapper[4821]: I0309 18:40:52.983249 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq"] Mar 09 18:40:52 crc kubenswrapper[4821]: W0309 18:40:52.991611 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod212b84ba_bcda_4820_8388_7d2ef286b7a1.slice/crio-028752cd6f16780281f9a752dcd86944959438201357608cf422d683aeb2a1f5 WatchSource:0}: Error finding container 028752cd6f16780281f9a752dcd86944959438201357608cf422d683aeb2a1f5: Status 404 returned error can't find the container with id 028752cd6f16780281f9a752dcd86944959438201357608cf422d683aeb2a1f5 Mar 09 18:40:53 crc kubenswrapper[4821]: W0309 18:40:53.270576 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9162d85f_f6f9_4a12_8511_d11676a6398a.slice/crio-347e21b9a3580bfdd9fdaa05673f395dfb412feafcdb45dc2a86bd491083e55f WatchSource:0}: Error finding container 347e21b9a3580bfdd9fdaa05673f395dfb412feafcdb45dc2a86bd491083e55f: Status 404 returned error can't find the container with id 347e21b9a3580bfdd9fdaa05673f395dfb412feafcdb45dc2a86bd491083e55f Mar 09 18:40:53 crc kubenswrapper[4821]: I0309 18:40:53.273332 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-64797568c9-7qbhc"] Mar 09 18:40:53 crc kubenswrapper[4821]: I0309 18:40:53.743582 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-64797568c9-7qbhc" event={"ID":"9162d85f-f6f9-4a12-8511-d11676a6398a","Type":"ContainerStarted","Data":"347e21b9a3580bfdd9fdaa05673f395dfb412feafcdb45dc2a86bd491083e55f"} Mar 09 18:40:53 crc kubenswrapper[4821]: I0309 
18:40:53.745464 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq" event={"ID":"212b84ba-bcda-4820-8388-7d2ef286b7a1","Type":"ContainerStarted","Data":"028752cd6f16780281f9a752dcd86944959438201357608cf422d683aeb2a1f5"} Mar 09 18:40:59 crc kubenswrapper[4821]: I0309 18:40:59.806081 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-64797568c9-7qbhc" event={"ID":"9162d85f-f6f9-4a12-8511-d11676a6398a","Type":"ContainerStarted","Data":"a0a1bf377b2b16b43523ed84e99831f9676afa1600c1a98ed96907cb05432e53"} Mar 09 18:40:59 crc kubenswrapper[4821]: I0309 18:40:59.807861 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-64797568c9-7qbhc" Mar 09 18:40:59 crc kubenswrapper[4821]: I0309 18:40:59.836453 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-64797568c9-7qbhc" podStartSLOduration=39.836431693 podStartE2EDuration="39.836431693s" podCreationTimestamp="2026-03-09 18:40:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:40:59.831857618 +0000 UTC m=+996.993233474" watchObservedRunningTime="2026-03-09 18:40:59.836431693 +0000 UTC m=+996.997807549" Mar 09 18:41:00 crc kubenswrapper[4821]: I0309 18:41:00.086285 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-cjvgb" Mar 09 18:41:00 crc kubenswrapper[4821]: I0309 18:41:00.513313 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-67d996989d-dhq9j" Mar 09 18:41:02 crc kubenswrapper[4821]: I0309 18:41:02.832130 4821 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xzdll" event={"ID":"172ecee8-2a7b-4e13-b095-ca2a442932d2","Type":"ContainerStarted","Data":"6458aedf6b9d27ae36ba39aaed8b3606612067250cf29d816682ac65d42dd602"} Mar 09 18:41:02 crc kubenswrapper[4821]: I0309 18:41:02.835498 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-pc4fs" event={"ID":"71c62d87-8310-4ebd-8449-df18a56dc391","Type":"ContainerStarted","Data":"046482d8a58c54001a018dc2de9444e57aae8c6cdcfd969fd2065b61b9502b1d"} Mar 09 18:41:02 crc kubenswrapper[4821]: I0309 18:41:02.836283 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-pc4fs" Mar 09 18:41:02 crc kubenswrapper[4821]: I0309 18:41:02.837591 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq" event={"ID":"212b84ba-bcda-4820-8388-7d2ef286b7a1","Type":"ContainerStarted","Data":"89fecfa29abc199194bb492d18a286467ad4fd7506fc99830970bef6218d5600"} Mar 09 18:41:02 crc kubenswrapper[4821]: I0309 18:41:02.838014 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq" Mar 09 18:41:02 crc kubenswrapper[4821]: I0309 18:41:02.839305 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-qz976" event={"ID":"d6f3f569-2d6b-4c06-a814-de946397de51","Type":"ContainerStarted","Data":"b74c6b9d42d7826434a4bb18ea283a32ec60897c28555a8950356246658bd680"} Mar 09 18:41:02 crc kubenswrapper[4821]: I0309 18:41:02.839605 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-qz976" Mar 09 18:41:02 crc 
kubenswrapper[4821]: I0309 18:41:02.840868 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-rldv2" event={"ID":"c9d3c230-c74c-4cc4-af9f-f23fd5d9557c","Type":"ContainerStarted","Data":"474ad1d312cc287f7282251ff7ed933859ac978d1f895687fa89a1ddc9f72f45"} Mar 09 18:41:02 crc kubenswrapper[4821]: I0309 18:41:02.841428 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-rldv2" Mar 09 18:41:02 crc kubenswrapper[4821]: I0309 18:41:02.858067 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xzdll" podStartSLOduration=3.2817970450000002 podStartE2EDuration="42.858040091s" podCreationTimestamp="2026-03-09 18:40:20 +0000 UTC" firstStartedPulling="2026-03-09 18:40:22.157132891 +0000 UTC m=+959.318508747" lastFinishedPulling="2026-03-09 18:41:01.733375937 +0000 UTC m=+998.894751793" observedRunningTime="2026-03-09 18:41:02.849746755 +0000 UTC m=+1000.011122611" watchObservedRunningTime="2026-03-09 18:41:02.858040091 +0000 UTC m=+1000.019415987" Mar 09 18:41:02 crc kubenswrapper[4821]: I0309 18:41:02.896092 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-pc4fs" podStartSLOduration=3.907169327 podStartE2EDuration="43.89607568s" podCreationTimestamp="2026-03-09 18:40:19 +0000 UTC" firstStartedPulling="2026-03-09 18:40:21.738369556 +0000 UTC m=+958.899745412" lastFinishedPulling="2026-03-09 18:41:01.727275909 +0000 UTC m=+998.888651765" observedRunningTime="2026-03-09 18:41:02.892881003 +0000 UTC m=+1000.054256889" watchObservedRunningTime="2026-03-09 18:41:02.89607568 +0000 UTC m=+1000.057451536" Mar 09 18:41:02 crc kubenswrapper[4821]: I0309 18:41:02.928735 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq" podStartSLOduration=34.176958909 podStartE2EDuration="42.92871394s" podCreationTimestamp="2026-03-09 18:40:20 +0000 UTC" firstStartedPulling="2026-03-09 18:40:52.993654834 +0000 UTC m=+990.155030690" lastFinishedPulling="2026-03-09 18:41:01.745409865 +0000 UTC m=+998.906785721" observedRunningTime="2026-03-09 18:41:02.926762598 +0000 UTC m=+1000.088138464" watchObservedRunningTime="2026-03-09 18:41:02.92871394 +0000 UTC m=+1000.090089796" Mar 09 18:41:02 crc kubenswrapper[4821]: I0309 18:41:02.944442 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-rldv2" podStartSLOduration=34.865007965 podStartE2EDuration="43.944424609s" podCreationTimestamp="2026-03-09 18:40:19 +0000 UTC" firstStartedPulling="2026-03-09 18:40:52.648582006 +0000 UTC m=+989.809957872" lastFinishedPulling="2026-03-09 18:41:01.72799866 +0000 UTC m=+998.889374516" observedRunningTime="2026-03-09 18:41:02.942725883 +0000 UTC m=+1000.104101759" watchObservedRunningTime="2026-03-09 18:41:02.944424609 +0000 UTC m=+1000.105800495" Mar 09 18:41:02 crc kubenswrapper[4821]: I0309 18:41:02.969196 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-qz976" podStartSLOduration=4.386534317 podStartE2EDuration="43.969165485s" podCreationTimestamp="2026-03-09 18:40:19 +0000 UTC" firstStartedPulling="2026-03-09 18:40:22.13580496 +0000 UTC m=+959.297180816" lastFinishedPulling="2026-03-09 18:41:01.718436128 +0000 UTC m=+998.879811984" observedRunningTime="2026-03-09 18:41:02.96424129 +0000 UTC m=+1000.125617156" watchObservedRunningTime="2026-03-09 18:41:02.969165485 +0000 UTC m=+1000.130541361" Mar 09 18:41:10 crc kubenswrapper[4821]: I0309 18:41:10.408271 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-pc4fs" Mar 09 18:41:10 crc kubenswrapper[4821]: I0309 18:41:10.574630 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-qz976" Mar 09 18:41:12 crc kubenswrapper[4821]: I0309 18:41:12.119675 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-rldv2" Mar 09 18:41:12 crc kubenswrapper[4821]: I0309 18:41:12.463941 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq" Mar 09 18:41:12 crc kubenswrapper[4821]: I0309 18:41:12.788670 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-64797568c9-7qbhc" Mar 09 18:41:17 crc kubenswrapper[4821]: I0309 18:41:17.578094 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-668c5c65dc-657jr"] Mar 09 18:41:17 crc kubenswrapper[4821]: I0309 18:41:17.578792 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-controller-manager-668c5c65dc-657jr" podUID="fddf64c4-c050-4195-9f07-bbd872ec8d48" containerName="manager" containerID="cri-o://b122d014c4241d8c35ad69961f137cf1594e28435e37a845177d149c7b747022" gracePeriod=10 Mar 09 18:41:17 crc kubenswrapper[4821]: I0309 18:41:17.633794 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-787cf98cf6-2h56n"] Mar 09 18:41:17 crc kubenswrapper[4821]: I0309 18:41:17.634072 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-controller-init-787cf98cf6-2h56n" podUID="788c9cbd-c8f4-4384-945d-991234c151fd" 
containerName="operator" containerID="cri-o://fd276bd5f449eb5bd7e7c1a28d97ee5f2d43b7bf25291d630b3675c9b832b9eb" gracePeriod=10 Mar 09 18:41:17 crc kubenswrapper[4821]: I0309 18:41:17.962814 4821 generic.go:334] "Generic (PLEG): container finished" podID="fddf64c4-c050-4195-9f07-bbd872ec8d48" containerID="b122d014c4241d8c35ad69961f137cf1594e28435e37a845177d149c7b747022" exitCode=0 Mar 09 18:41:17 crc kubenswrapper[4821]: I0309 18:41:17.962910 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-668c5c65dc-657jr" event={"ID":"fddf64c4-c050-4195-9f07-bbd872ec8d48","Type":"ContainerDied","Data":"b122d014c4241d8c35ad69961f137cf1594e28435e37a845177d149c7b747022"} Mar 09 18:41:17 crc kubenswrapper[4821]: I0309 18:41:17.963211 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-668c5c65dc-657jr" event={"ID":"fddf64c4-c050-4195-9f07-bbd872ec8d48","Type":"ContainerDied","Data":"91c63186504e7db9cf7858570a47f6813167fef3a181d5e313061a83d614c674"} Mar 09 18:41:17 crc kubenswrapper[4821]: I0309 18:41:17.963226 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91c63186504e7db9cf7858570a47f6813167fef3a181d5e313061a83d614c674" Mar 09 18:41:17 crc kubenswrapper[4821]: I0309 18:41:17.964568 4821 generic.go:334] "Generic (PLEG): container finished" podID="788c9cbd-c8f4-4384-945d-991234c151fd" containerID="fd276bd5f449eb5bd7e7c1a28d97ee5f2d43b7bf25291d630b3675c9b832b9eb" exitCode=0 Mar 09 18:41:17 crc kubenswrapper[4821]: I0309 18:41:17.964599 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-787cf98cf6-2h56n" event={"ID":"788c9cbd-c8f4-4384-945d-991234c151fd","Type":"ContainerDied","Data":"fd276bd5f449eb5bd7e7c1a28d97ee5f2d43b7bf25291d630b3675c9b832b9eb"} Mar 09 18:41:17 crc kubenswrapper[4821]: I0309 18:41:17.990028 4821 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-668c5c65dc-657jr" Mar 09 18:41:18 crc kubenswrapper[4821]: I0309 18:41:18.051749 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2tvh\" (UniqueName: \"kubernetes.io/projected/fddf64c4-c050-4195-9f07-bbd872ec8d48-kube-api-access-r2tvh\") pod \"fddf64c4-c050-4195-9f07-bbd872ec8d48\" (UID: \"fddf64c4-c050-4195-9f07-bbd872ec8d48\") " Mar 09 18:41:18 crc kubenswrapper[4821]: I0309 18:41:18.058440 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fddf64c4-c050-4195-9f07-bbd872ec8d48-kube-api-access-r2tvh" (OuterVolumeSpecName: "kube-api-access-r2tvh") pod "fddf64c4-c050-4195-9f07-bbd872ec8d48" (UID: "fddf64c4-c050-4195-9f07-bbd872ec8d48"). InnerVolumeSpecName "kube-api-access-r2tvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:41:18 crc kubenswrapper[4821]: I0309 18:41:18.058951 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-787cf98cf6-2h56n" Mar 09 18:41:18 crc kubenswrapper[4821]: I0309 18:41:18.152748 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtdbn\" (UniqueName: \"kubernetes.io/projected/788c9cbd-c8f4-4384-945d-991234c151fd-kube-api-access-dtdbn\") pod \"788c9cbd-c8f4-4384-945d-991234c151fd\" (UID: \"788c9cbd-c8f4-4384-945d-991234c151fd\") " Mar 09 18:41:18 crc kubenswrapper[4821]: I0309 18:41:18.153162 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2tvh\" (UniqueName: \"kubernetes.io/projected/fddf64c4-c050-4195-9f07-bbd872ec8d48-kube-api-access-r2tvh\") on node \"crc\" DevicePath \"\"" Mar 09 18:41:18 crc kubenswrapper[4821]: I0309 18:41:18.156682 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/788c9cbd-c8f4-4384-945d-991234c151fd-kube-api-access-dtdbn" (OuterVolumeSpecName: "kube-api-access-dtdbn") pod "788c9cbd-c8f4-4384-945d-991234c151fd" (UID: "788c9cbd-c8f4-4384-945d-991234c151fd"). InnerVolumeSpecName "kube-api-access-dtdbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:41:18 crc kubenswrapper[4821]: I0309 18:41:18.254693 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtdbn\" (UniqueName: \"kubernetes.io/projected/788c9cbd-c8f4-4384-945d-991234c151fd-kube-api-access-dtdbn\") on node \"crc\" DevicePath \"\"" Mar 09 18:41:18 crc kubenswrapper[4821]: I0309 18:41:18.976897 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-668c5c65dc-657jr" Mar 09 18:41:18 crc kubenswrapper[4821]: I0309 18:41:18.976904 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-787cf98cf6-2h56n" event={"ID":"788c9cbd-c8f4-4384-945d-991234c151fd","Type":"ContainerDied","Data":"ddde7de4558166c8c8ff53c8ef007706cfd433b39e5cd8fdcc0e9c6d60b57f9f"} Mar 09 18:41:18 crc kubenswrapper[4821]: I0309 18:41:18.977432 4821 scope.go:117] "RemoveContainer" containerID="fd276bd5f449eb5bd7e7c1a28d97ee5f2d43b7bf25291d630b3675c9b832b9eb" Mar 09 18:41:18 crc kubenswrapper[4821]: I0309 18:41:18.976968 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-787cf98cf6-2h56n" Mar 09 18:41:19 crc kubenswrapper[4821]: I0309 18:41:19.052429 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-668c5c65dc-657jr"] Mar 09 18:41:19 crc kubenswrapper[4821]: I0309 18:41:19.063488 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-668c5c65dc-657jr"] Mar 09 18:41:19 crc kubenswrapper[4821]: I0309 18:41:19.068744 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-787cf98cf6-2h56n"] Mar 09 18:41:19 crc kubenswrapper[4821]: I0309 18:41:19.075383 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-controller-init-787cf98cf6-2h56n"] Mar 09 18:41:19 crc kubenswrapper[4821]: I0309 18:41:19.571736 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="788c9cbd-c8f4-4384-945d-991234c151fd" path="/var/lib/kubelet/pods/788c9cbd-c8f4-4384-945d-991234c151fd/volumes" Mar 09 18:41:19 crc kubenswrapper[4821]: I0309 18:41:19.572623 4821 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="fddf64c4-c050-4195-9f07-bbd872ec8d48" path="/var/lib/kubelet/pods/fddf64c4-c050-4195-9f07-bbd872ec8d48/volumes" Mar 09 18:41:22 crc kubenswrapper[4821]: I0309 18:41:22.244387 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-index-dm5gn"] Mar 09 18:41:22 crc kubenswrapper[4821]: E0309 18:41:22.244989 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f95ff3a-6cad-4a3f-9b22-f7e265ec269c" containerName="extract-utilities" Mar 09 18:41:22 crc kubenswrapper[4821]: I0309 18:41:22.245002 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f95ff3a-6cad-4a3f-9b22-f7e265ec269c" containerName="extract-utilities" Mar 09 18:41:22 crc kubenswrapper[4821]: E0309 18:41:22.245021 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="788c9cbd-c8f4-4384-945d-991234c151fd" containerName="operator" Mar 09 18:41:22 crc kubenswrapper[4821]: I0309 18:41:22.245028 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="788c9cbd-c8f4-4384-945d-991234c151fd" containerName="operator" Mar 09 18:41:22 crc kubenswrapper[4821]: E0309 18:41:22.245037 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f95ff3a-6cad-4a3f-9b22-f7e265ec269c" containerName="extract-content" Mar 09 18:41:22 crc kubenswrapper[4821]: I0309 18:41:22.245045 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f95ff3a-6cad-4a3f-9b22-f7e265ec269c" containerName="extract-content" Mar 09 18:41:22 crc kubenswrapper[4821]: E0309 18:41:22.245053 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fddf64c4-c050-4195-9f07-bbd872ec8d48" containerName="manager" Mar 09 18:41:22 crc kubenswrapper[4821]: I0309 18:41:22.245059 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="fddf64c4-c050-4195-9f07-bbd872ec8d48" containerName="manager" Mar 09 18:41:22 crc kubenswrapper[4821]: E0309 18:41:22.245071 4821 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="b4893fed-97e2-4ce7-99c7-cef6709e7cb7" containerName="registry-server" Mar 09 18:41:22 crc kubenswrapper[4821]: I0309 18:41:22.245076 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4893fed-97e2-4ce7-99c7-cef6709e7cb7" containerName="registry-server" Mar 09 18:41:22 crc kubenswrapper[4821]: E0309 18:41:22.245084 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f95ff3a-6cad-4a3f-9b22-f7e265ec269c" containerName="registry-server" Mar 09 18:41:22 crc kubenswrapper[4821]: I0309 18:41:22.245089 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f95ff3a-6cad-4a3f-9b22-f7e265ec269c" containerName="registry-server" Mar 09 18:41:22 crc kubenswrapper[4821]: E0309 18:41:22.245105 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4893fed-97e2-4ce7-99c7-cef6709e7cb7" containerName="extract-utilities" Mar 09 18:41:22 crc kubenswrapper[4821]: I0309 18:41:22.245111 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4893fed-97e2-4ce7-99c7-cef6709e7cb7" containerName="extract-utilities" Mar 09 18:41:22 crc kubenswrapper[4821]: E0309 18:41:22.245120 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4893fed-97e2-4ce7-99c7-cef6709e7cb7" containerName="extract-content" Mar 09 18:41:22 crc kubenswrapper[4821]: I0309 18:41:22.245126 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4893fed-97e2-4ce7-99c7-cef6709e7cb7" containerName="extract-content" Mar 09 18:41:22 crc kubenswrapper[4821]: I0309 18:41:22.245254 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="788c9cbd-c8f4-4384-945d-991234c151fd" containerName="operator" Mar 09 18:41:22 crc kubenswrapper[4821]: I0309 18:41:22.245265 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4893fed-97e2-4ce7-99c7-cef6709e7cb7" containerName="registry-server" Mar 09 18:41:22 crc kubenswrapper[4821]: I0309 18:41:22.245275 4821 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2f95ff3a-6cad-4a3f-9b22-f7e265ec269c" containerName="registry-server" Mar 09 18:41:22 crc kubenswrapper[4821]: I0309 18:41:22.245286 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="fddf64c4-c050-4195-9f07-bbd872ec8d48" containerName="manager" Mar 09 18:41:22 crc kubenswrapper[4821]: I0309 18:41:22.245833 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-dm5gn" Mar 09 18:41:22 crc kubenswrapper[4821]: I0309 18:41:22.248939 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-index-dockercfg-qsgt2" Mar 09 18:41:22 crc kubenswrapper[4821]: I0309 18:41:22.250089 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-dm5gn"] Mar 09 18:41:22 crc kubenswrapper[4821]: I0309 18:41:22.320109 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j55f\" (UniqueName: \"kubernetes.io/projected/715ffa8a-8587-432c-8958-927bcf2c6130-kube-api-access-5j55f\") pod \"watcher-operator-index-dm5gn\" (UID: \"715ffa8a-8587-432c-8958-927bcf2c6130\") " pod="openstack-operators/watcher-operator-index-dm5gn" Mar 09 18:41:22 crc kubenswrapper[4821]: I0309 18:41:22.424111 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j55f\" (UniqueName: \"kubernetes.io/projected/715ffa8a-8587-432c-8958-927bcf2c6130-kube-api-access-5j55f\") pod \"watcher-operator-index-dm5gn\" (UID: \"715ffa8a-8587-432c-8958-927bcf2c6130\") " pod="openstack-operators/watcher-operator-index-dm5gn" Mar 09 18:41:22 crc kubenswrapper[4821]: I0309 18:41:22.447475 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j55f\" (UniqueName: \"kubernetes.io/projected/715ffa8a-8587-432c-8958-927bcf2c6130-kube-api-access-5j55f\") pod \"watcher-operator-index-dm5gn\" (UID: 
\"715ffa8a-8587-432c-8958-927bcf2c6130\") " pod="openstack-operators/watcher-operator-index-dm5gn" Mar 09 18:41:22 crc kubenswrapper[4821]: I0309 18:41:22.572908 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-dm5gn" Mar 09 18:41:23 crc kubenswrapper[4821]: I0309 18:41:23.107708 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-dm5gn"] Mar 09 18:41:23 crc kubenswrapper[4821]: W0309 18:41:23.123515 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod715ffa8a_8587_432c_8958_927bcf2c6130.slice/crio-0e120f9f48c8594782aa7c6226c56366f1e23f3be0d9e8167801f8424a0037a1 WatchSource:0}: Error finding container 0e120f9f48c8594782aa7c6226c56366f1e23f3be0d9e8167801f8424a0037a1: Status 404 returned error can't find the container with id 0e120f9f48c8594782aa7c6226c56366f1e23f3be0d9e8167801f8424a0037a1 Mar 09 18:41:24 crc kubenswrapper[4821]: I0309 18:41:24.014766 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-dm5gn" event={"ID":"715ffa8a-8587-432c-8958-927bcf2c6130","Type":"ContainerStarted","Data":"be07546fa758d6b4cff4cada2587e44b83935853f9d51443576269f77f85145a"} Mar 09 18:41:24 crc kubenswrapper[4821]: I0309 18:41:24.015121 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-dm5gn" event={"ID":"715ffa8a-8587-432c-8958-927bcf2c6130","Type":"ContainerStarted","Data":"0e120f9f48c8594782aa7c6226c56366f1e23f3be0d9e8167801f8424a0037a1"} Mar 09 18:41:24 crc kubenswrapper[4821]: I0309 18:41:24.032193 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-index-dm5gn" podStartSLOduration=1.7975193040000002 podStartE2EDuration="2.032175037s" podCreationTimestamp="2026-03-09 18:41:22 +0000 UTC" 
firstStartedPulling="2026-03-09 18:41:23.124983877 +0000 UTC m=+1020.286359733" lastFinishedPulling="2026-03-09 18:41:23.35963961 +0000 UTC m=+1020.521015466" observedRunningTime="2026-03-09 18:41:24.029225715 +0000 UTC m=+1021.190601611" watchObservedRunningTime="2026-03-09 18:41:24.032175037 +0000 UTC m=+1021.193550893" Mar 09 18:41:25 crc kubenswrapper[4821]: I0309 18:41:25.833522 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-index-dm5gn"] Mar 09 18:41:26 crc kubenswrapper[4821]: I0309 18:41:26.054968 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-index-dm5gn" podUID="715ffa8a-8587-432c-8958-927bcf2c6130" containerName="registry-server" containerID="cri-o://be07546fa758d6b4cff4cada2587e44b83935853f9d51443576269f77f85145a" gracePeriod=2 Mar 09 18:41:26 crc kubenswrapper[4821]: I0309 18:41:26.446343 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-index-snmkh"] Mar 09 18:41:26 crc kubenswrapper[4821]: I0309 18:41:26.448685 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-snmkh" Mar 09 18:41:26 crc kubenswrapper[4821]: I0309 18:41:26.455039 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-snmkh"] Mar 09 18:41:26 crc kubenswrapper[4821]: I0309 18:41:26.472860 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-index-dm5gn" Mar 09 18:41:26 crc kubenswrapper[4821]: I0309 18:41:26.499140 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm9lk\" (UniqueName: \"kubernetes.io/projected/81e65025-6a00-4e95-83fb-ccf57455d09e-kube-api-access-jm9lk\") pod \"watcher-operator-index-snmkh\" (UID: \"81e65025-6a00-4e95-83fb-ccf57455d09e\") " pod="openstack-operators/watcher-operator-index-snmkh" Mar 09 18:41:26 crc kubenswrapper[4821]: I0309 18:41:26.600083 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j55f\" (UniqueName: \"kubernetes.io/projected/715ffa8a-8587-432c-8958-927bcf2c6130-kube-api-access-5j55f\") pod \"715ffa8a-8587-432c-8958-927bcf2c6130\" (UID: \"715ffa8a-8587-432c-8958-927bcf2c6130\") " Mar 09 18:41:26 crc kubenswrapper[4821]: I0309 18:41:26.601895 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm9lk\" (UniqueName: \"kubernetes.io/projected/81e65025-6a00-4e95-83fb-ccf57455d09e-kube-api-access-jm9lk\") pod \"watcher-operator-index-snmkh\" (UID: \"81e65025-6a00-4e95-83fb-ccf57455d09e\") " pod="openstack-operators/watcher-operator-index-snmkh" Mar 09 18:41:26 crc kubenswrapper[4821]: I0309 18:41:26.620546 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/715ffa8a-8587-432c-8958-927bcf2c6130-kube-api-access-5j55f" (OuterVolumeSpecName: "kube-api-access-5j55f") pod "715ffa8a-8587-432c-8958-927bcf2c6130" (UID: "715ffa8a-8587-432c-8958-927bcf2c6130"). InnerVolumeSpecName "kube-api-access-5j55f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:41:26 crc kubenswrapper[4821]: I0309 18:41:26.627292 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm9lk\" (UniqueName: \"kubernetes.io/projected/81e65025-6a00-4e95-83fb-ccf57455d09e-kube-api-access-jm9lk\") pod \"watcher-operator-index-snmkh\" (UID: \"81e65025-6a00-4e95-83fb-ccf57455d09e\") " pod="openstack-operators/watcher-operator-index-snmkh" Mar 09 18:41:26 crc kubenswrapper[4821]: I0309 18:41:26.703527 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j55f\" (UniqueName: \"kubernetes.io/projected/715ffa8a-8587-432c-8958-927bcf2c6130-kube-api-access-5j55f\") on node \"crc\" DevicePath \"\"" Mar 09 18:41:26 crc kubenswrapper[4821]: I0309 18:41:26.787820 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-snmkh" Mar 09 18:41:27 crc kubenswrapper[4821]: I0309 18:41:27.066166 4821 generic.go:334] "Generic (PLEG): container finished" podID="715ffa8a-8587-432c-8958-927bcf2c6130" containerID="be07546fa758d6b4cff4cada2587e44b83935853f9d51443576269f77f85145a" exitCode=0 Mar 09 18:41:27 crc kubenswrapper[4821]: I0309 18:41:27.066213 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-dm5gn" event={"ID":"715ffa8a-8587-432c-8958-927bcf2c6130","Type":"ContainerDied","Data":"be07546fa758d6b4cff4cada2587e44b83935853f9d51443576269f77f85145a"} Mar 09 18:41:27 crc kubenswrapper[4821]: I0309 18:41:27.066243 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-dm5gn" event={"ID":"715ffa8a-8587-432c-8958-927bcf2c6130","Type":"ContainerDied","Data":"0e120f9f48c8594782aa7c6226c56366f1e23f3be0d9e8167801f8424a0037a1"} Mar 09 18:41:27 crc kubenswrapper[4821]: I0309 18:41:27.066260 4821 scope.go:117] "RemoveContainer" 
containerID="be07546fa758d6b4cff4cada2587e44b83935853f9d51443576269f77f85145a" Mar 09 18:41:27 crc kubenswrapper[4821]: I0309 18:41:27.066257 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-dm5gn" Mar 09 18:41:27 crc kubenswrapper[4821]: I0309 18:41:27.075937 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-snmkh"] Mar 09 18:41:27 crc kubenswrapper[4821]: W0309 18:41:27.082670 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81e65025_6a00_4e95_83fb_ccf57455d09e.slice/crio-4fb2e80b825fafd71a332f39d6b51bc67985c9c36cd8c45a4969beb54fe2a34b WatchSource:0}: Error finding container 4fb2e80b825fafd71a332f39d6b51bc67985c9c36cd8c45a4969beb54fe2a34b: Status 404 returned error can't find the container with id 4fb2e80b825fafd71a332f39d6b51bc67985c9c36cd8c45a4969beb54fe2a34b Mar 09 18:41:27 crc kubenswrapper[4821]: I0309 18:41:27.092436 4821 scope.go:117] "RemoveContainer" containerID="be07546fa758d6b4cff4cada2587e44b83935853f9d51443576269f77f85145a" Mar 09 18:41:27 crc kubenswrapper[4821]: E0309 18:41:27.096079 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be07546fa758d6b4cff4cada2587e44b83935853f9d51443576269f77f85145a\": container with ID starting with be07546fa758d6b4cff4cada2587e44b83935853f9d51443576269f77f85145a not found: ID does not exist" containerID="be07546fa758d6b4cff4cada2587e44b83935853f9d51443576269f77f85145a" Mar 09 18:41:27 crc kubenswrapper[4821]: I0309 18:41:27.096133 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be07546fa758d6b4cff4cada2587e44b83935853f9d51443576269f77f85145a"} err="failed to get container status \"be07546fa758d6b4cff4cada2587e44b83935853f9d51443576269f77f85145a\": rpc error: code = NotFound 
desc = could not find container \"be07546fa758d6b4cff4cada2587e44b83935853f9d51443576269f77f85145a\": container with ID starting with be07546fa758d6b4cff4cada2587e44b83935853f9d51443576269f77f85145a not found: ID does not exist" Mar 09 18:41:27 crc kubenswrapper[4821]: I0309 18:41:27.101410 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-index-dm5gn"] Mar 09 18:41:27 crc kubenswrapper[4821]: I0309 18:41:27.106351 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/watcher-operator-index-dm5gn"] Mar 09 18:41:27 crc kubenswrapper[4821]: I0309 18:41:27.571498 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="715ffa8a-8587-432c-8958-927bcf2c6130" path="/var/lib/kubelet/pods/715ffa8a-8587-432c-8958-927bcf2c6130/volumes" Mar 09 18:41:28 crc kubenswrapper[4821]: I0309 18:41:28.076062 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-snmkh" event={"ID":"81e65025-6a00-4e95-83fb-ccf57455d09e","Type":"ContainerStarted","Data":"7fbd5a4738ebc443e41d6a18a1b6938e728b0027c3070d0750b43bc702410d51"} Mar 09 18:41:28 crc kubenswrapper[4821]: I0309 18:41:28.076119 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-snmkh" event={"ID":"81e65025-6a00-4e95-83fb-ccf57455d09e","Type":"ContainerStarted","Data":"4fb2e80b825fafd71a332f39d6b51bc67985c9c36cd8c45a4969beb54fe2a34b"} Mar 09 18:41:28 crc kubenswrapper[4821]: I0309 18:41:28.098262 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-index-snmkh" podStartSLOduration=2.048616496 podStartE2EDuration="2.098238301s" podCreationTimestamp="2026-03-09 18:41:26 +0000 UTC" firstStartedPulling="2026-03-09 18:41:27.096029167 +0000 UTC m=+1024.257405033" lastFinishedPulling="2026-03-09 18:41:27.145650982 +0000 UTC m=+1024.307026838" observedRunningTime="2026-03-09 18:41:28.089888153 
+0000 UTC m=+1025.251264009" watchObservedRunningTime="2026-03-09 18:41:28.098238301 +0000 UTC m=+1025.259614177" Mar 09 18:41:36 crc kubenswrapper[4821]: I0309 18:41:36.787973 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-index-snmkh" Mar 09 18:41:36 crc kubenswrapper[4821]: I0309 18:41:36.788402 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/watcher-operator-index-snmkh" Mar 09 18:41:36 crc kubenswrapper[4821]: I0309 18:41:36.837964 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/watcher-operator-index-snmkh" Mar 09 18:41:37 crc kubenswrapper[4821]: I0309 18:41:37.189650 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-index-snmkh" Mar 09 18:41:40 crc kubenswrapper[4821]: I0309 18:41:40.279235 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx"] Mar 09 18:41:40 crc kubenswrapper[4821]: E0309 18:41:40.279819 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="715ffa8a-8587-432c-8958-927bcf2c6130" containerName="registry-server" Mar 09 18:41:40 crc kubenswrapper[4821]: I0309 18:41:40.279834 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="715ffa8a-8587-432c-8958-927bcf2c6130" containerName="registry-server" Mar 09 18:41:40 crc kubenswrapper[4821]: I0309 18:41:40.280018 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="715ffa8a-8587-432c-8958-927bcf2c6130" containerName="registry-server" Mar 09 18:41:40 crc kubenswrapper[4821]: I0309 18:41:40.280939 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx" Mar 09 18:41:40 crc kubenswrapper[4821]: I0309 18:41:40.282610 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-thvms" Mar 09 18:41:40 crc kubenswrapper[4821]: I0309 18:41:40.290226 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx"] Mar 09 18:41:40 crc kubenswrapper[4821]: I0309 18:41:40.426639 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/175451d8-941f-4b65-a51c-60ec0d7427d1-util\") pod \"76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx\" (UID: \"175451d8-941f-4b65-a51c-60ec0d7427d1\") " pod="openstack-operators/76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx" Mar 09 18:41:40 crc kubenswrapper[4821]: I0309 18:41:40.426940 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv8ft\" (UniqueName: \"kubernetes.io/projected/175451d8-941f-4b65-a51c-60ec0d7427d1-kube-api-access-kv8ft\") pod \"76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx\" (UID: \"175451d8-941f-4b65-a51c-60ec0d7427d1\") " pod="openstack-operators/76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx" Mar 09 18:41:40 crc kubenswrapper[4821]: I0309 18:41:40.426969 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/175451d8-941f-4b65-a51c-60ec0d7427d1-bundle\") pod \"76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx\" (UID: \"175451d8-941f-4b65-a51c-60ec0d7427d1\") " pod="openstack-operators/76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx" Mar 09 18:41:40 crc kubenswrapper[4821]: I0309 
18:41:40.528002 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/175451d8-941f-4b65-a51c-60ec0d7427d1-util\") pod \"76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx\" (UID: \"175451d8-941f-4b65-a51c-60ec0d7427d1\") " pod="openstack-operators/76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx" Mar 09 18:41:40 crc kubenswrapper[4821]: I0309 18:41:40.528076 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv8ft\" (UniqueName: \"kubernetes.io/projected/175451d8-941f-4b65-a51c-60ec0d7427d1-kube-api-access-kv8ft\") pod \"76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx\" (UID: \"175451d8-941f-4b65-a51c-60ec0d7427d1\") " pod="openstack-operators/76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx" Mar 09 18:41:40 crc kubenswrapper[4821]: I0309 18:41:40.528103 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/175451d8-941f-4b65-a51c-60ec0d7427d1-bundle\") pod \"76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx\" (UID: \"175451d8-941f-4b65-a51c-60ec0d7427d1\") " pod="openstack-operators/76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx" Mar 09 18:41:40 crc kubenswrapper[4821]: I0309 18:41:40.528556 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/175451d8-941f-4b65-a51c-60ec0d7427d1-util\") pod \"76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx\" (UID: \"175451d8-941f-4b65-a51c-60ec0d7427d1\") " pod="openstack-operators/76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx" Mar 09 18:41:40 crc kubenswrapper[4821]: I0309 18:41:40.528568 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/175451d8-941f-4b65-a51c-60ec0d7427d1-bundle\") pod \"76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx\" (UID: \"175451d8-941f-4b65-a51c-60ec0d7427d1\") " pod="openstack-operators/76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx" Mar 09 18:41:40 crc kubenswrapper[4821]: I0309 18:41:40.552434 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv8ft\" (UniqueName: \"kubernetes.io/projected/175451d8-941f-4b65-a51c-60ec0d7427d1-kube-api-access-kv8ft\") pod \"76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx\" (UID: \"175451d8-941f-4b65-a51c-60ec0d7427d1\") " pod="openstack-operators/76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx" Mar 09 18:41:40 crc kubenswrapper[4821]: I0309 18:41:40.646104 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx" Mar 09 18:41:41 crc kubenswrapper[4821]: I0309 18:41:41.103755 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx"] Mar 09 18:41:41 crc kubenswrapper[4821]: I0309 18:41:41.185813 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx" event={"ID":"175451d8-941f-4b65-a51c-60ec0d7427d1","Type":"ContainerStarted","Data":"a00ba89bcb45582615e420a5b698d12f489bf8f433a47ccb637e18879dc9daa9"} Mar 09 18:41:42 crc kubenswrapper[4821]: I0309 18:41:42.195546 4821 generic.go:334] "Generic (PLEG): container finished" podID="175451d8-941f-4b65-a51c-60ec0d7427d1" containerID="680aa447f7136ba0ddcac789b8d8d831c3e21bd98389cb9db188784320769ba9" exitCode=0 Mar 09 18:41:42 crc kubenswrapper[4821]: I0309 18:41:42.195617 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx" event={"ID":"175451d8-941f-4b65-a51c-60ec0d7427d1","Type":"ContainerDied","Data":"680aa447f7136ba0ddcac789b8d8d831c3e21bd98389cb9db188784320769ba9"} Mar 09 18:41:43 crc kubenswrapper[4821]: I0309 18:41:43.207284 4821 generic.go:334] "Generic (PLEG): container finished" podID="175451d8-941f-4b65-a51c-60ec0d7427d1" containerID="6b2e6f503de40a9790a5f4f3647e2661403bb7485a21a7a14080dcdb01697b92" exitCode=0 Mar 09 18:41:43 crc kubenswrapper[4821]: I0309 18:41:43.207391 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx" event={"ID":"175451d8-941f-4b65-a51c-60ec0d7427d1","Type":"ContainerDied","Data":"6b2e6f503de40a9790a5f4f3647e2661403bb7485a21a7a14080dcdb01697b92"} Mar 09 18:41:44 crc kubenswrapper[4821]: I0309 18:41:44.216182 4821 generic.go:334] "Generic (PLEG): container finished" podID="175451d8-941f-4b65-a51c-60ec0d7427d1" containerID="1a0a0548b90687cfb3444355724b157ec346f67a68921d633b9431e266fb50fe" exitCode=0 Mar 09 18:41:44 crc kubenswrapper[4821]: I0309 18:41:44.216230 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx" event={"ID":"175451d8-941f-4b65-a51c-60ec0d7427d1","Type":"ContainerDied","Data":"1a0a0548b90687cfb3444355724b157ec346f67a68921d633b9431e266fb50fe"} Mar 09 18:41:45 crc kubenswrapper[4821]: I0309 18:41:45.522485 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx" Mar 09 18:41:45 crc kubenswrapper[4821]: I0309 18:41:45.630232 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/175451d8-941f-4b65-a51c-60ec0d7427d1-bundle\") pod \"175451d8-941f-4b65-a51c-60ec0d7427d1\" (UID: \"175451d8-941f-4b65-a51c-60ec0d7427d1\") " Mar 09 18:41:45 crc kubenswrapper[4821]: I0309 18:41:45.630297 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/175451d8-941f-4b65-a51c-60ec0d7427d1-util\") pod \"175451d8-941f-4b65-a51c-60ec0d7427d1\" (UID: \"175451d8-941f-4b65-a51c-60ec0d7427d1\") " Mar 09 18:41:45 crc kubenswrapper[4821]: I0309 18:41:45.630422 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv8ft\" (UniqueName: \"kubernetes.io/projected/175451d8-941f-4b65-a51c-60ec0d7427d1-kube-api-access-kv8ft\") pod \"175451d8-941f-4b65-a51c-60ec0d7427d1\" (UID: \"175451d8-941f-4b65-a51c-60ec0d7427d1\") " Mar 09 18:41:45 crc kubenswrapper[4821]: I0309 18:41:45.633542 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/175451d8-941f-4b65-a51c-60ec0d7427d1-bundle" (OuterVolumeSpecName: "bundle") pod "175451d8-941f-4b65-a51c-60ec0d7427d1" (UID: "175451d8-941f-4b65-a51c-60ec0d7427d1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:41:45 crc kubenswrapper[4821]: I0309 18:41:45.643741 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/175451d8-941f-4b65-a51c-60ec0d7427d1-kube-api-access-kv8ft" (OuterVolumeSpecName: "kube-api-access-kv8ft") pod "175451d8-941f-4b65-a51c-60ec0d7427d1" (UID: "175451d8-941f-4b65-a51c-60ec0d7427d1"). InnerVolumeSpecName "kube-api-access-kv8ft". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:41:45 crc kubenswrapper[4821]: I0309 18:41:45.648484 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/175451d8-941f-4b65-a51c-60ec0d7427d1-util" (OuterVolumeSpecName: "util") pod "175451d8-941f-4b65-a51c-60ec0d7427d1" (UID: "175451d8-941f-4b65-a51c-60ec0d7427d1"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:41:45 crc kubenswrapper[4821]: I0309 18:41:45.737687 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv8ft\" (UniqueName: \"kubernetes.io/projected/175451d8-941f-4b65-a51c-60ec0d7427d1-kube-api-access-kv8ft\") on node \"crc\" DevicePath \"\"" Mar 09 18:41:45 crc kubenswrapper[4821]: I0309 18:41:45.737759 4821 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/175451d8-941f-4b65-a51c-60ec0d7427d1-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 18:41:45 crc kubenswrapper[4821]: I0309 18:41:45.737777 4821 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/175451d8-941f-4b65-a51c-60ec0d7427d1-util\") on node \"crc\" DevicePath \"\"" Mar 09 18:41:46 crc kubenswrapper[4821]: I0309 18:41:46.237716 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx" event={"ID":"175451d8-941f-4b65-a51c-60ec0d7427d1","Type":"ContainerDied","Data":"a00ba89bcb45582615e420a5b698d12f489bf8f433a47ccb637e18879dc9daa9"} Mar 09 18:41:46 crc kubenswrapper[4821]: I0309 18:41:46.237766 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a00ba89bcb45582615e420a5b698d12f489bf8f433a47ccb637e18879dc9daa9" Mar 09 18:41:46 crc kubenswrapper[4821]: I0309 18:41:46.237834 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx" Mar 09 18:41:51 crc kubenswrapper[4821]: I0309 18:41:51.176415 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75f555f9d6-zj8cz"] Mar 09 18:41:51 crc kubenswrapper[4821]: E0309 18:41:51.177115 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="175451d8-941f-4b65-a51c-60ec0d7427d1" containerName="extract" Mar 09 18:41:51 crc kubenswrapper[4821]: I0309 18:41:51.177127 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="175451d8-941f-4b65-a51c-60ec0d7427d1" containerName="extract" Mar 09 18:41:51 crc kubenswrapper[4821]: E0309 18:41:51.177141 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="175451d8-941f-4b65-a51c-60ec0d7427d1" containerName="pull" Mar 09 18:41:51 crc kubenswrapper[4821]: I0309 18:41:51.177147 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="175451d8-941f-4b65-a51c-60ec0d7427d1" containerName="pull" Mar 09 18:41:51 crc kubenswrapper[4821]: E0309 18:41:51.177167 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="175451d8-941f-4b65-a51c-60ec0d7427d1" containerName="util" Mar 09 18:41:51 crc kubenswrapper[4821]: I0309 18:41:51.177174 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="175451d8-941f-4b65-a51c-60ec0d7427d1" containerName="util" Mar 09 18:41:51 crc kubenswrapper[4821]: I0309 18:41:51.177305 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="175451d8-941f-4b65-a51c-60ec0d7427d1" containerName="extract" Mar 09 18:41:51 crc kubenswrapper[4821]: I0309 18:41:51.177771 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75f555f9d6-zj8cz" Mar 09 18:41:51 crc kubenswrapper[4821]: I0309 18:41:51.180930 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-service-cert" Mar 09 18:41:51 crc kubenswrapper[4821]: I0309 18:41:51.181213 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-p8ld9" Mar 09 18:41:51 crc kubenswrapper[4821]: I0309 18:41:51.239459 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75f555f9d6-zj8cz"] Mar 09 18:41:51 crc kubenswrapper[4821]: I0309 18:41:51.332399 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/518630eb-b2b7-4be7-a154-31e2e4525fe1-apiservice-cert\") pod \"watcher-operator-controller-manager-75f555f9d6-zj8cz\" (UID: \"518630eb-b2b7-4be7-a154-31e2e4525fe1\") " pod="openstack-operators/watcher-operator-controller-manager-75f555f9d6-zj8cz" Mar 09 18:41:51 crc kubenswrapper[4821]: I0309 18:41:51.332467 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5k49\" (UniqueName: \"kubernetes.io/projected/518630eb-b2b7-4be7-a154-31e2e4525fe1-kube-api-access-k5k49\") pod \"watcher-operator-controller-manager-75f555f9d6-zj8cz\" (UID: \"518630eb-b2b7-4be7-a154-31e2e4525fe1\") " pod="openstack-operators/watcher-operator-controller-manager-75f555f9d6-zj8cz" Mar 09 18:41:51 crc kubenswrapper[4821]: I0309 18:41:51.332558 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/518630eb-b2b7-4be7-a154-31e2e4525fe1-webhook-cert\") pod \"watcher-operator-controller-manager-75f555f9d6-zj8cz\" (UID: 
\"518630eb-b2b7-4be7-a154-31e2e4525fe1\") " pod="openstack-operators/watcher-operator-controller-manager-75f555f9d6-zj8cz" Mar 09 18:41:51 crc kubenswrapper[4821]: I0309 18:41:51.433435 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/518630eb-b2b7-4be7-a154-31e2e4525fe1-webhook-cert\") pod \"watcher-operator-controller-manager-75f555f9d6-zj8cz\" (UID: \"518630eb-b2b7-4be7-a154-31e2e4525fe1\") " pod="openstack-operators/watcher-operator-controller-manager-75f555f9d6-zj8cz" Mar 09 18:41:51 crc kubenswrapper[4821]: I0309 18:41:51.433793 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/518630eb-b2b7-4be7-a154-31e2e4525fe1-apiservice-cert\") pod \"watcher-operator-controller-manager-75f555f9d6-zj8cz\" (UID: \"518630eb-b2b7-4be7-a154-31e2e4525fe1\") " pod="openstack-operators/watcher-operator-controller-manager-75f555f9d6-zj8cz" Mar 09 18:41:51 crc kubenswrapper[4821]: I0309 18:41:51.433814 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5k49\" (UniqueName: \"kubernetes.io/projected/518630eb-b2b7-4be7-a154-31e2e4525fe1-kube-api-access-k5k49\") pod \"watcher-operator-controller-manager-75f555f9d6-zj8cz\" (UID: \"518630eb-b2b7-4be7-a154-31e2e4525fe1\") " pod="openstack-operators/watcher-operator-controller-manager-75f555f9d6-zj8cz" Mar 09 18:41:51 crc kubenswrapper[4821]: I0309 18:41:51.439020 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/518630eb-b2b7-4be7-a154-31e2e4525fe1-apiservice-cert\") pod \"watcher-operator-controller-manager-75f555f9d6-zj8cz\" (UID: \"518630eb-b2b7-4be7-a154-31e2e4525fe1\") " pod="openstack-operators/watcher-operator-controller-manager-75f555f9d6-zj8cz" Mar 09 18:41:51 crc kubenswrapper[4821]: I0309 18:41:51.439147 4821 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/518630eb-b2b7-4be7-a154-31e2e4525fe1-webhook-cert\") pod \"watcher-operator-controller-manager-75f555f9d6-zj8cz\" (UID: \"518630eb-b2b7-4be7-a154-31e2e4525fe1\") " pod="openstack-operators/watcher-operator-controller-manager-75f555f9d6-zj8cz" Mar 09 18:41:51 crc kubenswrapper[4821]: I0309 18:41:51.449439 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5k49\" (UniqueName: \"kubernetes.io/projected/518630eb-b2b7-4be7-a154-31e2e4525fe1-kube-api-access-k5k49\") pod \"watcher-operator-controller-manager-75f555f9d6-zj8cz\" (UID: \"518630eb-b2b7-4be7-a154-31e2e4525fe1\") " pod="openstack-operators/watcher-operator-controller-manager-75f555f9d6-zj8cz" Mar 09 18:41:51 crc kubenswrapper[4821]: I0309 18:41:51.499532 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75f555f9d6-zj8cz" Mar 09 18:41:52 crc kubenswrapper[4821]: I0309 18:41:51.996434 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75f555f9d6-zj8cz"] Mar 09 18:41:52 crc kubenswrapper[4821]: I0309 18:41:52.282040 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75f555f9d6-zj8cz" event={"ID":"518630eb-b2b7-4be7-a154-31e2e4525fe1","Type":"ContainerStarted","Data":"903bd6d0aef85c69d525cdad7114fa463b01a5cffc8fff0f8a4eabd9e03fdc0f"} Mar 09 18:41:52 crc kubenswrapper[4821]: I0309 18:41:52.282089 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75f555f9d6-zj8cz" event={"ID":"518630eb-b2b7-4be7-a154-31e2e4525fe1","Type":"ContainerStarted","Data":"db0ff3a094624ee1dff100f32723ac26140a459f7d748ae8127ef7cf264bde80"} Mar 09 18:41:52 crc kubenswrapper[4821]: I0309 18:41:52.282214 4821 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-75f555f9d6-zj8cz" Mar 09 18:41:52 crc kubenswrapper[4821]: I0309 18:41:52.304676 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-75f555f9d6-zj8cz" podStartSLOduration=1.304650855 podStartE2EDuration="1.304650855s" podCreationTimestamp="2026-03-09 18:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:41:52.296832072 +0000 UTC m=+1049.458207938" watchObservedRunningTime="2026-03-09 18:41:52.304650855 +0000 UTC m=+1049.466026721" Mar 09 18:42:00 crc kubenswrapper[4821]: I0309 18:42:00.150014 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29551362-4l6b5"] Mar 09 18:42:00 crc kubenswrapper[4821]: I0309 18:42:00.151274 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29551362-4l6b5" Mar 09 18:42:00 crc kubenswrapper[4821]: I0309 18:42:00.154715 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 09 18:42:00 crc kubenswrapper[4821]: I0309 18:42:00.154863 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-mxq7c" Mar 09 18:42:00 crc kubenswrapper[4821]: I0309 18:42:00.154886 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 09 18:42:00 crc kubenswrapper[4821]: I0309 18:42:00.159290 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551362-4l6b5"] Mar 09 18:42:00 crc kubenswrapper[4821]: I0309 18:42:00.255297 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bggp\" (UniqueName: \"kubernetes.io/projected/dbe98840-dd4e-4195-9627-71f679ccbeea-kube-api-access-5bggp\") pod \"auto-csr-approver-29551362-4l6b5\" (UID: \"dbe98840-dd4e-4195-9627-71f679ccbeea\") " pod="openshift-infra/auto-csr-approver-29551362-4l6b5" Mar 09 18:42:00 crc kubenswrapper[4821]: I0309 18:42:00.356457 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bggp\" (UniqueName: \"kubernetes.io/projected/dbe98840-dd4e-4195-9627-71f679ccbeea-kube-api-access-5bggp\") pod \"auto-csr-approver-29551362-4l6b5\" (UID: \"dbe98840-dd4e-4195-9627-71f679ccbeea\") " pod="openshift-infra/auto-csr-approver-29551362-4l6b5" Mar 09 18:42:00 crc kubenswrapper[4821]: I0309 18:42:00.380973 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bggp\" (UniqueName: \"kubernetes.io/projected/dbe98840-dd4e-4195-9627-71f679ccbeea-kube-api-access-5bggp\") pod \"auto-csr-approver-29551362-4l6b5\" (UID: \"dbe98840-dd4e-4195-9627-71f679ccbeea\") " 
pod="openshift-infra/auto-csr-approver-29551362-4l6b5" Mar 09 18:42:00 crc kubenswrapper[4821]: I0309 18:42:00.471062 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551362-4l6b5" Mar 09 18:42:00 crc kubenswrapper[4821]: I0309 18:42:00.891463 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551362-4l6b5"] Mar 09 18:42:01 crc kubenswrapper[4821]: I0309 18:42:01.358071 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551362-4l6b5" event={"ID":"dbe98840-dd4e-4195-9627-71f679ccbeea","Type":"ContainerStarted","Data":"68c16025d67d7a775ecd8a296051341629edb3ff09597775e8003992755d4991"} Mar 09 18:42:01 crc kubenswrapper[4821]: I0309 18:42:01.506878 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-75f555f9d6-zj8cz" Mar 09 18:42:02 crc kubenswrapper[4821]: I0309 18:42:02.367925 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551362-4l6b5" event={"ID":"dbe98840-dd4e-4195-9627-71f679ccbeea","Type":"ContainerStarted","Data":"292d194dec3c2b376499143dee89f028951fbfaeff19f0f3f57efcbf39d62f2b"} Mar 09 18:42:02 crc kubenswrapper[4821]: I0309 18:42:02.393754 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29551362-4l6b5" podStartSLOduration=1.264456375 podStartE2EDuration="2.393733826s" podCreationTimestamp="2026-03-09 18:42:00 +0000 UTC" firstStartedPulling="2026-03-09 18:42:00.897310064 +0000 UTC m=+1058.058685920" lastFinishedPulling="2026-03-09 18:42:02.026587505 +0000 UTC m=+1059.187963371" observedRunningTime="2026-03-09 18:42:02.382775667 +0000 UTC m=+1059.544151533" watchObservedRunningTime="2026-03-09 18:42:02.393733826 +0000 UTC m=+1059.555109682" Mar 09 18:42:02 crc kubenswrapper[4821]: I0309 18:42:02.954537 4821 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-85b655bd8f-llgvv"] Mar 09 18:42:02 crc kubenswrapper[4821]: I0309 18:42:02.955668 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-85b655bd8f-llgvv" Mar 09 18:42:02 crc kubenswrapper[4821]: I0309 18:42:02.990212 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-85b655bd8f-llgvv"] Mar 09 18:42:03 crc kubenswrapper[4821]: I0309 18:42:03.093998 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3e660422-3d8e-4716-b1df-6aa0d193e8f6-webhook-cert\") pod \"watcher-operator-controller-manager-85b655bd8f-llgvv\" (UID: \"3e660422-3d8e-4716-b1df-6aa0d193e8f6\") " pod="openstack-operators/watcher-operator-controller-manager-85b655bd8f-llgvv" Mar 09 18:42:03 crc kubenswrapper[4821]: I0309 18:42:03.094063 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3e660422-3d8e-4716-b1df-6aa0d193e8f6-apiservice-cert\") pod \"watcher-operator-controller-manager-85b655bd8f-llgvv\" (UID: \"3e660422-3d8e-4716-b1df-6aa0d193e8f6\") " pod="openstack-operators/watcher-operator-controller-manager-85b655bd8f-llgvv" Mar 09 18:42:03 crc kubenswrapper[4821]: I0309 18:42:03.094511 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5rv6\" (UniqueName: \"kubernetes.io/projected/3e660422-3d8e-4716-b1df-6aa0d193e8f6-kube-api-access-r5rv6\") pod \"watcher-operator-controller-manager-85b655bd8f-llgvv\" (UID: \"3e660422-3d8e-4716-b1df-6aa0d193e8f6\") " pod="openstack-operators/watcher-operator-controller-manager-85b655bd8f-llgvv" Mar 09 18:42:03 crc kubenswrapper[4821]: I0309 
18:42:03.195664 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5rv6\" (UniqueName: \"kubernetes.io/projected/3e660422-3d8e-4716-b1df-6aa0d193e8f6-kube-api-access-r5rv6\") pod \"watcher-operator-controller-manager-85b655bd8f-llgvv\" (UID: \"3e660422-3d8e-4716-b1df-6aa0d193e8f6\") " pod="openstack-operators/watcher-operator-controller-manager-85b655bd8f-llgvv" Mar 09 18:42:03 crc kubenswrapper[4821]: I0309 18:42:03.195776 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3e660422-3d8e-4716-b1df-6aa0d193e8f6-webhook-cert\") pod \"watcher-operator-controller-manager-85b655bd8f-llgvv\" (UID: \"3e660422-3d8e-4716-b1df-6aa0d193e8f6\") " pod="openstack-operators/watcher-operator-controller-manager-85b655bd8f-llgvv" Mar 09 18:42:03 crc kubenswrapper[4821]: I0309 18:42:03.195808 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3e660422-3d8e-4716-b1df-6aa0d193e8f6-apiservice-cert\") pod \"watcher-operator-controller-manager-85b655bd8f-llgvv\" (UID: \"3e660422-3d8e-4716-b1df-6aa0d193e8f6\") " pod="openstack-operators/watcher-operator-controller-manager-85b655bd8f-llgvv" Mar 09 18:42:03 crc kubenswrapper[4821]: I0309 18:42:03.201849 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3e660422-3d8e-4716-b1df-6aa0d193e8f6-apiservice-cert\") pod \"watcher-operator-controller-manager-85b655bd8f-llgvv\" (UID: \"3e660422-3d8e-4716-b1df-6aa0d193e8f6\") " pod="openstack-operators/watcher-operator-controller-manager-85b655bd8f-llgvv" Mar 09 18:42:03 crc kubenswrapper[4821]: I0309 18:42:03.213336 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3e660422-3d8e-4716-b1df-6aa0d193e8f6-webhook-cert\") pod 
\"watcher-operator-controller-manager-85b655bd8f-llgvv\" (UID: \"3e660422-3d8e-4716-b1df-6aa0d193e8f6\") " pod="openstack-operators/watcher-operator-controller-manager-85b655bd8f-llgvv" Mar 09 18:42:03 crc kubenswrapper[4821]: I0309 18:42:03.243008 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5rv6\" (UniqueName: \"kubernetes.io/projected/3e660422-3d8e-4716-b1df-6aa0d193e8f6-kube-api-access-r5rv6\") pod \"watcher-operator-controller-manager-85b655bd8f-llgvv\" (UID: \"3e660422-3d8e-4716-b1df-6aa0d193e8f6\") " pod="openstack-operators/watcher-operator-controller-manager-85b655bd8f-llgvv" Mar 09 18:42:03 crc kubenswrapper[4821]: I0309 18:42:03.270495 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-85b655bd8f-llgvv" Mar 09 18:42:03 crc kubenswrapper[4821]: I0309 18:42:03.379578 4821 generic.go:334] "Generic (PLEG): container finished" podID="dbe98840-dd4e-4195-9627-71f679ccbeea" containerID="292d194dec3c2b376499143dee89f028951fbfaeff19f0f3f57efcbf39d62f2b" exitCode=0 Mar 09 18:42:03 crc kubenswrapper[4821]: I0309 18:42:03.379624 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551362-4l6b5" event={"ID":"dbe98840-dd4e-4195-9627-71f679ccbeea","Type":"ContainerDied","Data":"292d194dec3c2b376499143dee89f028951fbfaeff19f0f3f57efcbf39d62f2b"} Mar 09 18:42:03 crc kubenswrapper[4821]: W0309 18:42:03.712168 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e660422_3d8e_4716_b1df_6aa0d193e8f6.slice/crio-550b442b273425e782eaab2535120706a41b779d4e802e706dc430feca72217b WatchSource:0}: Error finding container 550b442b273425e782eaab2535120706a41b779d4e802e706dc430feca72217b: Status 404 returned error can't find the container with id 550b442b273425e782eaab2535120706a41b779d4e802e706dc430feca72217b Mar 09 18:42:03 crc 
kubenswrapper[4821]: I0309 18:42:03.712636 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-85b655bd8f-llgvv"]
Mar 09 18:42:04 crc kubenswrapper[4821]: I0309 18:42:04.389950 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-85b655bd8f-llgvv" event={"ID":"3e660422-3d8e-4716-b1df-6aa0d193e8f6","Type":"ContainerStarted","Data":"4d74a4602a945bc825f1a6656b5114c0cd7c02ce96cb1e37c834118f5c3d8fa0"}
Mar 09 18:42:04 crc kubenswrapper[4821]: I0309 18:42:04.389996 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-85b655bd8f-llgvv" event={"ID":"3e660422-3d8e-4716-b1df-6aa0d193e8f6","Type":"ContainerStarted","Data":"550b442b273425e782eaab2535120706a41b779d4e802e706dc430feca72217b"}
Mar 09 18:42:04 crc kubenswrapper[4821]: I0309 18:42:04.392800 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-85b655bd8f-llgvv"
Mar 09 18:42:04 crc kubenswrapper[4821]: I0309 18:42:04.427633 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-85b655bd8f-llgvv" podStartSLOduration=2.4276041360000002 podStartE2EDuration="2.427604136s" podCreationTimestamp="2026-03-09 18:42:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:42:04.421643214 +0000 UTC m=+1061.583019070" watchObservedRunningTime="2026-03-09 18:42:04.427604136 +0000 UTC m=+1061.588980072"
Mar 09 18:42:04 crc kubenswrapper[4821]: I0309 18:42:04.759824 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551362-4l6b5"
Mar 09 18:42:04 crc kubenswrapper[4821]: I0309 18:42:04.823164 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bggp\" (UniqueName: \"kubernetes.io/projected/dbe98840-dd4e-4195-9627-71f679ccbeea-kube-api-access-5bggp\") pod \"dbe98840-dd4e-4195-9627-71f679ccbeea\" (UID: \"dbe98840-dd4e-4195-9627-71f679ccbeea\") "
Mar 09 18:42:04 crc kubenswrapper[4821]: I0309 18:42:04.832630 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe98840-dd4e-4195-9627-71f679ccbeea-kube-api-access-5bggp" (OuterVolumeSpecName: "kube-api-access-5bggp") pod "dbe98840-dd4e-4195-9627-71f679ccbeea" (UID: "dbe98840-dd4e-4195-9627-71f679ccbeea"). InnerVolumeSpecName "kube-api-access-5bggp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:42:04 crc kubenswrapper[4821]: I0309 18:42:04.924833 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bggp\" (UniqueName: \"kubernetes.io/projected/dbe98840-dd4e-4195-9627-71f679ccbeea-kube-api-access-5bggp\") on node \"crc\" DevicePath \"\""
Mar 09 18:42:05 crc kubenswrapper[4821]: I0309 18:42:05.403628 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551362-4l6b5"
Mar 09 18:42:05 crc kubenswrapper[4821]: I0309 18:42:05.403742 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551362-4l6b5" event={"ID":"dbe98840-dd4e-4195-9627-71f679ccbeea","Type":"ContainerDied","Data":"68c16025d67d7a775ecd8a296051341629edb3ff09597775e8003992755d4991"}
Mar 09 18:42:05 crc kubenswrapper[4821]: I0309 18:42:05.403793 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68c16025d67d7a775ecd8a296051341629edb3ff09597775e8003992755d4991"
Mar 09 18:42:05 crc kubenswrapper[4821]: I0309 18:42:05.458494 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29551356-zfwvf"]
Mar 09 18:42:05 crc kubenswrapper[4821]: I0309 18:42:05.465769 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29551356-zfwvf"]
Mar 09 18:42:05 crc kubenswrapper[4821]: I0309 18:42:05.562394 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df914183-d942-4bef-91f2-14579dc3290d" path="/var/lib/kubelet/pods/df914183-d942-4bef-91f2-14579dc3290d/volumes"
Mar 09 18:42:13 crc kubenswrapper[4821]: I0309 18:42:13.278155 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-85b655bd8f-llgvv"
Mar 09 18:42:13 crc kubenswrapper[4821]: I0309 18:42:13.412475 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75f555f9d6-zj8cz"]
Mar 09 18:42:13 crc kubenswrapper[4821]: I0309 18:42:13.412992 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-controller-manager-75f555f9d6-zj8cz" podUID="518630eb-b2b7-4be7-a154-31e2e4525fe1" containerName="manager" containerID="cri-o://903bd6d0aef85c69d525cdad7114fa463b01a5cffc8fff0f8a4eabd9e03fdc0f" gracePeriod=10
Mar 09 18:42:13 crc kubenswrapper[4821]: I0309 18:42:13.890546 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75f555f9d6-zj8cz"
Mar 09 18:42:13 crc kubenswrapper[4821]: I0309 18:42:13.970717 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/518630eb-b2b7-4be7-a154-31e2e4525fe1-apiservice-cert\") pod \"518630eb-b2b7-4be7-a154-31e2e4525fe1\" (UID: \"518630eb-b2b7-4be7-a154-31e2e4525fe1\") "
Mar 09 18:42:13 crc kubenswrapper[4821]: I0309 18:42:13.971142 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/518630eb-b2b7-4be7-a154-31e2e4525fe1-webhook-cert\") pod \"518630eb-b2b7-4be7-a154-31e2e4525fe1\" (UID: \"518630eb-b2b7-4be7-a154-31e2e4525fe1\") "
Mar 09 18:42:13 crc kubenswrapper[4821]: I0309 18:42:13.971167 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5k49\" (UniqueName: \"kubernetes.io/projected/518630eb-b2b7-4be7-a154-31e2e4525fe1-kube-api-access-k5k49\") pod \"518630eb-b2b7-4be7-a154-31e2e4525fe1\" (UID: \"518630eb-b2b7-4be7-a154-31e2e4525fe1\") "
Mar 09 18:42:13 crc kubenswrapper[4821]: I0309 18:42:13.976261 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/518630eb-b2b7-4be7-a154-31e2e4525fe1-kube-api-access-k5k49" (OuterVolumeSpecName: "kube-api-access-k5k49") pod "518630eb-b2b7-4be7-a154-31e2e4525fe1" (UID: "518630eb-b2b7-4be7-a154-31e2e4525fe1"). InnerVolumeSpecName "kube-api-access-k5k49". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:42:13 crc kubenswrapper[4821]: I0309 18:42:13.976492 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/518630eb-b2b7-4be7-a154-31e2e4525fe1-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "518630eb-b2b7-4be7-a154-31e2e4525fe1" (UID: "518630eb-b2b7-4be7-a154-31e2e4525fe1"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 18:42:13 crc kubenswrapper[4821]: I0309 18:42:13.977061 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/518630eb-b2b7-4be7-a154-31e2e4525fe1-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "518630eb-b2b7-4be7-a154-31e2e4525fe1" (UID: "518630eb-b2b7-4be7-a154-31e2e4525fe1"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 18:42:14 crc kubenswrapper[4821]: I0309 18:42:14.073050 4821 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/518630eb-b2b7-4be7-a154-31e2e4525fe1-apiservice-cert\") on node \"crc\" DevicePath \"\""
Mar 09 18:42:14 crc kubenswrapper[4821]: I0309 18:42:14.073097 4821 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/518630eb-b2b7-4be7-a154-31e2e4525fe1-webhook-cert\") on node \"crc\" DevicePath \"\""
Mar 09 18:42:14 crc kubenswrapper[4821]: I0309 18:42:14.073117 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5k49\" (UniqueName: \"kubernetes.io/projected/518630eb-b2b7-4be7-a154-31e2e4525fe1-kube-api-access-k5k49\") on node \"crc\" DevicePath \"\""
Mar 09 18:42:14 crc kubenswrapper[4821]: I0309 18:42:14.476507 4821 generic.go:334] "Generic (PLEG): container finished" podID="518630eb-b2b7-4be7-a154-31e2e4525fe1" containerID="903bd6d0aef85c69d525cdad7114fa463b01a5cffc8fff0f8a4eabd9e03fdc0f" exitCode=0
Mar 09 18:42:14 crc kubenswrapper[4821]: I0309 18:42:14.476566 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75f555f9d6-zj8cz"
Mar 09 18:42:14 crc kubenswrapper[4821]: I0309 18:42:14.476566 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75f555f9d6-zj8cz" event={"ID":"518630eb-b2b7-4be7-a154-31e2e4525fe1","Type":"ContainerDied","Data":"903bd6d0aef85c69d525cdad7114fa463b01a5cffc8fff0f8a4eabd9e03fdc0f"}
Mar 09 18:42:14 crc kubenswrapper[4821]: I0309 18:42:14.476739 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75f555f9d6-zj8cz" event={"ID":"518630eb-b2b7-4be7-a154-31e2e4525fe1","Type":"ContainerDied","Data":"db0ff3a094624ee1dff100f32723ac26140a459f7d748ae8127ef7cf264bde80"}
Mar 09 18:42:14 crc kubenswrapper[4821]: I0309 18:42:14.476769 4821 scope.go:117] "RemoveContainer" containerID="903bd6d0aef85c69d525cdad7114fa463b01a5cffc8fff0f8a4eabd9e03fdc0f"
Mar 09 18:42:14 crc kubenswrapper[4821]: I0309 18:42:14.504344 4821 scope.go:117] "RemoveContainer" containerID="903bd6d0aef85c69d525cdad7114fa463b01a5cffc8fff0f8a4eabd9e03fdc0f"
Mar 09 18:42:14 crc kubenswrapper[4821]: E0309 18:42:14.506152 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"903bd6d0aef85c69d525cdad7114fa463b01a5cffc8fff0f8a4eabd9e03fdc0f\": container with ID starting with 903bd6d0aef85c69d525cdad7114fa463b01a5cffc8fff0f8a4eabd9e03fdc0f not found: ID does not exist" containerID="903bd6d0aef85c69d525cdad7114fa463b01a5cffc8fff0f8a4eabd9e03fdc0f"
Mar 09 18:42:14 crc kubenswrapper[4821]: I0309 18:42:14.506197 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"903bd6d0aef85c69d525cdad7114fa463b01a5cffc8fff0f8a4eabd9e03fdc0f"} err="failed to get container status \"903bd6d0aef85c69d525cdad7114fa463b01a5cffc8fff0f8a4eabd9e03fdc0f\": rpc error: code = NotFound desc = could not find container \"903bd6d0aef85c69d525cdad7114fa463b01a5cffc8fff0f8a4eabd9e03fdc0f\": container with ID starting with 903bd6d0aef85c69d525cdad7114fa463b01a5cffc8fff0f8a4eabd9e03fdc0f not found: ID does not exist"
Mar 09 18:42:14 crc kubenswrapper[4821]: I0309 18:42:14.519553 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75f555f9d6-zj8cz"]
Mar 09 18:42:14 crc kubenswrapper[4821]: I0309 18:42:14.526692 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75f555f9d6-zj8cz"]
Mar 09 18:42:15 crc kubenswrapper[4821]: I0309 18:42:15.566282 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="518630eb-b2b7-4be7-a154-31e2e4525fe1" path="/var/lib/kubelet/pods/518630eb-b2b7-4be7-a154-31e2e4525fe1/volumes"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.756371 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/rabbitmq-notifications-server-0"]
Mar 09 18:42:25 crc kubenswrapper[4821]: E0309 18:42:25.757254 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbe98840-dd4e-4195-9627-71f679ccbeea" containerName="oc"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.757267 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbe98840-dd4e-4195-9627-71f679ccbeea" containerName="oc"
Mar 09 18:42:25 crc kubenswrapper[4821]: E0309 18:42:25.757285 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="518630eb-b2b7-4be7-a154-31e2e4525fe1" containerName="manager"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.757291 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="518630eb-b2b7-4be7-a154-31e2e4525fe1" containerName="manager"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.757440 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbe98840-dd4e-4195-9627-71f679ccbeea" containerName="oc"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.757457 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="518630eb-b2b7-4be7-a154-31e2e4525fe1" containerName="manager"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.758170 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.759695 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-notifications-server-dockercfg-7trls"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.759770 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-notifications-erlang-cookie"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.759910 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-notifications-server-conf"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.760259 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-notifications-plugins-conf"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.760628 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-notifications-config-data"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.760683 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-notifications-default-user"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.761658 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"kube-root-ca.crt"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.763590 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-rabbitmq-notifications-svc"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.764187 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openshift-service-ca.crt"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.776256 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-notifications-server-0"]
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.832688 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trxdt\" (UniqueName: \"kubernetes.io/projected/b4cf48ce-38c9-4dd4-b712-311a92dd29b6-kube-api-access-trxdt\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.832754 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b4cf48ce-38c9-4dd4-b712-311a92dd29b6-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.832786 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b4cf48ce-38c9-4dd4-b712-311a92dd29b6-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.832815 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b4cf48ce-38c9-4dd4-b712-311a92dd29b6-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.832839 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b4cf48ce-38c9-4dd4-b712-311a92dd29b6-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.832875 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8fdebe79-7f1a-47fe-a46f-ec73dd78c61f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8fdebe79-7f1a-47fe-a46f-ec73dd78c61f\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.832911 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b4cf48ce-38c9-4dd4-b712-311a92dd29b6-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.832940 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b4cf48ce-38c9-4dd4-b712-311a92dd29b6-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.832963 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b4cf48ce-38c9-4dd4-b712-311a92dd29b6-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.832983 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b4cf48ce-38c9-4dd4-b712-311a92dd29b6-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.833019 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b4cf48ce-38c9-4dd4-b712-311a92dd29b6-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.933527 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b4cf48ce-38c9-4dd4-b712-311a92dd29b6-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.933598 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b4cf48ce-38c9-4dd4-b712-311a92dd29b6-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.933636 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b4cf48ce-38c9-4dd4-b712-311a92dd29b6-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.933691 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b4cf48ce-38c9-4dd4-b712-311a92dd29b6-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.933753 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trxdt\" (UniqueName: \"kubernetes.io/projected/b4cf48ce-38c9-4dd4-b712-311a92dd29b6-kube-api-access-trxdt\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.933805 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b4cf48ce-38c9-4dd4-b712-311a92dd29b6-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.933846 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b4cf48ce-38c9-4dd4-b712-311a92dd29b6-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.933886 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b4cf48ce-38c9-4dd4-b712-311a92dd29b6-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.933919 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b4cf48ce-38c9-4dd4-b712-311a92dd29b6-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.933973 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8fdebe79-7f1a-47fe-a46f-ec73dd78c61f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8fdebe79-7f1a-47fe-a46f-ec73dd78c61f\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.934012 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b4cf48ce-38c9-4dd4-b712-311a92dd29b6-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.934584 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b4cf48ce-38c9-4dd4-b712-311a92dd29b6-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.934959 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b4cf48ce-38c9-4dd4-b712-311a92dd29b6-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.934961 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b4cf48ce-38c9-4dd4-b712-311a92dd29b6-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.935460 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b4cf48ce-38c9-4dd4-b712-311a92dd29b6-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.935843 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b4cf48ce-38c9-4dd4-b712-311a92dd29b6-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.942000 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b4cf48ce-38c9-4dd4-b712-311a92dd29b6-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.942204 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b4cf48ce-38c9-4dd4-b712-311a92dd29b6-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.942537 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b4cf48ce-38c9-4dd4-b712-311a92dd29b6-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.942738 4821 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.942766 4821 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8fdebe79-7f1a-47fe-a46f-ec73dd78c61f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8fdebe79-7f1a-47fe-a46f-ec73dd78c61f\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/709c78394ea8004db57384f8e843fee3485225b96fffba51fdbf4c24b5e6b701/globalmount\"" pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.946076 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b4cf48ce-38c9-4dd4-b712-311a92dd29b6-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.952517 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trxdt\" (UniqueName: \"kubernetes.io/projected/b4cf48ce-38c9-4dd4-b712-311a92dd29b6-kube-api-access-trxdt\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:25 crc kubenswrapper[4821]: I0309 18:42:25.973831 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8fdebe79-7f1a-47fe-a46f-ec73dd78c61f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8fdebe79-7f1a-47fe-a46f-ec73dd78c61f\") pod \"rabbitmq-notifications-server-0\" (UID: \"b4cf48ce-38c9-4dd4-b712-311a92dd29b6\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.032736 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/rabbitmq-server-0"]
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.037113 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/rabbitmq-server-0"
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.038999 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-erlang-cookie"
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.039095 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-server-conf"
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.039110 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-config-data"
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.039165 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-default-user"
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.039272 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-server-dockercfg-sxrl4"
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.042632 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-server-0"]
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.043241 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-rabbitmq-svc"
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.058484 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-plugins-conf"
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.076022 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.241188 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ace06b27-8092-4676-9bae-4df7c1044b98-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.241568 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ace06b27-8092-4676-9bae-4df7c1044b98-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.241668 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ace06b27-8092-4676-9bae-4df7c1044b98-config-data\") pod \"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.241702 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d0001144-58ef-40f3-b3f5-5cdf57b949df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d0001144-58ef-40f3-b3f5-5cdf57b949df\") pod \"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.241767 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ace06b27-8092-4676-9bae-4df7c1044b98-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.242060 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ace06b27-8092-4676-9bae-4df7c1044b98-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.242137 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g72cw\" (UniqueName: \"kubernetes.io/projected/ace06b27-8092-4676-9bae-4df7c1044b98-kube-api-access-g72cw\") pod \"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.242211 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ace06b27-8092-4676-9bae-4df7c1044b98-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.242247 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ace06b27-8092-4676-9bae-4df7c1044b98-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.242312 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ace06b27-8092-4676-9bae-4df7c1044b98-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.242388 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ace06b27-8092-4676-9bae-4df7c1044b98-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.343568 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ace06b27-8092-4676-9bae-4df7c1044b98-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.343627 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ace06b27-8092-4676-9bae-4df7c1044b98-config-data\") pod \"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.343660 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d0001144-58ef-40f3-b3f5-5cdf57b949df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d0001144-58ef-40f3-b3f5-5cdf57b949df\") pod \"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.343684 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ace06b27-8092-4676-9bae-4df7c1044b98-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.344459 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ace06b27-8092-4676-9bae-4df7c1044b98-config-data\") pod \"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.344554 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ace06b27-8092-4676-9bae-4df7c1044b98-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.344602 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g72cw\" (UniqueName: \"kubernetes.io/projected/ace06b27-8092-4676-9bae-4df7c1044b98-kube-api-access-g72cw\") pod \"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.344630 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ace06b27-8092-4676-9bae-4df7c1044b98-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.344685 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ace06b27-8092-4676-9bae-4df7c1044b98-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0" Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.344714 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ace06b27-8092-4676-9bae-4df7c1044b98-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0" Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.344962 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ace06b27-8092-4676-9bae-4df7c1044b98-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0" Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.345078 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ace06b27-8092-4676-9bae-4df7c1044b98-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0" Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.345112 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ace06b27-8092-4676-9bae-4df7c1044b98-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0" Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.345135 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ace06b27-8092-4676-9bae-4df7c1044b98-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0" 
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.345220 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ace06b27-8092-4676-9bae-4df7c1044b98-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0" Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.345852 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ace06b27-8092-4676-9bae-4df7c1044b98-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0" Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.351248 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ace06b27-8092-4676-9bae-4df7c1044b98-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0" Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.351541 4821 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.351570 4821 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d0001144-58ef-40f3-b3f5-5cdf57b949df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d0001144-58ef-40f3-b3f5-5cdf57b949df\") pod \"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b6bf8033574f0d37c2946ea7d506e34dc75f64e8d82e19dd54b2f0af43ab7cb8/globalmount\"" pod="watcher-kuttl-default/rabbitmq-server-0" Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.356642 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ace06b27-8092-4676-9bae-4df7c1044b98-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0" Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.357278 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ace06b27-8092-4676-9bae-4df7c1044b98-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0" Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.360788 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g72cw\" (UniqueName: \"kubernetes.io/projected/ace06b27-8092-4676-9bae-4df7c1044b98-kube-api-access-g72cw\") pod \"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0" Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.362659 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ace06b27-8092-4676-9bae-4df7c1044b98-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0" Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.381030 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d0001144-58ef-40f3-b3f5-5cdf57b949df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d0001144-58ef-40f3-b3f5-5cdf57b949df\") pod \"rabbitmq-server-0\" (UID: \"ace06b27-8092-4676-9bae-4df7c1044b98\") " pod="watcher-kuttl-default/rabbitmq-server-0" Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.566547 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-notifications-server-0"] Mar 09 18:42:26 crc kubenswrapper[4821]: W0309 18:42:26.567828 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4cf48ce_38c9_4dd4_b712_311a92dd29b6.slice/crio-94708299835cb2b92939cb7e040c8cc0ba39a40ed0a9438ba86442305ded25e1 WatchSource:0}: Error finding container 94708299835cb2b92939cb7e040c8cc0ba39a40ed0a9438ba86442305ded25e1: Status 404 returned error can't find the container with id 94708299835cb2b92939cb7e040c8cc0ba39a40ed0a9438ba86442305ded25e1 Mar 09 18:42:26 crc kubenswrapper[4821]: I0309 18:42:26.669837 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/rabbitmq-server-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.100555 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-server-0"] Mar 09 18:42:27 crc kubenswrapper[4821]: W0309 18:42:27.106555 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podace06b27_8092_4676_9bae_4df7c1044b98.slice/crio-6b178f33e5ad7e09aefaa9d5d8b696749d9bcc474e997300058315b3ed839353 WatchSource:0}: Error finding container 6b178f33e5ad7e09aefaa9d5d8b696749d9bcc474e997300058315b3ed839353: Status 404 returned error can't find the container with id 6b178f33e5ad7e09aefaa9d5d8b696749d9bcc474e997300058315b3ed839353 Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.329497 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/openstack-galera-0"] Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.330933 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/openstack-galera-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.333677 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openstack-scripts" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.333874 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openstack-config-data" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.335018 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"galera-openstack-dockercfg-jxffp" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.335697 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-galera-openstack-svc" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.336031 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstack-galera-0"] Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.357285 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"combined-ca-bundle" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.472179 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e0cec899-aa83-4720-8f75-bc2fc5002a28-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e0cec899-aa83-4720-8f75-bc2fc5002a28\") " pod="watcher-kuttl-default/openstack-galera-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.472244 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a0d14990-ea6d-41b1-8fea-af5908d400ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0d14990-ea6d-41b1-8fea-af5908d400ac\") pod \"openstack-galera-0\" (UID: \"e0cec899-aa83-4720-8f75-bc2fc5002a28\") " pod="watcher-kuttl-default/openstack-galera-0" 
Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.472360 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e0cec899-aa83-4720-8f75-bc2fc5002a28-config-data-default\") pod \"openstack-galera-0\" (UID: \"e0cec899-aa83-4720-8f75-bc2fc5002a28\") " pod="watcher-kuttl-default/openstack-galera-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.472390 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0cec899-aa83-4720-8f75-bc2fc5002a28-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e0cec899-aa83-4720-8f75-bc2fc5002a28\") " pod="watcher-kuttl-default/openstack-galera-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.472415 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e0cec899-aa83-4720-8f75-bc2fc5002a28-kolla-config\") pod \"openstack-galera-0\" (UID: \"e0cec899-aa83-4720-8f75-bc2fc5002a28\") " pod="watcher-kuttl-default/openstack-galera-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.472432 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgnqt\" (UniqueName: \"kubernetes.io/projected/e0cec899-aa83-4720-8f75-bc2fc5002a28-kube-api-access-wgnqt\") pod \"openstack-galera-0\" (UID: \"e0cec899-aa83-4720-8f75-bc2fc5002a28\") " pod="watcher-kuttl-default/openstack-galera-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.472526 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0cec899-aa83-4720-8f75-bc2fc5002a28-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e0cec899-aa83-4720-8f75-bc2fc5002a28\") " 
pod="watcher-kuttl-default/openstack-galera-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.472555 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0cec899-aa83-4720-8f75-bc2fc5002a28-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e0cec899-aa83-4720-8f75-bc2fc5002a28\") " pod="watcher-kuttl-default/openstack-galera-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.576556 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e0cec899-aa83-4720-8f75-bc2fc5002a28-config-data-default\") pod \"openstack-galera-0\" (UID: \"e0cec899-aa83-4720-8f75-bc2fc5002a28\") " pod="watcher-kuttl-default/openstack-galera-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.576620 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0cec899-aa83-4720-8f75-bc2fc5002a28-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e0cec899-aa83-4720-8f75-bc2fc5002a28\") " pod="watcher-kuttl-default/openstack-galera-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.576654 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e0cec899-aa83-4720-8f75-bc2fc5002a28-kolla-config\") pod \"openstack-galera-0\" (UID: \"e0cec899-aa83-4720-8f75-bc2fc5002a28\") " pod="watcher-kuttl-default/openstack-galera-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.576683 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgnqt\" (UniqueName: \"kubernetes.io/projected/e0cec899-aa83-4720-8f75-bc2fc5002a28-kube-api-access-wgnqt\") pod \"openstack-galera-0\" (UID: \"e0cec899-aa83-4720-8f75-bc2fc5002a28\") " pod="watcher-kuttl-default/openstack-galera-0" 
Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.576731 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0cec899-aa83-4720-8f75-bc2fc5002a28-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e0cec899-aa83-4720-8f75-bc2fc5002a28\") " pod="watcher-kuttl-default/openstack-galera-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.576758 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0cec899-aa83-4720-8f75-bc2fc5002a28-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e0cec899-aa83-4720-8f75-bc2fc5002a28\") " pod="watcher-kuttl-default/openstack-galera-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.576796 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e0cec899-aa83-4720-8f75-bc2fc5002a28-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e0cec899-aa83-4720-8f75-bc2fc5002a28\") " pod="watcher-kuttl-default/openstack-galera-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.576839 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a0d14990-ea6d-41b1-8fea-af5908d400ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0d14990-ea6d-41b1-8fea-af5908d400ac\") pod \"openstack-galera-0\" (UID: \"e0cec899-aa83-4720-8f75-bc2fc5002a28\") " pod="watcher-kuttl-default/openstack-galera-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.577882 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e0cec899-aa83-4720-8f75-bc2fc5002a28-config-data-default\") pod \"openstack-galera-0\" (UID: \"e0cec899-aa83-4720-8f75-bc2fc5002a28\") " pod="watcher-kuttl-default/openstack-galera-0" Mar 09 18:42:27 crc 
kubenswrapper[4821]: I0309 18:42:27.577961 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e0cec899-aa83-4720-8f75-bc2fc5002a28-kolla-config\") pod \"openstack-galera-0\" (UID: \"e0cec899-aa83-4720-8f75-bc2fc5002a28\") " pod="watcher-kuttl-default/openstack-galera-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.578281 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e0cec899-aa83-4720-8f75-bc2fc5002a28-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e0cec899-aa83-4720-8f75-bc2fc5002a28\") " pod="watcher-kuttl-default/openstack-galera-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.581596 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0cec899-aa83-4720-8f75-bc2fc5002a28-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e0cec899-aa83-4720-8f75-bc2fc5002a28\") " pod="watcher-kuttl-default/openstack-galera-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.587525 4821 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.587571 4821 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a0d14990-ea6d-41b1-8fea-af5908d400ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0d14990-ea6d-41b1-8fea-af5908d400ac\") pod \"openstack-galera-0\" (UID: \"e0cec899-aa83-4720-8f75-bc2fc5002a28\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d375af4f70f52d9ccd2c5dcc29354d0a92219ffc79f48633b3a08715f6ec2d5f/globalmount\"" pod="watcher-kuttl-default/openstack-galera-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.588010 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0cec899-aa83-4720-8f75-bc2fc5002a28-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e0cec899-aa83-4720-8f75-bc2fc5002a28\") " pod="watcher-kuttl-default/openstack-galera-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.588362 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0cec899-aa83-4720-8f75-bc2fc5002a28-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e0cec899-aa83-4720-8f75-bc2fc5002a28\") " pod="watcher-kuttl-default/openstack-galera-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.600940 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"ace06b27-8092-4676-9bae-4df7c1044b98","Type":"ContainerStarted","Data":"6b178f33e5ad7e09aefaa9d5d8b696749d9bcc474e997300058315b3ed839353"} Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.602719 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgnqt\" (UniqueName: \"kubernetes.io/projected/e0cec899-aa83-4720-8f75-bc2fc5002a28-kube-api-access-wgnqt\") pod \"openstack-galera-0\" (UID: 
\"e0cec899-aa83-4720-8f75-bc2fc5002a28\") " pod="watcher-kuttl-default/openstack-galera-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.610178 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"b4cf48ce-38c9-4dd4-b712-311a92dd29b6","Type":"ContainerStarted","Data":"94708299835cb2b92939cb7e040c8cc0ba39a40ed0a9438ba86442305ded25e1"} Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.614548 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/memcached-0"] Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.621632 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.623985 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-memcached-svc" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.624266 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"memcached-memcached-dockercfg-pcmms" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.627513 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"memcached-config-data" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.637008 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"] Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.645522 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a0d14990-ea6d-41b1-8fea-af5908d400ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0d14990-ea6d-41b1-8fea-af5908d400ac\") pod \"openstack-galera-0\" (UID: \"e0cec899-aa83-4720-8f75-bc2fc5002a28\") " pod="watcher-kuttl-default/openstack-galera-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.670719 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/openstack-galera-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.784998 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/63f25f4d-2a2d-48af-9764-27a0826495b0-kolla-config\") pod \"memcached-0\" (UID: \"63f25f4d-2a2d-48af-9764-27a0826495b0\") " pod="watcher-kuttl-default/memcached-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.785068 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f25f4d-2a2d-48af-9764-27a0826495b0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"63f25f4d-2a2d-48af-9764-27a0826495b0\") " pod="watcher-kuttl-default/memcached-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.785108 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63f25f4d-2a2d-48af-9764-27a0826495b0-config-data\") pod \"memcached-0\" (UID: \"63f25f4d-2a2d-48af-9764-27a0826495b0\") " pod="watcher-kuttl-default/memcached-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.785142 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqqbt\" (UniqueName: \"kubernetes.io/projected/63f25f4d-2a2d-48af-9764-27a0826495b0-kube-api-access-rqqbt\") pod \"memcached-0\" (UID: \"63f25f4d-2a2d-48af-9764-27a0826495b0\") " pod="watcher-kuttl-default/memcached-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.785172 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/63f25f4d-2a2d-48af-9764-27a0826495b0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"63f25f4d-2a2d-48af-9764-27a0826495b0\") " 
pod="watcher-kuttl-default/memcached-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.886236 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/63f25f4d-2a2d-48af-9764-27a0826495b0-kolla-config\") pod \"memcached-0\" (UID: \"63f25f4d-2a2d-48af-9764-27a0826495b0\") " pod="watcher-kuttl-default/memcached-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.886338 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f25f4d-2a2d-48af-9764-27a0826495b0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"63f25f4d-2a2d-48af-9764-27a0826495b0\") " pod="watcher-kuttl-default/memcached-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.886377 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63f25f4d-2a2d-48af-9764-27a0826495b0-config-data\") pod \"memcached-0\" (UID: \"63f25f4d-2a2d-48af-9764-27a0826495b0\") " pod="watcher-kuttl-default/memcached-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.886404 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqqbt\" (UniqueName: \"kubernetes.io/projected/63f25f4d-2a2d-48af-9764-27a0826495b0-kube-api-access-rqqbt\") pod \"memcached-0\" (UID: \"63f25f4d-2a2d-48af-9764-27a0826495b0\") " pod="watcher-kuttl-default/memcached-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.886464 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/63f25f4d-2a2d-48af-9764-27a0826495b0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"63f25f4d-2a2d-48af-9764-27a0826495b0\") " pod="watcher-kuttl-default/memcached-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.897354 4821 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63f25f4d-2a2d-48af-9764-27a0826495b0-config-data\") pod \"memcached-0\" (UID: \"63f25f4d-2a2d-48af-9764-27a0826495b0\") " pod="watcher-kuttl-default/memcached-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.900588 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/63f25f4d-2a2d-48af-9764-27a0826495b0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"63f25f4d-2a2d-48af-9764-27a0826495b0\") " pod="watcher-kuttl-default/memcached-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.907925 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/63f25f4d-2a2d-48af-9764-27a0826495b0-kolla-config\") pod \"memcached-0\" (UID: \"63f25f4d-2a2d-48af-9764-27a0826495b0\") " pod="watcher-kuttl-default/memcached-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.912310 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f25f4d-2a2d-48af-9764-27a0826495b0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"63f25f4d-2a2d-48af-9764-27a0826495b0\") " pod="watcher-kuttl-default/memcached-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.921825 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.922988 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.928732 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqqbt\" (UniqueName: \"kubernetes.io/projected/63f25f4d-2a2d-48af-9764-27a0826495b0-kube-api-access-rqqbt\") pod \"memcached-0\" (UID: \"63f25f4d-2a2d-48af-9764-27a0826495b0\") " pod="watcher-kuttl-default/memcached-0" Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.931378 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Mar 09 18:42:27 crc kubenswrapper[4821]: I0309 18:42:27.973138 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"telemetry-ceilometer-dockercfg-84n6g" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.024812 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.125300 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz5dc\" (UniqueName: \"kubernetes.io/projected/0efefaf4-58a1-488a-a9ec-703c46ce0c00-kube-api-access-pz5dc\") pod \"kube-state-metrics-0\" (UID: \"0efefaf4-58a1-488a-a9ec-703c46ce0c00\") " pod="watcher-kuttl-default/kube-state-metrics-0" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.231812 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz5dc\" (UniqueName: \"kubernetes.io/projected/0efefaf4-58a1-488a-a9ec-703c46ce0c00-kube-api-access-pz5dc\") pod \"kube-state-metrics-0\" (UID: \"0efefaf4-58a1-488a-a9ec-703c46ce0c00\") " pod="watcher-kuttl-default/kube-state-metrics-0" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.233466 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstack-galera-0"] Mar 09 18:42:28 crc 
kubenswrapper[4821]: I0309 18:42:28.257255 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz5dc\" (UniqueName: \"kubernetes.io/projected/0efefaf4-58a1-488a-a9ec-703c46ce0c00-kube-api-access-pz5dc\") pod \"kube-state-metrics-0\" (UID: \"0efefaf4-58a1-488a-a9ec-703c46ce0c00\") " pod="watcher-kuttl-default/kube-state-metrics-0" Mar 09 18:42:28 crc kubenswrapper[4821]: W0309 18:42:28.271792 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0cec899_aa83_4720_8f75_bc2fc5002a28.slice/crio-b4bcbfbeb8eb4c32571b4e95d88bbf765224fec862d042a482d66495d864a6f7 WatchSource:0}: Error finding container b4bcbfbeb8eb4c32571b4e95d88bbf765224fec862d042a482d66495d864a6f7: Status 404 returned error can't find the container with id b4bcbfbeb8eb4c32571b4e95d88bbf765224fec862d042a482d66495d864a6f7 Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.312992 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.619988 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"e0cec899-aa83-4720-8f75-bc2fc5002a28","Type":"ContainerStarted","Data":"b4bcbfbeb8eb4c32571b4e95d88bbf765224fec862d042a482d66495d864a6f7"} Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.675960 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"] Mar 09 18:42:28 crc kubenswrapper[4821]: W0309 18:42:28.684251 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63f25f4d_2a2d_48af_9764_27a0826495b0.slice/crio-f130b3f49389381e40dba4562632a8a2a998127c54ca91d55a6a3a04dc156108 WatchSource:0}: Error finding container f130b3f49389381e40dba4562632a8a2a998127c54ca91d55a6a3a04dc156108: Status 404 returned error can't find the container with id f130b3f49389381e40dba4562632a8a2a998127c54ca91d55a6a3a04dc156108 Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.705534 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/alertmanager-metric-storage-0"] Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.707582 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.710806 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-generated" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.710960 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-tls-assets-0" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.711180 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"metric-storage-alertmanager-dockercfg-jjbh2" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.711362 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-web-config" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.711454 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-cluster-tls-config" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.721381 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/alertmanager-metric-storage-0"] Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.738908 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fa689f50-deca-4456-946b-edd730385d48-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"fa689f50-deca-4456-946b-edd730385d48\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.738964 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fa689f50-deca-4456-946b-edd730385d48-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: 
\"fa689f50-deca-4456-946b-edd730385d48\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.739005 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fa689f50-deca-4456-946b-edd730385d48-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"fa689f50-deca-4456-946b-edd730385d48\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.739103 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/fa689f50-deca-4456-946b-edd730385d48-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"fa689f50-deca-4456-946b-edd730385d48\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.739173 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/fa689f50-deca-4456-946b-edd730385d48-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"fa689f50-deca-4456-946b-edd730385d48\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.739202 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/fa689f50-deca-4456-946b-edd730385d48-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"fa689f50-deca-4456-946b-edd730385d48\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.739296 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6pvc\" 
(UniqueName: \"kubernetes.io/projected/fa689f50-deca-4456-946b-edd730385d48-kube-api-access-q6pvc\") pod \"alertmanager-metric-storage-0\" (UID: \"fa689f50-deca-4456-946b-edd730385d48\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.814839 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Mar 09 18:42:28 crc kubenswrapper[4821]: W0309 18:42:28.832564 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0efefaf4_58a1_488a_a9ec_703c46ce0c00.slice/crio-4f28d7080d3da91a9034b718992c7c2c9b70ab8bbfb155c9881ca054383d293c WatchSource:0}: Error finding container 4f28d7080d3da91a9034b718992c7c2c9b70ab8bbfb155c9881ca054383d293c: Status 404 returned error can't find the container with id 4f28d7080d3da91a9034b718992c7c2c9b70ab8bbfb155c9881ca054383d293c Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.840282 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fa689f50-deca-4456-946b-edd730385d48-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"fa689f50-deca-4456-946b-edd730385d48\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.840360 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fa689f50-deca-4456-946b-edd730385d48-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"fa689f50-deca-4456-946b-edd730385d48\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.840398 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: 
\"kubernetes.io/secret/fa689f50-deca-4456-946b-edd730385d48-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"fa689f50-deca-4456-946b-edd730385d48\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.840424 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/fa689f50-deca-4456-946b-edd730385d48-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"fa689f50-deca-4456-946b-edd730385d48\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.840445 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/fa689f50-deca-4456-946b-edd730385d48-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"fa689f50-deca-4456-946b-edd730385d48\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.840464 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6pvc\" (UniqueName: \"kubernetes.io/projected/fa689f50-deca-4456-946b-edd730385d48-kube-api-access-q6pvc\") pod \"alertmanager-metric-storage-0\" (UID: \"fa689f50-deca-4456-946b-edd730385d48\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.840507 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fa689f50-deca-4456-946b-edd730385d48-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"fa689f50-deca-4456-946b-edd730385d48\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.843178 4821 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/fa689f50-deca-4456-946b-edd730385d48-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"fa689f50-deca-4456-946b-edd730385d48\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.849411 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/fa689f50-deca-4456-946b-edd730385d48-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"fa689f50-deca-4456-946b-edd730385d48\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.849963 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fa689f50-deca-4456-946b-edd730385d48-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"fa689f50-deca-4456-946b-edd730385d48\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.850072 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fa689f50-deca-4456-946b-edd730385d48-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"fa689f50-deca-4456-946b-edd730385d48\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.850230 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fa689f50-deca-4456-946b-edd730385d48-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"fa689f50-deca-4456-946b-edd730385d48\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.852631 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/secret/fa689f50-deca-4456-946b-edd730385d48-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"fa689f50-deca-4456-946b-edd730385d48\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.858199 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6pvc\" (UniqueName: \"kubernetes.io/projected/fa689f50-deca-4456-946b-edd730385d48-kube-api-access-q6pvc\") pod \"alertmanager-metric-storage-0\" (UID: \"fa689f50-deca-4456-946b-edd730385d48\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.964577 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-hkkkk"] Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.965814 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-hkkkk" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.972165 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.972435 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-v56qf" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.976790 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce0d9e34-5f6c-4503-95a0-6a127c905bee-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-hkkkk\" (UID: \"ce0d9e34-5f6c-4503-95a0-6a127c905bee\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-hkkkk" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.976891 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-929qx\" (UniqueName: \"kubernetes.io/projected/ce0d9e34-5f6c-4503-95a0-6a127c905bee-kube-api-access-929qx\") pod \"observability-ui-dashboards-66cbf594b5-hkkkk\" (UID: \"ce0d9e34-5f6c-4503-95a0-6a127c905bee\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-hkkkk" Mar 09 18:42:28 crc kubenswrapper[4821]: I0309 18:42:28.987730 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-hkkkk"] Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.030213 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.078490 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce0d9e34-5f6c-4503-95a0-6a127c905bee-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-hkkkk\" (UID: \"ce0d9e34-5f6c-4503-95a0-6a127c905bee\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-hkkkk" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.078580 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-929qx\" (UniqueName: \"kubernetes.io/projected/ce0d9e34-5f6c-4503-95a0-6a127c905bee-kube-api-access-929qx\") pod \"observability-ui-dashboards-66cbf594b5-hkkkk\" (UID: \"ce0d9e34-5f6c-4503-95a0-6a127c905bee\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-hkkkk" Mar 09 18:42:29 crc kubenswrapper[4821]: E0309 18:42:29.079041 4821 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Mar 09 18:42:29 crc kubenswrapper[4821]: E0309 18:42:29.079100 4821 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/ce0d9e34-5f6c-4503-95a0-6a127c905bee-serving-cert podName:ce0d9e34-5f6c-4503-95a0-6a127c905bee nodeName:}" failed. No retries permitted until 2026-03-09 18:42:29.579081908 +0000 UTC m=+1086.740457764 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ce0d9e34-5f6c-4503-95a0-6a127c905bee-serving-cert") pod "observability-ui-dashboards-66cbf594b5-hkkkk" (UID: "ce0d9e34-5f6c-4503-95a0-6a127c905bee") : secret "observability-ui-dashboards" not found Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.130221 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-929qx\" (UniqueName: \"kubernetes.io/projected/ce0d9e34-5f6c-4503-95a0-6a127c905bee-kube-api-access-929qx\") pod \"observability-ui-dashboards-66cbf594b5-hkkkk\" (UID: \"ce0d9e34-5f6c-4503-95a0-6a127c905bee\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-hkkkk" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.189620 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.191840 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.195088 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.195336 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-2" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.195376 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-tls-assets-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.195627 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-web-config" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.196041 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-1" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.196752 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.197405 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.200031 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"metric-storage-prometheus-dockercfg-6hjhr" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.319462 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-859b85d6d-tltz7"] Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.320546 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-859b85d6d-tltz7" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.344300 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.345491 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-859b85d6d-tltz7"] Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.388366 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cdbbf791-a981-4585-a944-863d0e1cc847-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.388421 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/cdbbf791-a981-4585-a944-863d0e1cc847-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.388445 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cdbbf791-a981-4585-a944-863d0e1cc847-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.388470 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f121f351-6ece-43c2-9d0a-9d86eccbd4c1\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f121f351-6ece-43c2-9d0a-9d86eccbd4c1\") pod \"prometheus-metric-storage-0\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.388487 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cdbbf791-a981-4585-a944-863d0e1cc847-config\") pod \"prometheus-metric-storage-0\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.388500 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cdbbf791-a981-4585-a944-863d0e1cc847-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.388524 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cdbbf791-a981-4585-a944-863d0e1cc847-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.388575 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cdbbf791-a981-4585-a944-863d0e1cc847-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.388594 4821 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/cdbbf791-a981-4585-a944-863d0e1cc847-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.388613 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t89pz\" (UniqueName: \"kubernetes.io/projected/cdbbf791-a981-4585-a944-863d0e1cc847-kube-api-access-t89pz\") pod \"prometheus-metric-storage-0\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.490236 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cdbbf791-a981-4585-a944-863d0e1cc847-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.490285 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f-console-oauth-config\") pod \"console-859b85d6d-tltz7\" (UID: \"0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f\") " pod="openshift-console/console-859b85d6d-tltz7" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.490437 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f-console-serving-cert\") pod \"console-859b85d6d-tltz7\" 
(UID: \"0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f\") " pod="openshift-console/console-859b85d6d-tltz7" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.490467 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cdbbf791-a981-4585-a944-863d0e1cc847-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.490485 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f-oauth-serving-cert\") pod \"console-859b85d6d-tltz7\" (UID: \"0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f\") " pod="openshift-console/console-859b85d6d-tltz7" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.490506 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/cdbbf791-a981-4585-a944-863d0e1cc847-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.490526 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t89pz\" (UniqueName: \"kubernetes.io/projected/cdbbf791-a981-4585-a944-863d0e1cc847-kube-api-access-t89pz\") pod \"prometheus-metric-storage-0\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.490564 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f-service-ca\") pod \"console-859b85d6d-tltz7\" (UID: \"0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f\") " pod="openshift-console/console-859b85d6d-tltz7" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.490581 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f-trusted-ca-bundle\") pod \"console-859b85d6d-tltz7\" (UID: \"0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f\") " pod="openshift-console/console-859b85d6d-tltz7" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.490595 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbwdg\" (UniqueName: \"kubernetes.io/projected/0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f-kube-api-access-pbwdg\") pod \"console-859b85d6d-tltz7\" (UID: \"0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f\") " pod="openshift-console/console-859b85d6d-tltz7" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.490628 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cdbbf791-a981-4585-a944-863d0e1cc847-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.490650 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f-console-config\") pod \"console-859b85d6d-tltz7\" (UID: \"0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f\") " pod="openshift-console/console-859b85d6d-tltz7" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.490685 4821 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/cdbbf791-a981-4585-a944-863d0e1cc847-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.490705 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cdbbf791-a981-4585-a944-863d0e1cc847-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.490733 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f121f351-6ece-43c2-9d0a-9d86eccbd4c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f121f351-6ece-43c2-9d0a-9d86eccbd4c1\") pod \"prometheus-metric-storage-0\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.490749 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cdbbf791-a981-4585-a944-863d0e1cc847-config\") pod \"prometheus-metric-storage-0\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.490765 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cdbbf791-a981-4585-a944-863d0e1cc847-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: 
I0309 18:42:29.492417 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/cdbbf791-a981-4585-a944-863d0e1cc847-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.492768 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/cdbbf791-a981-4585-a944-863d0e1cc847-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.492993 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cdbbf791-a981-4585-a944-863d0e1cc847-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.496124 4821 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.496149 4821 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f121f351-6ece-43c2-9d0a-9d86eccbd4c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f121f351-6ece-43c2-9d0a-9d86eccbd4c1\") pod \"prometheus-metric-storage-0\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9704a7b49b6380f59d1f734f97de4161168b5e073a0a5af270f11b899d130ccd/globalmount\"" pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.496953 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cdbbf791-a981-4585-a944-863d0e1cc847-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.497929 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cdbbf791-a981-4585-a944-863d0e1cc847-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.502901 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cdbbf791-a981-4585-a944-863d0e1cc847-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.511225 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t89pz\" (UniqueName: 
\"kubernetes.io/projected/cdbbf791-a981-4585-a944-863d0e1cc847-kube-api-access-t89pz\") pod \"prometheus-metric-storage-0\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.521617 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cdbbf791-a981-4585-a944-863d0e1cc847-config\") pod \"prometheus-metric-storage-0\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.528179 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f121f351-6ece-43c2-9d0a-9d86eccbd4c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f121f351-6ece-43c2-9d0a-9d86eccbd4c1\") pod \"prometheus-metric-storage-0\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.536744 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cdbbf791-a981-4585-a944-863d0e1cc847-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.591898 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f-console-config\") pod \"console-859b85d6d-tltz7\" (UID: \"0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f\") " pod="openshift-console/console-859b85d6d-tltz7" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.591980 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f-console-oauth-config\") pod \"console-859b85d6d-tltz7\" (UID: \"0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f\") " pod="openshift-console/console-859b85d6d-tltz7" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.592008 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce0d9e34-5f6c-4503-95a0-6a127c905bee-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-hkkkk\" (UID: \"ce0d9e34-5f6c-4503-95a0-6a127c905bee\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-hkkkk" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.592032 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f-console-serving-cert\") pod \"console-859b85d6d-tltz7\" (UID: \"0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f\") " pod="openshift-console/console-859b85d6d-tltz7" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.592054 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f-oauth-serving-cert\") pod \"console-859b85d6d-tltz7\" (UID: \"0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f\") " pod="openshift-console/console-859b85d6d-tltz7" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.592099 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f-service-ca\") pod \"console-859b85d6d-tltz7\" (UID: \"0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f\") " pod="openshift-console/console-859b85d6d-tltz7" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.592113 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f-trusted-ca-bundle\") pod \"console-859b85d6d-tltz7\" (UID: \"0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f\") " pod="openshift-console/console-859b85d6d-tltz7" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.592131 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbwdg\" (UniqueName: \"kubernetes.io/projected/0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f-kube-api-access-pbwdg\") pod \"console-859b85d6d-tltz7\" (UID: \"0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f\") " pod="openshift-console/console-859b85d6d-tltz7" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.593151 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f-console-config\") pod \"console-859b85d6d-tltz7\" (UID: \"0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f\") " pod="openshift-console/console-859b85d6d-tltz7" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.594146 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f-oauth-serving-cert\") pod \"console-859b85d6d-tltz7\" (UID: \"0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f\") " pod="openshift-console/console-859b85d6d-tltz7" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.594883 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f-service-ca\") pod \"console-859b85d6d-tltz7\" (UID: \"0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f\") " pod="openshift-console/console-859b85d6d-tltz7" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.596766 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f-console-oauth-config\") pod \"console-859b85d6d-tltz7\" (UID: \"0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f\") " pod="openshift-console/console-859b85d6d-tltz7" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.596916 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f-trusted-ca-bundle\") pod \"console-859b85d6d-tltz7\" (UID: \"0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f\") " pod="openshift-console/console-859b85d6d-tltz7" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.600453 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f-console-serving-cert\") pod \"console-859b85d6d-tltz7\" (UID: \"0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f\") " pod="openshift-console/console-859b85d6d-tltz7" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.600654 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce0d9e34-5f6c-4503-95a0-6a127c905bee-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-hkkkk\" (UID: \"ce0d9e34-5f6c-4503-95a0-6a127c905bee\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-hkkkk" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.614411 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-hkkkk" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.628674 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbwdg\" (UniqueName: \"kubernetes.io/projected/0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f-kube-api-access-pbwdg\") pod \"console-859b85d6d-tltz7\" (UID: \"0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f\") " pod="openshift-console/console-859b85d6d-tltz7" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.635264 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"63f25f4d-2a2d-48af-9764-27a0826495b0","Type":"ContainerStarted","Data":"f130b3f49389381e40dba4562632a8a2a998127c54ca91d55a6a3a04dc156108"} Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.643013 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"0efefaf4-58a1-488a-a9ec-703c46ce0c00","Type":"ContainerStarted","Data":"4f28d7080d3da91a9034b718992c7c2c9b70ab8bbfb155c9881ca054383d293c"} Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.667656 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-859b85d6d-tltz7" Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.679266 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/alertmanager-metric-storage-0"] Mar 09 18:42:29 crc kubenswrapper[4821]: W0309 18:42:29.711696 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa689f50_deca_4456_946b_edd730385d48.slice/crio-bfa487d6f461f3b091072b1c851fb925d5c1c39aa7688e3bef1f3b9b37c5890a WatchSource:0}: Error finding container bfa487d6f461f3b091072b1c851fb925d5c1c39aa7688e3bef1f3b9b37c5890a: Status 404 returned error can't find the container with id bfa487d6f461f3b091072b1c851fb925d5c1c39aa7688e3bef1f3b9b37c5890a Mar 09 18:42:29 crc kubenswrapper[4821]: I0309 18:42:29.817046 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:42:30 crc kubenswrapper[4821]: I0309 18:42:30.116191 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-859b85d6d-tltz7"] Mar 09 18:42:30 crc kubenswrapper[4821]: I0309 18:42:30.205093 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-hkkkk"] Mar 09 18:42:30 crc kubenswrapper[4821]: W0309 18:42:30.273897 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce0d9e34_5f6c_4503_95a0_6a127c905bee.slice/crio-0c8928567f3fa09fc80c20d01762fcf79f334087432231ce956b8382e355e190 WatchSource:0}: Error finding container 0c8928567f3fa09fc80c20d01762fcf79f334087432231ce956b8382e355e190: Status 404 returned error can't find the container with id 0c8928567f3fa09fc80c20d01762fcf79f334087432231ce956b8382e355e190 Mar 09 18:42:30 crc kubenswrapper[4821]: I0309 18:42:30.526483 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Mar 09 18:42:30 crc kubenswrapper[4821]: I0309 18:42:30.651437 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"fa689f50-deca-4456-946b-edd730385d48","Type":"ContainerStarted","Data":"bfa487d6f461f3b091072b1c851fb925d5c1c39aa7688e3bef1f3b9b37c5890a"} Mar 09 18:42:30 crc kubenswrapper[4821]: I0309 18:42:30.653670 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-hkkkk" event={"ID":"ce0d9e34-5f6c-4503-95a0-6a127c905bee","Type":"ContainerStarted","Data":"0c8928567f3fa09fc80c20d01762fcf79f334087432231ce956b8382e355e190"} Mar 09 18:42:30 crc kubenswrapper[4821]: I0309 18:42:30.656459 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-859b85d6d-tltz7" event={"ID":"0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f","Type":"ContainerStarted","Data":"e1f66d2d2f5b252c28ae43335e9a0c95d3866591ca8465875b756f5dc660de9b"} Mar 09 18:42:30 crc kubenswrapper[4821]: I0309 18:42:30.656491 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-859b85d6d-tltz7" event={"ID":"0a0c8b8c-30fa-41b0-8598-b8cfc6b1455f","Type":"ContainerStarted","Data":"4b008902db66438497baec50296869bced748af92434357f953c734de1ebbb65"} Mar 09 18:42:32 crc kubenswrapper[4821]: I0309 18:42:32.680432 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"cdbbf791-a981-4585-a944-863d0e1cc847","Type":"ContainerStarted","Data":"37024c368e30bc7b681e207c07c86b14cf67c05e2d442d370250d16c3d271046"} Mar 09 18:42:33 crc kubenswrapper[4821]: I0309 18:42:33.588505 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-859b85d6d-tltz7" podStartSLOduration=4.588482923 podStartE2EDuration="4.588482923s" podCreationTimestamp="2026-03-09 18:42:29 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:42:30.67920382 +0000 UTC m=+1087.840579676" watchObservedRunningTime="2026-03-09 18:42:33.588482923 +0000 UTC m=+1090.749858779" Mar 09 18:42:39 crc kubenswrapper[4821]: I0309 18:42:39.668885 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-859b85d6d-tltz7" Mar 09 18:42:39 crc kubenswrapper[4821]: I0309 18:42:39.670237 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-859b85d6d-tltz7" Mar 09 18:42:39 crc kubenswrapper[4821]: I0309 18:42:39.677201 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-859b85d6d-tltz7" Mar 09 18:42:39 crc kubenswrapper[4821]: I0309 18:42:39.746942 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-859b85d6d-tltz7" Mar 09 18:42:39 crc kubenswrapper[4821]: I0309 18:42:39.832898 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-584b867db4-vgt5b"] Mar 09 18:42:41 crc kubenswrapper[4821]: I0309 18:42:41.628552 4821 scope.go:117] "RemoveContainer" containerID="a38788292adee63f271a571b5894e4e71cf4388d4662128eb679d97c041ff1cf" Mar 09 18:42:43 crc kubenswrapper[4821]: E0309 18:42:43.855956 4821 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Mar 09 18:42:43 crc kubenswrapper[4821]: E0309 18:42:43.856292 4821 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Mar 09 18:42:43 crc kubenswrapper[4821]: E0309 
18:42:43.856469 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=watcher-kuttl-default],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pz5dc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000710000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_watcher-kuttl-default(0efefaf4-58a1-488a-a9ec-703c46ce0c00): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 09 18:42:43 crc kubenswrapper[4821]: E0309 18:42:43.857572 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="watcher-kuttl-default/kube-state-metrics-0" podUID="0efefaf4-58a1-488a-a9ec-703c46ce0c00" Mar 09 18:42:44 crc kubenswrapper[4821]: I0309 18:42:44.792297 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-hkkkk" event={"ID":"ce0d9e34-5f6c-4503-95a0-6a127c905bee","Type":"ContainerStarted","Data":"52b2675dd959561548ac34067e9fb978bf4d3cdbdd365ca07adbb8d3667ec224"} Mar 09 18:42:44 crc kubenswrapper[4821]: I0309 18:42:44.794178 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"e0cec899-aa83-4720-8f75-bc2fc5002a28","Type":"ContainerStarted","Data":"28bd7be8bfcbf2b3d6f7299b8f98ffa3c252ff28f5be98ea1e50c9174b6f4c5c"} Mar 09 18:42:44 crc kubenswrapper[4821]: I0309 18:42:44.795939 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"63f25f4d-2a2d-48af-9764-27a0826495b0","Type":"ContainerStarted","Data":"24eec4726ae7f56a5ad0de69f8279f1cad1361b22a61142c612e765a006ccf53"} Mar 09 18:42:44 crc kubenswrapper[4821]: I0309 18:42:44.796188 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/memcached-0" Mar 09 18:42:44 crc kubenswrapper[4821]: E0309 18:42:44.797663 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="watcher-kuttl-default/kube-state-metrics-0" podUID="0efefaf4-58a1-488a-a9ec-703c46ce0c00" Mar 09 18:42:44 crc kubenswrapper[4821]: I0309 18:42:44.813990 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-hkkkk" podStartSLOduration=3.8991673049999998 podStartE2EDuration="16.813972346s" podCreationTimestamp="2026-03-09 18:42:28 +0000 UTC" firstStartedPulling="2026-03-09 18:42:30.279701617 +0000 UTC m=+1087.441077473" lastFinishedPulling="2026-03-09 18:42:43.194506658 +0000 UTC m=+1100.355882514" observedRunningTime="2026-03-09 18:42:44.811253372 +0000 UTC m=+1101.972629238" watchObservedRunningTime="2026-03-09 18:42:44.813972346 +0000 UTC m=+1101.975348202" Mar 09 18:42:44 crc kubenswrapper[4821]: I0309 18:42:44.878960 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/memcached-0" podStartSLOduration=3.183127989 podStartE2EDuration="17.878943348s" 
podCreationTimestamp="2026-03-09 18:42:27 +0000 UTC" firstStartedPulling="2026-03-09 18:42:28.689615528 +0000 UTC m=+1085.850991384" lastFinishedPulling="2026-03-09 18:42:43.385430877 +0000 UTC m=+1100.546806743" observedRunningTime="2026-03-09 18:42:44.875081573 +0000 UTC m=+1102.036457429" watchObservedRunningTime="2026-03-09 18:42:44.878943348 +0000 UTC m=+1102.040319204" Mar 09 18:42:45 crc kubenswrapper[4821]: I0309 18:42:45.804616 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"ace06b27-8092-4676-9bae-4df7c1044b98","Type":"ContainerStarted","Data":"434f5b41fdd23c8e5398b4cd3acd47acb695c64125051932930cf77df2648f1a"} Mar 09 18:42:45 crc kubenswrapper[4821]: I0309 18:42:45.806869 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"b4cf48ce-38c9-4dd4-b712-311a92dd29b6","Type":"ContainerStarted","Data":"4552822487d03a0b1d6b871c99a056521b93a3edd1df345423b973c8de8907fd"} Mar 09 18:42:46 crc kubenswrapper[4821]: I0309 18:42:46.817381 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"fa689f50-deca-4456-946b-edd730385d48","Type":"ContainerStarted","Data":"5641cbeca2f9ab9325d7c972e55599fc3a1f78cc0b27ae674453add92b6cdf0e"} Mar 09 18:42:46 crc kubenswrapper[4821]: I0309 18:42:46.821516 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"cdbbf791-a981-4585-a944-863d0e1cc847","Type":"ContainerStarted","Data":"5ea763e93eadf760795016d19da9b0593c6a7cbe992c6c7bf8dd2269b94a11a0"} Mar 09 18:42:47 crc kubenswrapper[4821]: I0309 18:42:47.838943 4821 generic.go:334] "Generic (PLEG): container finished" podID="e0cec899-aa83-4720-8f75-bc2fc5002a28" containerID="28bd7be8bfcbf2b3d6f7299b8f98ffa3c252ff28f5be98ea1e50c9174b6f4c5c" exitCode=0 Mar 09 18:42:47 crc kubenswrapper[4821]: I0309 18:42:47.839032 
4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"e0cec899-aa83-4720-8f75-bc2fc5002a28","Type":"ContainerDied","Data":"28bd7be8bfcbf2b3d6f7299b8f98ffa3c252ff28f5be98ea1e50c9174b6f4c5c"} Mar 09 18:42:48 crc kubenswrapper[4821]: I0309 18:42:48.849207 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"e0cec899-aa83-4720-8f75-bc2fc5002a28","Type":"ContainerStarted","Data":"8072c50d2f3ac46b94254ada64d3a769b285bde1ff5f8be5bca9644c41b6da62"} Mar 09 18:42:48 crc kubenswrapper[4821]: I0309 18:42:48.879262 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/openstack-galera-0" podStartSLOduration=7.8833583990000005 podStartE2EDuration="22.879241435s" podCreationTimestamp="2026-03-09 18:42:26 +0000 UTC" firstStartedPulling="2026-03-09 18:42:28.287699739 +0000 UTC m=+1085.449075595" lastFinishedPulling="2026-03-09 18:42:43.283582765 +0000 UTC m=+1100.444958631" observedRunningTime="2026-03-09 18:42:48.864726931 +0000 UTC m=+1106.026102787" watchObservedRunningTime="2026-03-09 18:42:48.879241435 +0000 UTC m=+1106.040617291" Mar 09 18:42:53 crc kubenswrapper[4821]: I0309 18:42:53.026555 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/memcached-0" Mar 09 18:42:53 crc kubenswrapper[4821]: I0309 18:42:53.888269 4821 generic.go:334] "Generic (PLEG): container finished" podID="fa689f50-deca-4456-946b-edd730385d48" containerID="5641cbeca2f9ab9325d7c972e55599fc3a1f78cc0b27ae674453add92b6cdf0e" exitCode=0 Mar 09 18:42:53 crc kubenswrapper[4821]: I0309 18:42:53.888397 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"fa689f50-deca-4456-946b-edd730385d48","Type":"ContainerDied","Data":"5641cbeca2f9ab9325d7c972e55599fc3a1f78cc0b27ae674453add92b6cdf0e"} Mar 09 18:42:53 crc 
kubenswrapper[4821]: I0309 18:42:53.891642 4821 generic.go:334] "Generic (PLEG): container finished" podID="cdbbf791-a981-4585-a944-863d0e1cc847" containerID="5ea763e93eadf760795016d19da9b0593c6a7cbe992c6c7bf8dd2269b94a11a0" exitCode=0 Mar 09 18:42:53 crc kubenswrapper[4821]: I0309 18:42:53.891691 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"cdbbf791-a981-4585-a944-863d0e1cc847","Type":"ContainerDied","Data":"5ea763e93eadf760795016d19da9b0593c6a7cbe992c6c7bf8dd2269b94a11a0"} Mar 09 18:42:56 crc kubenswrapper[4821]: I0309 18:42:56.920597 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"fa689f50-deca-4456-946b-edd730385d48","Type":"ContainerStarted","Data":"49230a293454061825af16f63bd0d280bbe3891694386e468238d632b478c238"} Mar 09 18:42:57 crc kubenswrapper[4821]: I0309 18:42:57.671870 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/openstack-galera-0" Mar 09 18:42:57 crc kubenswrapper[4821]: I0309 18:42:57.672223 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/openstack-galera-0" Mar 09 18:42:57 crc kubenswrapper[4821]: I0309 18:42:57.760170 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/openstack-galera-0" Mar 09 18:42:58 crc kubenswrapper[4821]: I0309 18:42:58.008723 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/openstack-galera-0" Mar 09 18:42:58 crc kubenswrapper[4821]: I0309 18:42:58.941305 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"fa689f50-deca-4456-946b-edd730385d48","Type":"ContainerStarted","Data":"b11ddbb82f1b3d2930beb9bb7ec64961dab4ab081c55afbb727f854a124d3642"} Mar 09 18:42:58 crc kubenswrapper[4821]: I0309 
18:42:58.968581 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/alertmanager-metric-storage-0" podStartSLOduration=4.480599071 podStartE2EDuration="30.968560042s" podCreationTimestamp="2026-03-09 18:42:28 +0000 UTC" firstStartedPulling="2026-03-09 18:42:29.720841793 +0000 UTC m=+1086.882217649" lastFinishedPulling="2026-03-09 18:42:56.208802764 +0000 UTC m=+1113.370178620" observedRunningTime="2026-03-09 18:42:58.960604046 +0000 UTC m=+1116.121979902" watchObservedRunningTime="2026-03-09 18:42:58.968560042 +0000 UTC m=+1116.129935918" Mar 09 18:42:59 crc kubenswrapper[4821]: I0309 18:42:59.031505 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Mar 09 18:42:59 crc kubenswrapper[4821]: I0309 18:42:59.037351 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Mar 09 18:42:59 crc kubenswrapper[4821]: I0309 18:42:59.913814 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 09 18:42:59 crc kubenswrapper[4821]: I0309 18:42:59.913869 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 09 18:43:00 crc kubenswrapper[4821]: I0309 18:43:00.955915 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" 
event={"ID":"0efefaf4-58a1-488a-a9ec-703c46ce0c00","Type":"ContainerStarted","Data":"89803e16529ec071895930a757ab9f0a3895a84f30855c2d4a062a921f76a4c4"}
Mar 09 18:43:00 crc kubenswrapper[4821]: I0309 18:43:00.956455 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/kube-state-metrics-0"
Mar 09 18:43:00 crc kubenswrapper[4821]: I0309 18:43:00.958493 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"cdbbf791-a981-4585-a944-863d0e1cc847","Type":"ContainerStarted","Data":"7eb24db081b703d9c289b75b1749c9b9d0e92c4171eb6f6aee81d7d2b9cd32aa"}
Mar 09 18:43:00 crc kubenswrapper[4821]: I0309 18:43:00.979119 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/kube-state-metrics-0" podStartSLOduration=2.4644401289999998 podStartE2EDuration="33.979100807s" podCreationTimestamp="2026-03-09 18:42:27 +0000 UTC" firstStartedPulling="2026-03-09 18:42:28.835817968 +0000 UTC m=+1085.997193814" lastFinishedPulling="2026-03-09 18:43:00.350478636 +0000 UTC m=+1117.511854492" observedRunningTime="2026-03-09 18:43:00.973225307 +0000 UTC m=+1118.134601163" watchObservedRunningTime="2026-03-09 18:43:00.979100807 +0000 UTC m=+1118.140476673"
Mar 09 18:43:02 crc kubenswrapper[4821]: I0309 18:43:02.976479 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"cdbbf791-a981-4585-a944-863d0e1cc847","Type":"ContainerStarted","Data":"9b80a265cda94da3dd0b207ba9770ed04a1cee94b40887deb99e7551a06f983f"}
Mar 09 18:43:04 crc kubenswrapper[4821]: I0309 18:43:04.882981 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-584b867db4-vgt5b" podUID="a749c63c-1f04-4955-9a98-fabbf677badc" containerName="console" containerID="cri-o://a58088a31a02ae9a84fcbb76e3efaafda09fcf01ee7b543d815bcfe25bfe5708" gracePeriod=15
Mar 09 18:43:06 crc kubenswrapper[4821]: I0309 18:43:06.000604 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-584b867db4-vgt5b_a749c63c-1f04-4955-9a98-fabbf677badc/console/0.log"
Mar 09 18:43:06 crc kubenswrapper[4821]: I0309 18:43:06.000862 4821 generic.go:334] "Generic (PLEG): container finished" podID="a749c63c-1f04-4955-9a98-fabbf677badc" containerID="a58088a31a02ae9a84fcbb76e3efaafda09fcf01ee7b543d815bcfe25bfe5708" exitCode=2
Mar 09 18:43:06 crc kubenswrapper[4821]: I0309 18:43:06.000886 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-584b867db4-vgt5b" event={"ID":"a749c63c-1f04-4955-9a98-fabbf677badc","Type":"ContainerDied","Data":"a58088a31a02ae9a84fcbb76e3efaafda09fcf01ee7b543d815bcfe25bfe5708"}
Mar 09 18:43:06 crc kubenswrapper[4821]: I0309 18:43:06.401493 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/root-account-create-update-h56pf"]
Mar 09 18:43:06 crc kubenswrapper[4821]: I0309 18:43:06.402624 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/root-account-create-update-h56pf"
Mar 09 18:43:06 crc kubenswrapper[4821]: I0309 18:43:06.404738 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"openstack-mariadb-root-db-secret"
Mar 09 18:43:06 crc kubenswrapper[4821]: I0309 18:43:06.457173 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/root-account-create-update-h56pf"]
Mar 09 18:43:06 crc kubenswrapper[4821]: I0309 18:43:06.511870 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2l5z\" (UniqueName: \"kubernetes.io/projected/36dce1ce-dd03-42be-a792-5e198c405b1b-kube-api-access-c2l5z\") pod \"root-account-create-update-h56pf\" (UID: \"36dce1ce-dd03-42be-a792-5e198c405b1b\") " pod="watcher-kuttl-default/root-account-create-update-h56pf"
Mar 09 18:43:06 crc kubenswrapper[4821]: I0309 18:43:06.511939 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36dce1ce-dd03-42be-a792-5e198c405b1b-operator-scripts\") pod \"root-account-create-update-h56pf\" (UID: \"36dce1ce-dd03-42be-a792-5e198c405b1b\") " pod="watcher-kuttl-default/root-account-create-update-h56pf"
Mar 09 18:43:06 crc kubenswrapper[4821]: I0309 18:43:06.613760 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2l5z\" (UniqueName: \"kubernetes.io/projected/36dce1ce-dd03-42be-a792-5e198c405b1b-kube-api-access-c2l5z\") pod \"root-account-create-update-h56pf\" (UID: \"36dce1ce-dd03-42be-a792-5e198c405b1b\") " pod="watcher-kuttl-default/root-account-create-update-h56pf"
Mar 09 18:43:06 crc kubenswrapper[4821]: I0309 18:43:06.613859 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36dce1ce-dd03-42be-a792-5e198c405b1b-operator-scripts\") pod \"root-account-create-update-h56pf\" (UID: \"36dce1ce-dd03-42be-a792-5e198c405b1b\") " pod="watcher-kuttl-default/root-account-create-update-h56pf"
Mar 09 18:43:06 crc kubenswrapper[4821]: I0309 18:43:06.615632 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36dce1ce-dd03-42be-a792-5e198c405b1b-operator-scripts\") pod \"root-account-create-update-h56pf\" (UID: \"36dce1ce-dd03-42be-a792-5e198c405b1b\") " pod="watcher-kuttl-default/root-account-create-update-h56pf"
Mar 09 18:43:06 crc kubenswrapper[4821]: I0309 18:43:06.635510 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2l5z\" (UniqueName: \"kubernetes.io/projected/36dce1ce-dd03-42be-a792-5e198c405b1b-kube-api-access-c2l5z\") pod \"root-account-create-update-h56pf\" (UID: \"36dce1ce-dd03-42be-a792-5e198c405b1b\") " pod="watcher-kuttl-default/root-account-create-update-h56pf"
Mar 09 18:43:06 crc kubenswrapper[4821]: I0309 18:43:06.736904 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/root-account-create-update-h56pf"
Mar 09 18:43:07 crc kubenswrapper[4821]: I0309 18:43:07.527815 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-db-create-vtbsg"]
Mar 09 18:43:07 crc kubenswrapper[4821]: I0309 18:43:07.530207 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-vtbsg"
Mar 09 18:43:07 crc kubenswrapper[4821]: I0309 18:43:07.537535 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-6d8a-account-create-update-vgzbq"]
Mar 09 18:43:07 crc kubenswrapper[4821]: I0309 18:43:07.538685 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-6d8a-account-create-update-vgzbq"
Mar 09 18:43:07 crc kubenswrapper[4821]: I0309 18:43:07.541478 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-db-secret"
Mar 09 18:43:07 crc kubenswrapper[4821]: I0309 18:43:07.562693 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-create-vtbsg"]
Mar 09 18:43:07 crc kubenswrapper[4821]: I0309 18:43:07.562736 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-6d8a-account-create-update-vgzbq"]
Mar 09 18:43:07 crc kubenswrapper[4821]: I0309 18:43:07.630227 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swjbg\" (UniqueName: \"kubernetes.io/projected/5ac7235a-20f5-458c-9d93-e7221cd8b83f-kube-api-access-swjbg\") pod \"keystone-6d8a-account-create-update-vgzbq\" (UID: \"5ac7235a-20f5-458c-9d93-e7221cd8b83f\") " pod="watcher-kuttl-default/keystone-6d8a-account-create-update-vgzbq"
Mar 09 18:43:07 crc kubenswrapper[4821]: I0309 18:43:07.630347 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cd3f65f-04e8-4e03-916b-9fa01bed65f5-operator-scripts\") pod \"keystone-db-create-vtbsg\" (UID: \"3cd3f65f-04e8-4e03-916b-9fa01bed65f5\") " pod="watcher-kuttl-default/keystone-db-create-vtbsg"
Mar 09 18:43:07 crc kubenswrapper[4821]: I0309 18:43:07.630463 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ac7235a-20f5-458c-9d93-e7221cd8b83f-operator-scripts\") pod \"keystone-6d8a-account-create-update-vgzbq\" (UID: \"5ac7235a-20f5-458c-9d93-e7221cd8b83f\") " pod="watcher-kuttl-default/keystone-6d8a-account-create-update-vgzbq"
Mar 09 18:43:07 crc kubenswrapper[4821]: I0309 18:43:07.631095 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdvkd\" (UniqueName: \"kubernetes.io/projected/3cd3f65f-04e8-4e03-916b-9fa01bed65f5-kube-api-access-fdvkd\") pod \"keystone-db-create-vtbsg\" (UID: \"3cd3f65f-04e8-4e03-916b-9fa01bed65f5\") " pod="watcher-kuttl-default/keystone-db-create-vtbsg"
Mar 09 18:43:07 crc kubenswrapper[4821]: I0309 18:43:07.732675 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swjbg\" (UniqueName: \"kubernetes.io/projected/5ac7235a-20f5-458c-9d93-e7221cd8b83f-kube-api-access-swjbg\") pod \"keystone-6d8a-account-create-update-vgzbq\" (UID: \"5ac7235a-20f5-458c-9d93-e7221cd8b83f\") " pod="watcher-kuttl-default/keystone-6d8a-account-create-update-vgzbq"
Mar 09 18:43:07 crc kubenswrapper[4821]: I0309 18:43:07.732811 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cd3f65f-04e8-4e03-916b-9fa01bed65f5-operator-scripts\") pod \"keystone-db-create-vtbsg\" (UID: \"3cd3f65f-04e8-4e03-916b-9fa01bed65f5\") " pod="watcher-kuttl-default/keystone-db-create-vtbsg"
Mar 09 18:43:07 crc kubenswrapper[4821]: I0309 18:43:07.732885 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ac7235a-20f5-458c-9d93-e7221cd8b83f-operator-scripts\") pod \"keystone-6d8a-account-create-update-vgzbq\" (UID: \"5ac7235a-20f5-458c-9d93-e7221cd8b83f\") " pod="watcher-kuttl-default/keystone-6d8a-account-create-update-vgzbq"
Mar 09 18:43:07 crc kubenswrapper[4821]: I0309 18:43:07.732957 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdvkd\" (UniqueName: \"kubernetes.io/projected/3cd3f65f-04e8-4e03-916b-9fa01bed65f5-kube-api-access-fdvkd\") pod \"keystone-db-create-vtbsg\" (UID: \"3cd3f65f-04e8-4e03-916b-9fa01bed65f5\") " pod="watcher-kuttl-default/keystone-db-create-vtbsg"
Mar 09 18:43:07 crc kubenswrapper[4821]: I0309 18:43:07.734020 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cd3f65f-04e8-4e03-916b-9fa01bed65f5-operator-scripts\") pod \"keystone-db-create-vtbsg\" (UID: \"3cd3f65f-04e8-4e03-916b-9fa01bed65f5\") " pod="watcher-kuttl-default/keystone-db-create-vtbsg"
Mar 09 18:43:07 crc kubenswrapper[4821]: I0309 18:43:07.734372 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ac7235a-20f5-458c-9d93-e7221cd8b83f-operator-scripts\") pod \"keystone-6d8a-account-create-update-vgzbq\" (UID: \"5ac7235a-20f5-458c-9d93-e7221cd8b83f\") " pod="watcher-kuttl-default/keystone-6d8a-account-create-update-vgzbq"
Mar 09 18:43:07 crc kubenswrapper[4821]: I0309 18:43:07.754496 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdvkd\" (UniqueName: \"kubernetes.io/projected/3cd3f65f-04e8-4e03-916b-9fa01bed65f5-kube-api-access-fdvkd\") pod \"keystone-db-create-vtbsg\" (UID: \"3cd3f65f-04e8-4e03-916b-9fa01bed65f5\") " pod="watcher-kuttl-default/keystone-db-create-vtbsg"
Mar 09 18:43:07 crc kubenswrapper[4821]: I0309 18:43:07.754552 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swjbg\" (UniqueName: \"kubernetes.io/projected/5ac7235a-20f5-458c-9d93-e7221cd8b83f-kube-api-access-swjbg\") pod \"keystone-6d8a-account-create-update-vgzbq\" (UID: \"5ac7235a-20f5-458c-9d93-e7221cd8b83f\") " pod="watcher-kuttl-default/keystone-6d8a-account-create-update-vgzbq"
Mar 09 18:43:07 crc kubenswrapper[4821]: I0309 18:43:07.852540 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-vtbsg"
Mar 09 18:43:07 crc kubenswrapper[4821]: I0309 18:43:07.859774 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-6d8a-account-create-update-vgzbq"
Mar 09 18:43:08 crc kubenswrapper[4821]: I0309 18:43:08.320803 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/kube-state-metrics-0"
Mar 09 18:43:09 crc kubenswrapper[4821]: I0309 18:43:09.230954 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-create-vtbsg"]
Mar 09 18:43:09 crc kubenswrapper[4821]: W0309 18:43:09.244189 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3cd3f65f_04e8_4e03_916b_9fa01bed65f5.slice/crio-323e720b99a0ed4537df56c1c00578f0d5e607501317e59695d6cd0f938bb402 WatchSource:0}: Error finding container 323e720b99a0ed4537df56c1c00578f0d5e607501317e59695d6cd0f938bb402: Status 404 returned error can't find the container with id 323e720b99a0ed4537df56c1c00578f0d5e607501317e59695d6cd0f938bb402
Mar 09 18:43:09 crc kubenswrapper[4821]: I0309 18:43:09.257064 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/root-account-create-update-h56pf"]
Mar 09 18:43:09 crc kubenswrapper[4821]: I0309 18:43:09.324440 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-6d8a-account-create-update-vgzbq"]
Mar 09 18:43:09 crc kubenswrapper[4821]: W0309 18:43:09.329800 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ac7235a_20f5_458c_9d93_e7221cd8b83f.slice/crio-71da46d67b0205a3e7944f14932b72936b76daf47e3e3ea3428701ad37cb0cb7 WatchSource:0}: Error finding container 71da46d67b0205a3e7944f14932b72936b76daf47e3e3ea3428701ad37cb0cb7: Status 404 returned error can't find the container with id 71da46d67b0205a3e7944f14932b72936b76daf47e3e3ea3428701ad37cb0cb7
Mar 09 18:43:09 crc kubenswrapper[4821]: I0309 18:43:09.580914 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-584b867db4-vgt5b_a749c63c-1f04-4955-9a98-fabbf677badc/console/0.log"
Mar 09 18:43:09 crc kubenswrapper[4821]: I0309 18:43:09.581429 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-584b867db4-vgt5b"
Mar 09 18:43:09 crc kubenswrapper[4821]: I0309 18:43:09.669692 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a749c63c-1f04-4955-9a98-fabbf677badc-oauth-serving-cert\") pod \"a749c63c-1f04-4955-9a98-fabbf677badc\" (UID: \"a749c63c-1f04-4955-9a98-fabbf677badc\") "
Mar 09 18:43:09 crc kubenswrapper[4821]: I0309 18:43:09.669754 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a749c63c-1f04-4955-9a98-fabbf677badc-console-config\") pod \"a749c63c-1f04-4955-9a98-fabbf677badc\" (UID: \"a749c63c-1f04-4955-9a98-fabbf677badc\") "
Mar 09 18:43:09 crc kubenswrapper[4821]: I0309 18:43:09.669793 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a749c63c-1f04-4955-9a98-fabbf677badc-service-ca\") pod \"a749c63c-1f04-4955-9a98-fabbf677badc\" (UID: \"a749c63c-1f04-4955-9a98-fabbf677badc\") "
Mar 09 18:43:09 crc kubenswrapper[4821]: I0309 18:43:09.669866 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a749c63c-1f04-4955-9a98-fabbf677badc-console-oauth-config\") pod \"a749c63c-1f04-4955-9a98-fabbf677badc\" (UID: \"a749c63c-1f04-4955-9a98-fabbf677badc\") "
Mar 09 18:43:09 crc kubenswrapper[4821]: I0309 18:43:09.669924 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a749c63c-1f04-4955-9a98-fabbf677badc-console-serving-cert\") pod \"a749c63c-1f04-4955-9a98-fabbf677badc\" (UID: \"a749c63c-1f04-4955-9a98-fabbf677badc\") "
Mar 09 18:43:09 crc kubenswrapper[4821]: I0309 18:43:09.669955 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a749c63c-1f04-4955-9a98-fabbf677badc-trusted-ca-bundle\") pod \"a749c63c-1f04-4955-9a98-fabbf677badc\" (UID: \"a749c63c-1f04-4955-9a98-fabbf677badc\") "
Mar 09 18:43:09 crc kubenswrapper[4821]: I0309 18:43:09.670011 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq8gn\" (UniqueName: \"kubernetes.io/projected/a749c63c-1f04-4955-9a98-fabbf677badc-kube-api-access-xq8gn\") pod \"a749c63c-1f04-4955-9a98-fabbf677badc\" (UID: \"a749c63c-1f04-4955-9a98-fabbf677badc\") "
Mar 09 18:43:09 crc kubenswrapper[4821]: I0309 18:43:09.670445 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a749c63c-1f04-4955-9a98-fabbf677badc-console-config" (OuterVolumeSpecName: "console-config") pod "a749c63c-1f04-4955-9a98-fabbf677badc" (UID: "a749c63c-1f04-4955-9a98-fabbf677badc"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 18:43:09 crc kubenswrapper[4821]: I0309 18:43:09.670634 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a749c63c-1f04-4955-9a98-fabbf677badc-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a749c63c-1f04-4955-9a98-fabbf677badc" (UID: "a749c63c-1f04-4955-9a98-fabbf677badc"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 18:43:09 crc kubenswrapper[4821]: I0309 18:43:09.670934 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a749c63c-1f04-4955-9a98-fabbf677badc-service-ca" (OuterVolumeSpecName: "service-ca") pod "a749c63c-1f04-4955-9a98-fabbf677badc" (UID: "a749c63c-1f04-4955-9a98-fabbf677badc"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 18:43:09 crc kubenswrapper[4821]: I0309 18:43:09.671227 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a749c63c-1f04-4955-9a98-fabbf677badc-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a749c63c-1f04-4955-9a98-fabbf677badc" (UID: "a749c63c-1f04-4955-9a98-fabbf677badc"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 18:43:09 crc kubenswrapper[4821]: I0309 18:43:09.674893 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a749c63c-1f04-4955-9a98-fabbf677badc-kube-api-access-xq8gn" (OuterVolumeSpecName: "kube-api-access-xq8gn") pod "a749c63c-1f04-4955-9a98-fabbf677badc" (UID: "a749c63c-1f04-4955-9a98-fabbf677badc"). InnerVolumeSpecName "kube-api-access-xq8gn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:43:09 crc kubenswrapper[4821]: I0309 18:43:09.675079 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a749c63c-1f04-4955-9a98-fabbf677badc-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a749c63c-1f04-4955-9a98-fabbf677badc" (UID: "a749c63c-1f04-4955-9a98-fabbf677badc"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 18:43:09 crc kubenswrapper[4821]: I0309 18:43:09.675297 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a749c63c-1f04-4955-9a98-fabbf677badc-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a749c63c-1f04-4955-9a98-fabbf677badc" (UID: "a749c63c-1f04-4955-9a98-fabbf677badc"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 18:43:09 crc kubenswrapper[4821]: I0309 18:43:09.772309 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq8gn\" (UniqueName: \"kubernetes.io/projected/a749c63c-1f04-4955-9a98-fabbf677badc-kube-api-access-xq8gn\") on node \"crc\" DevicePath \"\""
Mar 09 18:43:09 crc kubenswrapper[4821]: I0309 18:43:09.772367 4821 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a749c63c-1f04-4955-9a98-fabbf677badc-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 09 18:43:09 crc kubenswrapper[4821]: I0309 18:43:09.772379 4821 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a749c63c-1f04-4955-9a98-fabbf677badc-console-config\") on node \"crc\" DevicePath \"\""
Mar 09 18:43:09 crc kubenswrapper[4821]: I0309 18:43:09.772390 4821 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a749c63c-1f04-4955-9a98-fabbf677badc-service-ca\") on node \"crc\" DevicePath \"\""
Mar 09 18:43:09 crc kubenswrapper[4821]: I0309 18:43:09.772403 4821 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a749c63c-1f04-4955-9a98-fabbf677badc-console-oauth-config\") on node \"crc\" DevicePath \"\""
Mar 09 18:43:09 crc kubenswrapper[4821]: I0309 18:43:09.772415 4821 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a749c63c-1f04-4955-9a98-fabbf677badc-console-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 09 18:43:09 crc kubenswrapper[4821]: I0309 18:43:09.772425 4821 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a749c63c-1f04-4955-9a98-fabbf677badc-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 18:43:10 crc kubenswrapper[4821]: I0309 18:43:10.034627 4821 generic.go:334] "Generic (PLEG): container finished" podID="3cd3f65f-04e8-4e03-916b-9fa01bed65f5" containerID="a3095a86b96a71356ce2784b435599b07345066b982169670a5231fd6c82dea2" exitCode=0
Mar 09 18:43:10 crc kubenswrapper[4821]: I0309 18:43:10.034705 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-create-vtbsg" event={"ID":"3cd3f65f-04e8-4e03-916b-9fa01bed65f5","Type":"ContainerDied","Data":"a3095a86b96a71356ce2784b435599b07345066b982169670a5231fd6c82dea2"}
Mar 09 18:43:10 crc kubenswrapper[4821]: I0309 18:43:10.034751 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-create-vtbsg" event={"ID":"3cd3f65f-04e8-4e03-916b-9fa01bed65f5","Type":"ContainerStarted","Data":"323e720b99a0ed4537df56c1c00578f0d5e607501317e59695d6cd0f938bb402"}
Mar 09 18:43:10 crc kubenswrapper[4821]: I0309 18:43:10.038764 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"cdbbf791-a981-4585-a944-863d0e1cc847","Type":"ContainerStarted","Data":"84f1c6cb337c6a947e82328fedd8b4d7ff224678d6d42cf3ce52ec54db167910"}
Mar 09 18:43:10 crc kubenswrapper[4821]: I0309 18:43:10.042003 4821 generic.go:334] "Generic (PLEG): container finished" podID="5ac7235a-20f5-458c-9d93-e7221cd8b83f" containerID="cab68cb26a7cfe543b61c64f1c12db11095115069ae7e5bdd48b1d602b6ab924" exitCode=0
Mar 09 18:43:10 crc kubenswrapper[4821]: I0309 18:43:10.042043 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-6d8a-account-create-update-vgzbq" event={"ID":"5ac7235a-20f5-458c-9d93-e7221cd8b83f","Type":"ContainerDied","Data":"cab68cb26a7cfe543b61c64f1c12db11095115069ae7e5bdd48b1d602b6ab924"}
Mar 09 18:43:10 crc kubenswrapper[4821]: I0309 18:43:10.042058 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-6d8a-account-create-update-vgzbq" event={"ID":"5ac7235a-20f5-458c-9d93-e7221cd8b83f","Type":"ContainerStarted","Data":"71da46d67b0205a3e7944f14932b72936b76daf47e3e3ea3428701ad37cb0cb7"}
Mar 09 18:43:10 crc kubenswrapper[4821]: I0309 18:43:10.044065 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-584b867db4-vgt5b_a749c63c-1f04-4955-9a98-fabbf677badc/console/0.log"
Mar 09 18:43:10 crc kubenswrapper[4821]: I0309 18:43:10.044252 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-584b867db4-vgt5b"
Mar 09 18:43:10 crc kubenswrapper[4821]: I0309 18:43:10.044352 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-584b867db4-vgt5b" event={"ID":"a749c63c-1f04-4955-9a98-fabbf677badc","Type":"ContainerDied","Data":"cccee2036e254ea8851aacfea71a77754083dedad4b62e2ff18b1a5439176372"}
Mar 09 18:43:10 crc kubenswrapper[4821]: I0309 18:43:10.044439 4821 scope.go:117] "RemoveContainer" containerID="a58088a31a02ae9a84fcbb76e3efaafda09fcf01ee7b543d815bcfe25bfe5708"
Mar 09 18:43:10 crc kubenswrapper[4821]: I0309 18:43:10.052463 4821 generic.go:334] "Generic (PLEG): container finished" podID="36dce1ce-dd03-42be-a792-5e198c405b1b" containerID="1bc991cbb462326a9bcd11d53fd2157a64cd2acfd2aa68d90c073c9897d74650" exitCode=0
Mar 09 18:43:10 crc kubenswrapper[4821]: I0309 18:43:10.052522 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/root-account-create-update-h56pf" event={"ID":"36dce1ce-dd03-42be-a792-5e198c405b1b","Type":"ContainerDied","Data":"1bc991cbb462326a9bcd11d53fd2157a64cd2acfd2aa68d90c073c9897d74650"}
Mar 09 18:43:10 crc kubenswrapper[4821]: I0309 18:43:10.052556 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/root-account-create-update-h56pf" event={"ID":"36dce1ce-dd03-42be-a792-5e198c405b1b","Type":"ContainerStarted","Data":"8047c9c070110ac349761999a290de39b04db1ec19d5404cede8f322f5757876"}
Mar 09 18:43:10 crc kubenswrapper[4821]: I0309 18:43:10.102063 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/prometheus-metric-storage-0" podStartSLOduration=4.786056894 podStartE2EDuration="42.102043293s" podCreationTimestamp="2026-03-09 18:42:28 +0000 UTC" firstStartedPulling="2026-03-09 18:42:32.320445024 +0000 UTC m=+1089.481820920" lastFinishedPulling="2026-03-09 18:43:09.636431463 +0000 UTC m=+1126.797807319" observedRunningTime="2026-03-09 18:43:10.099844063 +0000 UTC m=+1127.261219929" watchObservedRunningTime="2026-03-09 18:43:10.102043293 +0000 UTC m=+1127.263419139"
Mar 09 18:43:10 crc kubenswrapper[4821]: I0309 18:43:10.117499 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-584b867db4-vgt5b"]
Mar 09 18:43:10 crc kubenswrapper[4821]: I0309 18:43:10.127688 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-584b867db4-vgt5b"]
Mar 09 18:43:11 crc kubenswrapper[4821]: I0309 18:43:11.429765 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-6d8a-account-create-update-vgzbq"
Mar 09 18:43:11 crc kubenswrapper[4821]: I0309 18:43:11.502866 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ac7235a-20f5-458c-9d93-e7221cd8b83f-operator-scripts\") pod \"5ac7235a-20f5-458c-9d93-e7221cd8b83f\" (UID: \"5ac7235a-20f5-458c-9d93-e7221cd8b83f\") "
Mar 09 18:43:11 crc kubenswrapper[4821]: I0309 18:43:11.502940 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swjbg\" (UniqueName: \"kubernetes.io/projected/5ac7235a-20f5-458c-9d93-e7221cd8b83f-kube-api-access-swjbg\") pod \"5ac7235a-20f5-458c-9d93-e7221cd8b83f\" (UID: \"5ac7235a-20f5-458c-9d93-e7221cd8b83f\") "
Mar 09 18:43:11 crc kubenswrapper[4821]: I0309 18:43:11.503921 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ac7235a-20f5-458c-9d93-e7221cd8b83f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ac7235a-20f5-458c-9d93-e7221cd8b83f" (UID: "5ac7235a-20f5-458c-9d93-e7221cd8b83f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 18:43:11 crc kubenswrapper[4821]: I0309 18:43:11.508911 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-vtbsg"
Mar 09 18:43:11 crc kubenswrapper[4821]: I0309 18:43:11.508956 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ac7235a-20f5-458c-9d93-e7221cd8b83f-kube-api-access-swjbg" (OuterVolumeSpecName: "kube-api-access-swjbg") pod "5ac7235a-20f5-458c-9d93-e7221cd8b83f" (UID: "5ac7235a-20f5-458c-9d93-e7221cd8b83f"). InnerVolumeSpecName "kube-api-access-swjbg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:43:11 crc kubenswrapper[4821]: I0309 18:43:11.543842 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/root-account-create-update-h56pf"
Mar 09 18:43:11 crc kubenswrapper[4821]: I0309 18:43:11.566598 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a749c63c-1f04-4955-9a98-fabbf677badc" path="/var/lib/kubelet/pods/a749c63c-1f04-4955-9a98-fabbf677badc/volumes"
Mar 09 18:43:11 crc kubenswrapper[4821]: I0309 18:43:11.604530 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdvkd\" (UniqueName: \"kubernetes.io/projected/3cd3f65f-04e8-4e03-916b-9fa01bed65f5-kube-api-access-fdvkd\") pod \"3cd3f65f-04e8-4e03-916b-9fa01bed65f5\" (UID: \"3cd3f65f-04e8-4e03-916b-9fa01bed65f5\") "
Mar 09 18:43:11 crc kubenswrapper[4821]: I0309 18:43:11.604792 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2l5z\" (UniqueName: \"kubernetes.io/projected/36dce1ce-dd03-42be-a792-5e198c405b1b-kube-api-access-c2l5z\") pod \"36dce1ce-dd03-42be-a792-5e198c405b1b\" (UID: \"36dce1ce-dd03-42be-a792-5e198c405b1b\") "
Mar 09 18:43:11 crc kubenswrapper[4821]: I0309 18:43:11.604935 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cd3f65f-04e8-4e03-916b-9fa01bed65f5-operator-scripts\") pod \"3cd3f65f-04e8-4e03-916b-9fa01bed65f5\" (UID: \"3cd3f65f-04e8-4e03-916b-9fa01bed65f5\") "
Mar 09 18:43:11 crc kubenswrapper[4821]: I0309 18:43:11.605073 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36dce1ce-dd03-42be-a792-5e198c405b1b-operator-scripts\") pod \"36dce1ce-dd03-42be-a792-5e198c405b1b\" (UID: \"36dce1ce-dd03-42be-a792-5e198c405b1b\") "
Mar 09 18:43:11 crc kubenswrapper[4821]: I0309 18:43:11.605463 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36dce1ce-dd03-42be-a792-5e198c405b1b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "36dce1ce-dd03-42be-a792-5e198c405b1b" (UID: "36dce1ce-dd03-42be-a792-5e198c405b1b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 18:43:11 crc kubenswrapper[4821]: I0309 18:43:11.605488 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cd3f65f-04e8-4e03-916b-9fa01bed65f5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3cd3f65f-04e8-4e03-916b-9fa01bed65f5" (UID: "3cd3f65f-04e8-4e03-916b-9fa01bed65f5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 18:43:11 crc kubenswrapper[4821]: I0309 18:43:11.605981 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ac7235a-20f5-458c-9d93-e7221cd8b83f-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 09 18:43:11 crc kubenswrapper[4821]: I0309 18:43:11.606090 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swjbg\" (UniqueName: \"kubernetes.io/projected/5ac7235a-20f5-458c-9d93-e7221cd8b83f-kube-api-access-swjbg\") on node \"crc\" DevicePath \"\""
Mar 09 18:43:11 crc kubenswrapper[4821]: I0309 18:43:11.606176 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cd3f65f-04e8-4e03-916b-9fa01bed65f5-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 09 18:43:11 crc kubenswrapper[4821]: I0309 18:43:11.606258 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36dce1ce-dd03-42be-a792-5e198c405b1b-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 09 18:43:11 crc kubenswrapper[4821]: I0309 18:43:11.607360 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cd3f65f-04e8-4e03-916b-9fa01bed65f5-kube-api-access-fdvkd" (OuterVolumeSpecName: "kube-api-access-fdvkd") pod "3cd3f65f-04e8-4e03-916b-9fa01bed65f5" (UID: "3cd3f65f-04e8-4e03-916b-9fa01bed65f5"). InnerVolumeSpecName "kube-api-access-fdvkd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:43:11 crc kubenswrapper[4821]: I0309 18:43:11.607887 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36dce1ce-dd03-42be-a792-5e198c405b1b-kube-api-access-c2l5z" (OuterVolumeSpecName: "kube-api-access-c2l5z") pod "36dce1ce-dd03-42be-a792-5e198c405b1b" (UID: "36dce1ce-dd03-42be-a792-5e198c405b1b"). InnerVolumeSpecName "kube-api-access-c2l5z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:43:11 crc kubenswrapper[4821]: I0309 18:43:11.707798 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdvkd\" (UniqueName: \"kubernetes.io/projected/3cd3f65f-04e8-4e03-916b-9fa01bed65f5-kube-api-access-fdvkd\") on node \"crc\" DevicePath \"\""
Mar 09 18:43:11 crc kubenswrapper[4821]: I0309 18:43:11.707833 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2l5z\" (UniqueName: \"kubernetes.io/projected/36dce1ce-dd03-42be-a792-5e198c405b1b-kube-api-access-c2l5z\") on node \"crc\" DevicePath \"\""
Mar 09 18:43:12 crc kubenswrapper[4821]: I0309 18:43:12.072078 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-6d8a-account-create-update-vgzbq" event={"ID":"5ac7235a-20f5-458c-9d93-e7221cd8b83f","Type":"ContainerDied","Data":"71da46d67b0205a3e7944f14932b72936b76daf47e3e3ea3428701ad37cb0cb7"}
Mar 09 18:43:12 crc kubenswrapper[4821]: I0309 18:43:12.072111 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-6d8a-account-create-update-vgzbq"
Mar 09 18:43:12 crc kubenswrapper[4821]: I0309 18:43:12.072124 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71da46d67b0205a3e7944f14932b72936b76daf47e3e3ea3428701ad37cb0cb7"
Mar 09 18:43:12 crc kubenswrapper[4821]: I0309 18:43:12.073626 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/root-account-create-update-h56pf" event={"ID":"36dce1ce-dd03-42be-a792-5e198c405b1b","Type":"ContainerDied","Data":"8047c9c070110ac349761999a290de39b04db1ec19d5404cede8f322f5757876"}
Mar 09 18:43:12 crc kubenswrapper[4821]: I0309 18:43:12.073674 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8047c9c070110ac349761999a290de39b04db1ec19d5404cede8f322f5757876"
Mar 09 18:43:12 crc kubenswrapper[4821]: I0309 18:43:12.073749 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/root-account-create-update-h56pf"
Mar 09 18:43:12 crc kubenswrapper[4821]: I0309 18:43:12.075010 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-create-vtbsg" event={"ID":"3cd3f65f-04e8-4e03-916b-9fa01bed65f5","Type":"ContainerDied","Data":"323e720b99a0ed4537df56c1c00578f0d5e607501317e59695d6cd0f938bb402"}
Mar 09 18:43:12 crc kubenswrapper[4821]: I0309 18:43:12.075035 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="323e720b99a0ed4537df56c1c00578f0d5e607501317e59695d6cd0f938bb402"
Mar 09 18:43:12 crc kubenswrapper[4821]: I0309 18:43:12.075064 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-vtbsg"
Mar 09 18:43:14 crc kubenswrapper[4821]: I0309 18:43:14.817979 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/prometheus-metric-storage-0"
Mar 09 18:43:14 crc kubenswrapper[4821]: I0309 18:43:14.818484 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/prometheus-metric-storage-0"
Mar 09 18:43:14 crc kubenswrapper[4821]: I0309 18:43:14.822171 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/prometheus-metric-storage-0"
Mar 09 18:43:15 crc kubenswrapper[4821]: I0309 18:43:15.099167 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/prometheus-metric-storage-0"
Mar 09 18:43:17 crc kubenswrapper[4821]: I0309 18:43:17.418153 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"]
Mar 09 18:43:17 crc kubenswrapper[4821]: I0309 18:43:17.418513 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="cdbbf791-a981-4585-a944-863d0e1cc847" containerName="prometheus" containerID="cri-o://7eb24db081b703d9c289b75b1749c9b9d0e92c4171eb6f6aee81d7d2b9cd32aa" gracePeriod=600
Mar 09 18:43:17 crc kubenswrapper[4821]: I0309 18:43:17.418629 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="cdbbf791-a981-4585-a944-863d0e1cc847" containerName="config-reloader" containerID="cri-o://9b80a265cda94da3dd0b207ba9770ed04a1cee94b40887deb99e7551a06f983f" gracePeriod=600
Mar 09 18:43:17 crc kubenswrapper[4821]: I0309 18:43:17.418643 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/prometheus-metric-storage-0"
podUID="cdbbf791-a981-4585-a944-863d0e1cc847" containerName="thanos-sidecar" containerID="cri-o://84f1c6cb337c6a947e82328fedd8b4d7ff224678d6d42cf3ce52ec54db167910" gracePeriod=600 Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.129247 4821 generic.go:334] "Generic (PLEG): container finished" podID="b4cf48ce-38c9-4dd4-b712-311a92dd29b6" containerID="4552822487d03a0b1d6b871c99a056521b93a3edd1df345423b973c8de8907fd" exitCode=0 Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.129383 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"b4cf48ce-38c9-4dd4-b712-311a92dd29b6","Type":"ContainerDied","Data":"4552822487d03a0b1d6b871c99a056521b93a3edd1df345423b973c8de8907fd"} Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.141229 4821 generic.go:334] "Generic (PLEG): container finished" podID="cdbbf791-a981-4585-a944-863d0e1cc847" containerID="84f1c6cb337c6a947e82328fedd8b4d7ff224678d6d42cf3ce52ec54db167910" exitCode=0 Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.141286 4821 generic.go:334] "Generic (PLEG): container finished" podID="cdbbf791-a981-4585-a944-863d0e1cc847" containerID="9b80a265cda94da3dd0b207ba9770ed04a1cee94b40887deb99e7551a06f983f" exitCode=0 Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.141303 4821 generic.go:334] "Generic (PLEG): container finished" podID="cdbbf791-a981-4585-a944-863d0e1cc847" containerID="7eb24db081b703d9c289b75b1749c9b9d0e92c4171eb6f6aee81d7d2b9cd32aa" exitCode=0 Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.141343 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"cdbbf791-a981-4585-a944-863d0e1cc847","Type":"ContainerDied","Data":"84f1c6cb337c6a947e82328fedd8b4d7ff224678d6d42cf3ce52ec54db167910"} Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.141412 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"cdbbf791-a981-4585-a944-863d0e1cc847","Type":"ContainerDied","Data":"9b80a265cda94da3dd0b207ba9770ed04a1cee94b40887deb99e7551a06f983f"} Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.141433 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"cdbbf791-a981-4585-a944-863d0e1cc847","Type":"ContainerDied","Data":"7eb24db081b703d9c289b75b1749c9b9d0e92c4171eb6f6aee81d7d2b9cd32aa"} Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.386416 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.521066 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cdbbf791-a981-4585-a944-863d0e1cc847-config\") pod \"cdbbf791-a981-4585-a944-863d0e1cc847\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.521131 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t89pz\" (UniqueName: \"kubernetes.io/projected/cdbbf791-a981-4585-a944-863d0e1cc847-kube-api-access-t89pz\") pod \"cdbbf791-a981-4585-a944-863d0e1cc847\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.521196 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/cdbbf791-a981-4585-a944-863d0e1cc847-prometheus-metric-storage-rulefiles-2\") pod \"cdbbf791-a981-4585-a944-863d0e1cc847\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.521226 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" 
(UniqueName: \"kubernetes.io/projected/cdbbf791-a981-4585-a944-863d0e1cc847-tls-assets\") pod \"cdbbf791-a981-4585-a944-863d0e1cc847\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.521271 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/cdbbf791-a981-4585-a944-863d0e1cc847-prometheus-metric-storage-rulefiles-1\") pod \"cdbbf791-a981-4585-a944-863d0e1cc847\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.521304 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cdbbf791-a981-4585-a944-863d0e1cc847-thanos-prometheus-http-client-file\") pod \"cdbbf791-a981-4585-a944-863d0e1cc847\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.521409 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f121f351-6ece-43c2-9d0a-9d86eccbd4c1\") pod \"cdbbf791-a981-4585-a944-863d0e1cc847\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.521434 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cdbbf791-a981-4585-a944-863d0e1cc847-prometheus-metric-storage-rulefiles-0\") pod \"cdbbf791-a981-4585-a944-863d0e1cc847\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.521462 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/cdbbf791-a981-4585-a944-863d0e1cc847-config-out\") pod \"cdbbf791-a981-4585-a944-863d0e1cc847\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.521503 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cdbbf791-a981-4585-a944-863d0e1cc847-web-config\") pod \"cdbbf791-a981-4585-a944-863d0e1cc847\" (UID: \"cdbbf791-a981-4585-a944-863d0e1cc847\") " Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.522394 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdbbf791-a981-4585-a944-863d0e1cc847-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "cdbbf791-a981-4585-a944-863d0e1cc847" (UID: "cdbbf791-a981-4585-a944-863d0e1cc847"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.522579 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdbbf791-a981-4585-a944-863d0e1cc847-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "cdbbf791-a981-4585-a944-863d0e1cc847" (UID: "cdbbf791-a981-4585-a944-863d0e1cc847"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.522621 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdbbf791-a981-4585-a944-863d0e1cc847-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "cdbbf791-a981-4585-a944-863d0e1cc847" (UID: "cdbbf791-a981-4585-a944-863d0e1cc847"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.526117 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdbbf791-a981-4585-a944-863d0e1cc847-config" (OuterVolumeSpecName: "config") pod "cdbbf791-a981-4585-a944-863d0e1cc847" (UID: "cdbbf791-a981-4585-a944-863d0e1cc847"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.526674 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdbbf791-a981-4585-a944-863d0e1cc847-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "cdbbf791-a981-4585-a944-863d0e1cc847" (UID: "cdbbf791-a981-4585-a944-863d0e1cc847"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.527776 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdbbf791-a981-4585-a944-863d0e1cc847-kube-api-access-t89pz" (OuterVolumeSpecName: "kube-api-access-t89pz") pod "cdbbf791-a981-4585-a944-863d0e1cc847" (UID: "cdbbf791-a981-4585-a944-863d0e1cc847"). InnerVolumeSpecName "kube-api-access-t89pz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.528251 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdbbf791-a981-4585-a944-863d0e1cc847-config-out" (OuterVolumeSpecName: "config-out") pod "cdbbf791-a981-4585-a944-863d0e1cc847" (UID: "cdbbf791-a981-4585-a944-863d0e1cc847"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.531110 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdbbf791-a981-4585-a944-863d0e1cc847-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "cdbbf791-a981-4585-a944-863d0e1cc847" (UID: "cdbbf791-a981-4585-a944-863d0e1cc847"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.547990 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f121f351-6ece-43c2-9d0a-9d86eccbd4c1" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "cdbbf791-a981-4585-a944-863d0e1cc847" (UID: "cdbbf791-a981-4585-a944-863d0e1cc847"). InnerVolumeSpecName "pvc-f121f351-6ece-43c2-9d0a-9d86eccbd4c1". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.553445 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdbbf791-a981-4585-a944-863d0e1cc847-web-config" (OuterVolumeSpecName: "web-config") pod "cdbbf791-a981-4585-a944-863d0e1cc847" (UID: "cdbbf791-a981-4585-a944-863d0e1cc847"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.623214 4821 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cdbbf791-a981-4585-a944-863d0e1cc847-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.623275 4821 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f121f351-6ece-43c2-9d0a-9d86eccbd4c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f121f351-6ece-43c2-9d0a-9d86eccbd4c1\") on node \"crc\" " Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.623290 4821 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cdbbf791-a981-4585-a944-863d0e1cc847-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.623300 4821 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cdbbf791-a981-4585-a944-863d0e1cc847-config-out\") on node \"crc\" DevicePath \"\"" Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.623313 4821 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cdbbf791-a981-4585-a944-863d0e1cc847-web-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.623338 4821 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/cdbbf791-a981-4585-a944-863d0e1cc847-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.623349 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t89pz\" (UniqueName: 
\"kubernetes.io/projected/cdbbf791-a981-4585-a944-863d0e1cc847-kube-api-access-t89pz\") on node \"crc\" DevicePath \"\"" Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.623359 4821 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/cdbbf791-a981-4585-a944-863d0e1cc847-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.623368 4821 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cdbbf791-a981-4585-a944-863d0e1cc847-tls-assets\") on node \"crc\" DevicePath \"\"" Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.623380 4821 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/cdbbf791-a981-4585-a944-863d0e1cc847-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.639221 4821 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.639425 4821 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f121f351-6ece-43c2-9d0a-9d86eccbd4c1" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f121f351-6ece-43c2-9d0a-9d86eccbd4c1") on node "crc" Mar 09 18:43:18 crc kubenswrapper[4821]: I0309 18:43:18.724503 4821 reconciler_common.go:293] "Volume detached for volume \"pvc-f121f351-6ece-43c2-9d0a-9d86eccbd4c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f121f351-6ece-43c2-9d0a-9d86eccbd4c1\") on node \"crc\" DevicePath \"\"" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.151394 4821 generic.go:334] "Generic (PLEG): container finished" podID="ace06b27-8092-4676-9bae-4df7c1044b98" containerID="434f5b41fdd23c8e5398b4cd3acd47acb695c64125051932930cf77df2648f1a" exitCode=0 Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.151434 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"ace06b27-8092-4676-9bae-4df7c1044b98","Type":"ContainerDied","Data":"434f5b41fdd23c8e5398b4cd3acd47acb695c64125051932930cf77df2648f1a"} Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.154168 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"b4cf48ce-38c9-4dd4-b712-311a92dd29b6","Type":"ContainerStarted","Data":"68cd51377cb6c3882e686292d5c0c5f6b1c61dd3e999c5c5b9e32a16bb78d3aa"} Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.154591 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.160863 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" 
event={"ID":"cdbbf791-a981-4585-a944-863d0e1cc847","Type":"ContainerDied","Data":"37024c368e30bc7b681e207c07c86b14cf67c05e2d442d370250d16c3d271046"} Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.160944 4821 scope.go:117] "RemoveContainer" containerID="84f1c6cb337c6a947e82328fedd8b4d7ff224678d6d42cf3ce52ec54db167910" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.160953 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.212195 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" podStartSLOduration=38.587416523 podStartE2EDuration="55.21216887s" podCreationTimestamp="2026-03-09 18:42:24 +0000 UTC" firstStartedPulling="2026-03-09 18:42:26.569752461 +0000 UTC m=+1083.731128317" lastFinishedPulling="2026-03-09 18:42:43.194504808 +0000 UTC m=+1100.355880664" observedRunningTime="2026-03-09 18:43:19.208014307 +0000 UTC m=+1136.369390163" watchObservedRunningTime="2026-03-09 18:43:19.21216887 +0000 UTC m=+1136.373544746" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.269468 4821 scope.go:117] "RemoveContainer" containerID="9b80a265cda94da3dd0b207ba9770ed04a1cee94b40887deb99e7551a06f983f" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.287904 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.296723 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.311189 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Mar 09 18:43:19 crc kubenswrapper[4821]: E0309 18:43:19.311664 4821 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="36dce1ce-dd03-42be-a792-5e198c405b1b" containerName="mariadb-account-create-update" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.311681 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="36dce1ce-dd03-42be-a792-5e198c405b1b" containerName="mariadb-account-create-update" Mar 09 18:43:19 crc kubenswrapper[4821]: E0309 18:43:19.311708 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a749c63c-1f04-4955-9a98-fabbf677badc" containerName="console" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.311733 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="a749c63c-1f04-4955-9a98-fabbf677badc" containerName="console" Mar 09 18:43:19 crc kubenswrapper[4821]: E0309 18:43:19.311757 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdbbf791-a981-4585-a944-863d0e1cc847" containerName="init-config-reloader" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.311763 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdbbf791-a981-4585-a944-863d0e1cc847" containerName="init-config-reloader" Mar 09 18:43:19 crc kubenswrapper[4821]: E0309 18:43:19.311778 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ac7235a-20f5-458c-9d93-e7221cd8b83f" containerName="mariadb-account-create-update" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.311785 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ac7235a-20f5-458c-9d93-e7221cd8b83f" containerName="mariadb-account-create-update" Mar 09 18:43:19 crc kubenswrapper[4821]: E0309 18:43:19.311819 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdbbf791-a981-4585-a944-863d0e1cc847" containerName="prometheus" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.311828 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdbbf791-a981-4585-a944-863d0e1cc847" containerName="prometheus" Mar 09 18:43:19 crc kubenswrapper[4821]: E0309 18:43:19.311841 4821 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="cdbbf791-a981-4585-a944-863d0e1cc847" containerName="config-reloader" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.311847 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdbbf791-a981-4585-a944-863d0e1cc847" containerName="config-reloader" Mar 09 18:43:19 crc kubenswrapper[4821]: E0309 18:43:19.311860 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdbbf791-a981-4585-a944-863d0e1cc847" containerName="thanos-sidecar" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.311866 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdbbf791-a981-4585-a944-863d0e1cc847" containerName="thanos-sidecar" Mar 09 18:43:19 crc kubenswrapper[4821]: E0309 18:43:19.311895 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cd3f65f-04e8-4e03-916b-9fa01bed65f5" containerName="mariadb-database-create" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.311901 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cd3f65f-04e8-4e03-916b-9fa01bed65f5" containerName="mariadb-database-create" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.312075 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdbbf791-a981-4585-a944-863d0e1cc847" containerName="thanos-sidecar" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.312084 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="a749c63c-1f04-4955-9a98-fabbf677badc" containerName="console" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.312093 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ac7235a-20f5-458c-9d93-e7221cd8b83f" containerName="mariadb-account-create-update" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.312126 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdbbf791-a981-4585-a944-863d0e1cc847" containerName="prometheus" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.312137 4821 
memory_manager.go:354] "RemoveStaleState removing state" podUID="3cd3f65f-04e8-4e03-916b-9fa01bed65f5" containerName="mariadb-database-create" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.312144 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="36dce1ce-dd03-42be-a792-5e198c405b1b" containerName="mariadb-account-create-update" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.312156 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdbbf791-a981-4585-a944-863d0e1cc847" containerName="config-reloader" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.313994 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.315817 4821 scope.go:117] "RemoveContainer" containerID="7eb24db081b703d9c289b75b1749c9b9d0e92c4171eb6f6aee81d7d2b9cd32aa" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.317446 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-1" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.319154 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-metric-storage-prometheus-svc" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.319286 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.319427 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-2" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.319595 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-web-config" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.319678 4821 reflector.go:368] 
Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.319926 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.320129 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"metric-storage-prometheus-dockercfg-6hjhr" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.334113 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-tls-assets-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.337003 4821 scope.go:117] "RemoveContainer" containerID="5ea763e93eadf760795016d19da9b0593c6a7cbe992c6c7bf8dd2269b94a11a0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.353276 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.436309 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.436453 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 
crc kubenswrapper[4821]: I0309 18:43:19.436714 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.436793 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.436913 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.436967 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f121f351-6ece-43c2-9d0a-9d86eccbd4c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f121f351-6ece-43c2-9d0a-9d86eccbd4c1\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.437010 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.437123 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-config\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.437160 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z84lb\" (UniqueName: \"kubernetes.io/projected/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-kube-api-access-z84lb\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.437351 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.437388 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " 
pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.437415 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.437438 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.538381 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f121f351-6ece-43c2-9d0a-9d86eccbd4c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f121f351-6ece-43c2-9d0a-9d86eccbd4c1\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.538437 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.538469 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-config\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.538491 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z84lb\" (UniqueName: \"kubernetes.io/projected/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-kube-api-access-z84lb\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.538565 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.538580 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.538596 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.538612 4821 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.538640 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.538661 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.538682 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.538704 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-prometheus-metric-storage-rulefiles-2\") pod 
\"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.538740 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.539996 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.540203 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.540946 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.545052 4821 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.546106 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-config\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.546174 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.549790 4821 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.549819 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.549840 4821 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f121f351-6ece-43c2-9d0a-9d86eccbd4c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f121f351-6ece-43c2-9d0a-9d86eccbd4c1\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9704a7b49b6380f59d1f734f97de4161168b5e073a0a5af270f11b899d130ccd/globalmount\"" pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.550357 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.553742 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.553898 4821 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.555001 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.562883 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z84lb\" (UniqueName: \"kubernetes.io/projected/cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f-kube-api-access-z84lb\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.573623 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdbbf791-a981-4585-a944-863d0e1cc847" path="/var/lib/kubelet/pods/cdbbf791-a981-4585-a944-863d0e1cc847/volumes" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.595605 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f121f351-6ece-43c2-9d0a-9d86eccbd4c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f121f351-6ece-43c2-9d0a-9d86eccbd4c1\") pod \"prometheus-metric-storage-0\" (UID: \"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:19 crc kubenswrapper[4821]: I0309 18:43:19.634030 4821 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:20 crc kubenswrapper[4821]: I0309 18:43:20.179096 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"ace06b27-8092-4676-9bae-4df7c1044b98","Type":"ContainerStarted","Data":"336ed90c96a05322551cea4065aee23b807725566da2ac1222f133ceb80b51b1"} Mar 09 18:43:20 crc kubenswrapper[4821]: I0309 18:43:20.180240 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/rabbitmq-server-0" Mar 09 18:43:20 crc kubenswrapper[4821]: I0309 18:43:20.195721 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Mar 09 18:43:20 crc kubenswrapper[4821]: I0309 18:43:20.242434 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/rabbitmq-server-0" podStartSLOduration=39.49788697 podStartE2EDuration="56.242417625s" podCreationTimestamp="2026-03-09 18:42:24 +0000 UTC" firstStartedPulling="2026-03-09 18:42:27.111028644 +0000 UTC m=+1084.272404520" lastFinishedPulling="2026-03-09 18:42:43.855559329 +0000 UTC m=+1101.016935175" observedRunningTime="2026-03-09 18:43:20.23341578 +0000 UTC m=+1137.394791636" watchObservedRunningTime="2026-03-09 18:43:20.242417625 +0000 UTC m=+1137.403793481" Mar 09 18:43:21 crc kubenswrapper[4821]: I0309 18:43:21.189335 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f","Type":"ContainerStarted","Data":"ff278bb3c7182f0e90e210788212fe4e3c01078a7050f79a450a219848c75131"} Mar 09 18:43:23 crc kubenswrapper[4821]: I0309 18:43:23.203978 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" 
event={"ID":"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f","Type":"ContainerStarted","Data":"9fbec44d092a158f4a06fd4a21e0f7c801b83e289ed3c6e13346ca62c70ea7e9"} Mar 09 18:43:29 crc kubenswrapper[4821]: I0309 18:43:29.649513 4821 generic.go:334] "Generic (PLEG): container finished" podID="cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f" containerID="9fbec44d092a158f4a06fd4a21e0f7c801b83e289ed3c6e13346ca62c70ea7e9" exitCode=0 Mar 09 18:43:29 crc kubenswrapper[4821]: I0309 18:43:29.649593 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f","Type":"ContainerDied","Data":"9fbec44d092a158f4a06fd4a21e0f7c801b83e289ed3c6e13346ca62c70ea7e9"} Mar 09 18:43:29 crc kubenswrapper[4821]: I0309 18:43:29.913678 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 09 18:43:29 crc kubenswrapper[4821]: I0309 18:43:29.913995 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 09 18:43:30 crc kubenswrapper[4821]: I0309 18:43:30.658469 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f","Type":"ContainerStarted","Data":"65792699229adfd2059f88335ff19701d861da342649ba0f9a97b458b8224b0e"} Mar 09 18:43:32 crc kubenswrapper[4821]: I0309 18:43:32.674652 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" 
event={"ID":"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f","Type":"ContainerStarted","Data":"ed02fbff642af04dc1d8973f9235b84d30a237b91f060cc50fcda1afca5156ee"} Mar 09 18:43:33 crc kubenswrapper[4821]: I0309 18:43:33.685718 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f","Type":"ContainerStarted","Data":"42bb1218139f42274b3fbac2f9638bacd9f91e3287668a44a92e35607c14eba5"} Mar 09 18:43:34 crc kubenswrapper[4821]: I0309 18:43:34.634192 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:34 crc kubenswrapper[4821]: I0309 18:43:34.634267 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:34 crc kubenswrapper[4821]: I0309 18:43:34.641907 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:34 crc kubenswrapper[4821]: I0309 18:43:34.670704 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/prometheus-metric-storage-0" podStartSLOduration=15.670685655 podStartE2EDuration="15.670685655s" podCreationTimestamp="2026-03-09 18:43:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:43:33.723927414 +0000 UTC m=+1150.885303350" watchObservedRunningTime="2026-03-09 18:43:34.670685655 +0000 UTC m=+1151.832061521" Mar 09 18:43:34 crc kubenswrapper[4821]: I0309 18:43:34.697205 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/prometheus-metric-storage-0" Mar 09 18:43:36 crc kubenswrapper[4821]: I0309 18:43:36.080518 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Mar 09 18:43:36 crc kubenswrapper[4821]: I0309 18:43:36.672535 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/rabbitmq-server-0" Mar 09 18:43:38 crc kubenswrapper[4821]: I0309 18:43:38.066652 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-db-sync-zxc2d"] Mar 09 18:43:38 crc kubenswrapper[4821]: I0309 18:43:38.067951 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-zxc2d" Mar 09 18:43:38 crc kubenswrapper[4821]: I0309 18:43:38.070468 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-rlbmp" Mar 09 18:43:38 crc kubenswrapper[4821]: I0309 18:43:38.070569 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Mar 09 18:43:38 crc kubenswrapper[4821]: I0309 18:43:38.072638 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Mar 09 18:43:38 crc kubenswrapper[4821]: I0309 18:43:38.075635 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Mar 09 18:43:38 crc kubenswrapper[4821]: I0309 18:43:38.081143 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-zxc2d"] Mar 09 18:43:38 crc kubenswrapper[4821]: I0309 18:43:38.122357 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4e8000e-e15d-4b86-8a92-9c35d297c60b-combined-ca-bundle\") pod \"keystone-db-sync-zxc2d\" (UID: \"a4e8000e-e15d-4b86-8a92-9c35d297c60b\") " pod="watcher-kuttl-default/keystone-db-sync-zxc2d" Mar 09 18:43:38 crc kubenswrapper[4821]: I0309 18:43:38.122444 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4e8000e-e15d-4b86-8a92-9c35d297c60b-config-data\") pod \"keystone-db-sync-zxc2d\" (UID: \"a4e8000e-e15d-4b86-8a92-9c35d297c60b\") " pod="watcher-kuttl-default/keystone-db-sync-zxc2d" Mar 09 18:43:38 crc kubenswrapper[4821]: I0309 18:43:38.122558 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlmrw\" (UniqueName: \"kubernetes.io/projected/a4e8000e-e15d-4b86-8a92-9c35d297c60b-kube-api-access-wlmrw\") pod \"keystone-db-sync-zxc2d\" (UID: \"a4e8000e-e15d-4b86-8a92-9c35d297c60b\") " pod="watcher-kuttl-default/keystone-db-sync-zxc2d" Mar 09 18:43:38 crc kubenswrapper[4821]: I0309 18:43:38.224102 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4e8000e-e15d-4b86-8a92-9c35d297c60b-config-data\") pod \"keystone-db-sync-zxc2d\" (UID: \"a4e8000e-e15d-4b86-8a92-9c35d297c60b\") " pod="watcher-kuttl-default/keystone-db-sync-zxc2d" Mar 09 18:43:38 crc kubenswrapper[4821]: I0309 18:43:38.224208 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlmrw\" (UniqueName: \"kubernetes.io/projected/a4e8000e-e15d-4b86-8a92-9c35d297c60b-kube-api-access-wlmrw\") pod \"keystone-db-sync-zxc2d\" (UID: \"a4e8000e-e15d-4b86-8a92-9c35d297c60b\") " pod="watcher-kuttl-default/keystone-db-sync-zxc2d" Mar 09 18:43:38 crc kubenswrapper[4821]: I0309 18:43:38.224256 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4e8000e-e15d-4b86-8a92-9c35d297c60b-combined-ca-bundle\") pod \"keystone-db-sync-zxc2d\" (UID: \"a4e8000e-e15d-4b86-8a92-9c35d297c60b\") " pod="watcher-kuttl-default/keystone-db-sync-zxc2d" Mar 09 18:43:38 crc kubenswrapper[4821]: I0309 18:43:38.230198 4821 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4e8000e-e15d-4b86-8a92-9c35d297c60b-combined-ca-bundle\") pod \"keystone-db-sync-zxc2d\" (UID: \"a4e8000e-e15d-4b86-8a92-9c35d297c60b\") " pod="watcher-kuttl-default/keystone-db-sync-zxc2d" Mar 09 18:43:38 crc kubenswrapper[4821]: I0309 18:43:38.230342 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4e8000e-e15d-4b86-8a92-9c35d297c60b-config-data\") pod \"keystone-db-sync-zxc2d\" (UID: \"a4e8000e-e15d-4b86-8a92-9c35d297c60b\") " pod="watcher-kuttl-default/keystone-db-sync-zxc2d" Mar 09 18:43:38 crc kubenswrapper[4821]: I0309 18:43:38.238694 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlmrw\" (UniqueName: \"kubernetes.io/projected/a4e8000e-e15d-4b86-8a92-9c35d297c60b-kube-api-access-wlmrw\") pod \"keystone-db-sync-zxc2d\" (UID: \"a4e8000e-e15d-4b86-8a92-9c35d297c60b\") " pod="watcher-kuttl-default/keystone-db-sync-zxc2d" Mar 09 18:43:38 crc kubenswrapper[4821]: I0309 18:43:38.383012 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-zxc2d" Mar 09 18:43:38 crc kubenswrapper[4821]: I0309 18:43:38.854124 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-zxc2d"] Mar 09 18:43:39 crc kubenswrapper[4821]: I0309 18:43:39.738270 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-zxc2d" event={"ID":"a4e8000e-e15d-4b86-8a92-9c35d297c60b","Type":"ContainerStarted","Data":"64dc0fe3993c6859df4d86eb303149b26847065cebf23549fe9acc1dc3d77cf0"} Mar 09 18:43:46 crc kubenswrapper[4821]: I0309 18:43:46.794291 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-zxc2d" event={"ID":"a4e8000e-e15d-4b86-8a92-9c35d297c60b","Type":"ContainerStarted","Data":"50dc4a4d31a7953caf05ae63a28d63d96c3cd8fc307165ba7aa0e31fe872643a"} Mar 09 18:43:46 crc kubenswrapper[4821]: I0309 18:43:46.816505 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-db-sync-zxc2d" podStartSLOduration=1.562609876 podStartE2EDuration="8.816488514s" podCreationTimestamp="2026-03-09 18:43:38 +0000 UTC" firstStartedPulling="2026-03-09 18:43:38.867928173 +0000 UTC m=+1156.029304039" lastFinishedPulling="2026-03-09 18:43:46.121806821 +0000 UTC m=+1163.283182677" observedRunningTime="2026-03-09 18:43:46.816047813 +0000 UTC m=+1163.977423689" watchObservedRunningTime="2026-03-09 18:43:46.816488514 +0000 UTC m=+1163.977864370" Mar 09 18:43:49 crc kubenswrapper[4821]: I0309 18:43:49.819410 4821 generic.go:334] "Generic (PLEG): container finished" podID="a4e8000e-e15d-4b86-8a92-9c35d297c60b" containerID="50dc4a4d31a7953caf05ae63a28d63d96c3cd8fc307165ba7aa0e31fe872643a" exitCode=0 Mar 09 18:43:49 crc kubenswrapper[4821]: I0309 18:43:49.819445 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-zxc2d" 
event={"ID":"a4e8000e-e15d-4b86-8a92-9c35d297c60b","Type":"ContainerDied","Data":"50dc4a4d31a7953caf05ae63a28d63d96c3cd8fc307165ba7aa0e31fe872643a"} Mar 09 18:43:51 crc kubenswrapper[4821]: I0309 18:43:51.297538 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-zxc2d" Mar 09 18:43:51 crc kubenswrapper[4821]: I0309 18:43:51.343596 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4e8000e-e15d-4b86-8a92-9c35d297c60b-config-data\") pod \"a4e8000e-e15d-4b86-8a92-9c35d297c60b\" (UID: \"a4e8000e-e15d-4b86-8a92-9c35d297c60b\") " Mar 09 18:43:51 crc kubenswrapper[4821]: I0309 18:43:51.343646 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlmrw\" (UniqueName: \"kubernetes.io/projected/a4e8000e-e15d-4b86-8a92-9c35d297c60b-kube-api-access-wlmrw\") pod \"a4e8000e-e15d-4b86-8a92-9c35d297c60b\" (UID: \"a4e8000e-e15d-4b86-8a92-9c35d297c60b\") " Mar 09 18:43:51 crc kubenswrapper[4821]: I0309 18:43:51.343690 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4e8000e-e15d-4b86-8a92-9c35d297c60b-combined-ca-bundle\") pod \"a4e8000e-e15d-4b86-8a92-9c35d297c60b\" (UID: \"a4e8000e-e15d-4b86-8a92-9c35d297c60b\") " Mar 09 18:43:51 crc kubenswrapper[4821]: I0309 18:43:51.357553 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4e8000e-e15d-4b86-8a92-9c35d297c60b-kube-api-access-wlmrw" (OuterVolumeSpecName: "kube-api-access-wlmrw") pod "a4e8000e-e15d-4b86-8a92-9c35d297c60b" (UID: "a4e8000e-e15d-4b86-8a92-9c35d297c60b"). InnerVolumeSpecName "kube-api-access-wlmrw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:43:51 crc kubenswrapper[4821]: I0309 18:43:51.379035 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4e8000e-e15d-4b86-8a92-9c35d297c60b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4e8000e-e15d-4b86-8a92-9c35d297c60b" (UID: "a4e8000e-e15d-4b86-8a92-9c35d297c60b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:43:51 crc kubenswrapper[4821]: I0309 18:43:51.393261 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4e8000e-e15d-4b86-8a92-9c35d297c60b-config-data" (OuterVolumeSpecName: "config-data") pod "a4e8000e-e15d-4b86-8a92-9c35d297c60b" (UID: "a4e8000e-e15d-4b86-8a92-9c35d297c60b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:43:51 crc kubenswrapper[4821]: I0309 18:43:51.444433 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4e8000e-e15d-4b86-8a92-9c35d297c60b-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 18:43:51 crc kubenswrapper[4821]: I0309 18:43:51.444460 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlmrw\" (UniqueName: \"kubernetes.io/projected/a4e8000e-e15d-4b86-8a92-9c35d297c60b-kube-api-access-wlmrw\") on node \"crc\" DevicePath \"\"" Mar 09 18:43:51 crc kubenswrapper[4821]: I0309 18:43:51.444473 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4e8000e-e15d-4b86-8a92-9c35d297c60b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 18:43:51 crc kubenswrapper[4821]: I0309 18:43:51.839955 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-zxc2d" 
event={"ID":"a4e8000e-e15d-4b86-8a92-9c35d297c60b","Type":"ContainerDied","Data":"64dc0fe3993c6859df4d86eb303149b26847065cebf23549fe9acc1dc3d77cf0"} Mar 09 18:43:51 crc kubenswrapper[4821]: I0309 18:43:51.840253 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64dc0fe3993c6859df4d86eb303149b26847065cebf23549fe9acc1dc3d77cf0" Mar 09 18:43:51 crc kubenswrapper[4821]: I0309 18:43:51.840399 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-zxc2d" Mar 09 18:43:51 crc kubenswrapper[4821]: I0309 18:43:51.999532 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-9mtck"] Mar 09 18:43:51 crc kubenswrapper[4821]: E0309 18:43:51.999958 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4e8000e-e15d-4b86-8a92-9c35d297c60b" containerName="keystone-db-sync" Mar 09 18:43:51 crc kubenswrapper[4821]: I0309 18:43:51.999980 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4e8000e-e15d-4b86-8a92-9c35d297c60b" containerName="keystone-db-sync" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.000178 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4e8000e-e15d-4b86-8a92-9c35d297c60b" containerName="keystone-db-sync" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.002177 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-9mtck" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.005176 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.005206 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"osp-secret" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.005464 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.005497 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.005535 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-rlbmp" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.011522 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-9mtck"] Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.062984 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a17d1cc-a29b-464e-a85f-fa5469a7a683-config-data\") pod \"keystone-bootstrap-9mtck\" (UID: \"9a17d1cc-a29b-464e-a85f-fa5469a7a683\") " pod="watcher-kuttl-default/keystone-bootstrap-9mtck" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.063054 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84vtr\" (UniqueName: \"kubernetes.io/projected/9a17d1cc-a29b-464e-a85f-fa5469a7a683-kube-api-access-84vtr\") pod \"keystone-bootstrap-9mtck\" (UID: \"9a17d1cc-a29b-464e-a85f-fa5469a7a683\") " pod="watcher-kuttl-default/keystone-bootstrap-9mtck" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 
18:43:52.063094 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a17d1cc-a29b-464e-a85f-fa5469a7a683-combined-ca-bundle\") pod \"keystone-bootstrap-9mtck\" (UID: \"9a17d1cc-a29b-464e-a85f-fa5469a7a683\") " pod="watcher-kuttl-default/keystone-bootstrap-9mtck" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.063154 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a17d1cc-a29b-464e-a85f-fa5469a7a683-scripts\") pod \"keystone-bootstrap-9mtck\" (UID: \"9a17d1cc-a29b-464e-a85f-fa5469a7a683\") " pod="watcher-kuttl-default/keystone-bootstrap-9mtck" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.063193 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9a17d1cc-a29b-464e-a85f-fa5469a7a683-credential-keys\") pod \"keystone-bootstrap-9mtck\" (UID: \"9a17d1cc-a29b-464e-a85f-fa5469a7a683\") " pod="watcher-kuttl-default/keystone-bootstrap-9mtck" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.063218 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9a17d1cc-a29b-464e-a85f-fa5469a7a683-fernet-keys\") pod \"keystone-bootstrap-9mtck\" (UID: \"9a17d1cc-a29b-464e-a85f-fa5469a7a683\") " pod="watcher-kuttl-default/keystone-bootstrap-9mtck" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.164516 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a17d1cc-a29b-464e-a85f-fa5469a7a683-combined-ca-bundle\") pod \"keystone-bootstrap-9mtck\" (UID: \"9a17d1cc-a29b-464e-a85f-fa5469a7a683\") " pod="watcher-kuttl-default/keystone-bootstrap-9mtck" Mar 09 18:43:52 crc 
kubenswrapper[4821]: I0309 18:43:52.164893 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a17d1cc-a29b-464e-a85f-fa5469a7a683-scripts\") pod \"keystone-bootstrap-9mtck\" (UID: \"9a17d1cc-a29b-464e-a85f-fa5469a7a683\") " pod="watcher-kuttl-default/keystone-bootstrap-9mtck" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.165010 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9a17d1cc-a29b-464e-a85f-fa5469a7a683-credential-keys\") pod \"keystone-bootstrap-9mtck\" (UID: \"9a17d1cc-a29b-464e-a85f-fa5469a7a683\") " pod="watcher-kuttl-default/keystone-bootstrap-9mtck" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.165108 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9a17d1cc-a29b-464e-a85f-fa5469a7a683-fernet-keys\") pod \"keystone-bootstrap-9mtck\" (UID: \"9a17d1cc-a29b-464e-a85f-fa5469a7a683\") " pod="watcher-kuttl-default/keystone-bootstrap-9mtck" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.165272 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a17d1cc-a29b-464e-a85f-fa5469a7a683-config-data\") pod \"keystone-bootstrap-9mtck\" (UID: \"9a17d1cc-a29b-464e-a85f-fa5469a7a683\") " pod="watcher-kuttl-default/keystone-bootstrap-9mtck" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.165413 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84vtr\" (UniqueName: \"kubernetes.io/projected/9a17d1cc-a29b-464e-a85f-fa5469a7a683-kube-api-access-84vtr\") pod \"keystone-bootstrap-9mtck\" (UID: \"9a17d1cc-a29b-464e-a85f-fa5469a7a683\") " pod="watcher-kuttl-default/keystone-bootstrap-9mtck" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.171156 4821 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a17d1cc-a29b-464e-a85f-fa5469a7a683-config-data\") pod \"keystone-bootstrap-9mtck\" (UID: \"9a17d1cc-a29b-464e-a85f-fa5469a7a683\") " pod="watcher-kuttl-default/keystone-bootstrap-9mtck" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.172410 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a17d1cc-a29b-464e-a85f-fa5469a7a683-scripts\") pod \"keystone-bootstrap-9mtck\" (UID: \"9a17d1cc-a29b-464e-a85f-fa5469a7a683\") " pod="watcher-kuttl-default/keystone-bootstrap-9mtck" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.173005 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a17d1cc-a29b-464e-a85f-fa5469a7a683-combined-ca-bundle\") pod \"keystone-bootstrap-9mtck\" (UID: \"9a17d1cc-a29b-464e-a85f-fa5469a7a683\") " pod="watcher-kuttl-default/keystone-bootstrap-9mtck" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.173427 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9a17d1cc-a29b-464e-a85f-fa5469a7a683-credential-keys\") pod \"keystone-bootstrap-9mtck\" (UID: \"9a17d1cc-a29b-464e-a85f-fa5469a7a683\") " pod="watcher-kuttl-default/keystone-bootstrap-9mtck" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.180874 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9a17d1cc-a29b-464e-a85f-fa5469a7a683-fernet-keys\") pod \"keystone-bootstrap-9mtck\" (UID: \"9a17d1cc-a29b-464e-a85f-fa5469a7a683\") " pod="watcher-kuttl-default/keystone-bootstrap-9mtck" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.190711 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 18:43:52 
crc kubenswrapper[4821]: I0309 18:43:52.192551 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.194704 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.196052 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.196185 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84vtr\" (UniqueName: \"kubernetes.io/projected/9a17d1cc-a29b-464e-a85f-fa5469a7a683-kube-api-access-84vtr\") pod \"keystone-bootstrap-9mtck\" (UID: \"9a17d1cc-a29b-464e-a85f-fa5469a7a683\") " pod="watcher-kuttl-default/keystone-bootstrap-9mtck" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.219639 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.266754 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e802c6eb-1a02-457f-9abf-daef4328992f-config-data\") pod \"ceilometer-0\" (UID: \"e802c6eb-1a02-457f-9abf-daef4328992f\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.266826 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e802c6eb-1a02-457f-9abf-daef4328992f-run-httpd\") pod \"ceilometer-0\" (UID: \"e802c6eb-1a02-457f-9abf-daef4328992f\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.266892 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e802c6eb-1a02-457f-9abf-daef4328992f-log-httpd\") pod \"ceilometer-0\" (UID: \"e802c6eb-1a02-457f-9abf-daef4328992f\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.266962 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e802c6eb-1a02-457f-9abf-daef4328992f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e802c6eb-1a02-457f-9abf-daef4328992f\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.266981 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e802c6eb-1a02-457f-9abf-daef4328992f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e802c6eb-1a02-457f-9abf-daef4328992f\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.267002 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpqr4\" (UniqueName: \"kubernetes.io/projected/e802c6eb-1a02-457f-9abf-daef4328992f-kube-api-access-kpqr4\") pod \"ceilometer-0\" (UID: \"e802c6eb-1a02-457f-9abf-daef4328992f\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.267066 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e802c6eb-1a02-457f-9abf-daef4328992f-scripts\") pod \"ceilometer-0\" (UID: \"e802c6eb-1a02-457f-9abf-daef4328992f\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.356426 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-9mtck" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.368688 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e802c6eb-1a02-457f-9abf-daef4328992f-run-httpd\") pod \"ceilometer-0\" (UID: \"e802c6eb-1a02-457f-9abf-daef4328992f\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.368924 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e802c6eb-1a02-457f-9abf-daef4328992f-log-httpd\") pod \"ceilometer-0\" (UID: \"e802c6eb-1a02-457f-9abf-daef4328992f\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.368980 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e802c6eb-1a02-457f-9abf-daef4328992f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e802c6eb-1a02-457f-9abf-daef4328992f\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.368996 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e802c6eb-1a02-457f-9abf-daef4328992f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e802c6eb-1a02-457f-9abf-daef4328992f\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.369016 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpqr4\" (UniqueName: \"kubernetes.io/projected/e802c6eb-1a02-457f-9abf-daef4328992f-kube-api-access-kpqr4\") pod \"ceilometer-0\" (UID: \"e802c6eb-1a02-457f-9abf-daef4328992f\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.369066 4821 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e802c6eb-1a02-457f-9abf-daef4328992f-scripts\") pod \"ceilometer-0\" (UID: \"e802c6eb-1a02-457f-9abf-daef4328992f\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.369109 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e802c6eb-1a02-457f-9abf-daef4328992f-config-data\") pod \"ceilometer-0\" (UID: \"e802c6eb-1a02-457f-9abf-daef4328992f\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.369594 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e802c6eb-1a02-457f-9abf-daef4328992f-log-httpd\") pod \"ceilometer-0\" (UID: \"e802c6eb-1a02-457f-9abf-daef4328992f\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.369916 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e802c6eb-1a02-457f-9abf-daef4328992f-run-httpd\") pod \"ceilometer-0\" (UID: \"e802c6eb-1a02-457f-9abf-daef4328992f\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.373359 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e802c6eb-1a02-457f-9abf-daef4328992f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e802c6eb-1a02-457f-9abf-daef4328992f\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.377798 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e802c6eb-1a02-457f-9abf-daef4328992f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"e802c6eb-1a02-457f-9abf-daef4328992f\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.385013 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e802c6eb-1a02-457f-9abf-daef4328992f-scripts\") pod \"ceilometer-0\" (UID: \"e802c6eb-1a02-457f-9abf-daef4328992f\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.386153 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e802c6eb-1a02-457f-9abf-daef4328992f-config-data\") pod \"ceilometer-0\" (UID: \"e802c6eb-1a02-457f-9abf-daef4328992f\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.394127 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpqr4\" (UniqueName: \"kubernetes.io/projected/e802c6eb-1a02-457f-9abf-daef4328992f-kube-api-access-kpqr4\") pod \"ceilometer-0\" (UID: \"e802c6eb-1a02-457f-9abf-daef4328992f\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:43:52 crc kubenswrapper[4821]: I0309 18:43:52.553681 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:43:53 crc kubenswrapper[4821]: I0309 18:43:52.989634 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-9mtck"] Mar 09 18:43:53 crc kubenswrapper[4821]: W0309 18:43:52.991551 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a17d1cc_a29b_464e_a85f_fa5469a7a683.slice/crio-9be37505c31434ba077e44574f6b4f33e442e7b6667c6178a3a7957628cc695b WatchSource:0}: Error finding container 9be37505c31434ba077e44574f6b4f33e442e7b6667c6178a3a7957628cc695b: Status 404 returned error can't find the container with id 9be37505c31434ba077e44574f6b4f33e442e7b6667c6178a3a7957628cc695b Mar 09 18:43:53 crc kubenswrapper[4821]: I0309 18:43:53.772787 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 18:43:53 crc kubenswrapper[4821]: I0309 18:43:53.888449 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e802c6eb-1a02-457f-9abf-daef4328992f","Type":"ContainerStarted","Data":"1c7b28e873a61cf09e334a65dd36e21021add3af9b94f295856aab81578efe6f"} Mar 09 18:43:53 crc kubenswrapper[4821]: I0309 18:43:53.890584 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-9mtck" event={"ID":"9a17d1cc-a29b-464e-a85f-fa5469a7a683","Type":"ContainerStarted","Data":"23a82451109136fa823272bdf003f710ca00199325843e11801d679ed0fb5eb0"} Mar 09 18:43:53 crc kubenswrapper[4821]: I0309 18:43:53.890623 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-9mtck" event={"ID":"9a17d1cc-a29b-464e-a85f-fa5469a7a683","Type":"ContainerStarted","Data":"9be37505c31434ba077e44574f6b4f33e442e7b6667c6178a3a7957628cc695b"} Mar 09 18:43:53 crc kubenswrapper[4821]: I0309 18:43:53.908242 4821 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="watcher-kuttl-default/keystone-bootstrap-9mtck" podStartSLOduration=2.908221844 podStartE2EDuration="2.908221844s" podCreationTimestamp="2026-03-09 18:43:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:43:53.906483047 +0000 UTC m=+1171.067858933" watchObservedRunningTime="2026-03-09 18:43:53.908221844 +0000 UTC m=+1171.069597700" Mar 09 18:43:54 crc kubenswrapper[4821]: I0309 18:43:54.371339 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 18:43:56 crc kubenswrapper[4821]: I0309 18:43:56.933578 4821 generic.go:334] "Generic (PLEG): container finished" podID="9a17d1cc-a29b-464e-a85f-fa5469a7a683" containerID="23a82451109136fa823272bdf003f710ca00199325843e11801d679ed0fb5eb0" exitCode=0 Mar 09 18:43:56 crc kubenswrapper[4821]: I0309 18:43:56.933889 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-9mtck" event={"ID":"9a17d1cc-a29b-464e-a85f-fa5469a7a683","Type":"ContainerDied","Data":"23a82451109136fa823272bdf003f710ca00199325843e11801d679ed0fb5eb0"} Mar 09 18:43:58 crc kubenswrapper[4821]: I0309 18:43:58.265878 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-9mtck" Mar 09 18:43:58 crc kubenswrapper[4821]: I0309 18:43:58.404533 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84vtr\" (UniqueName: \"kubernetes.io/projected/9a17d1cc-a29b-464e-a85f-fa5469a7a683-kube-api-access-84vtr\") pod \"9a17d1cc-a29b-464e-a85f-fa5469a7a683\" (UID: \"9a17d1cc-a29b-464e-a85f-fa5469a7a683\") " Mar 09 18:43:58 crc kubenswrapper[4821]: I0309 18:43:58.404612 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9a17d1cc-a29b-464e-a85f-fa5469a7a683-credential-keys\") pod \"9a17d1cc-a29b-464e-a85f-fa5469a7a683\" (UID: \"9a17d1cc-a29b-464e-a85f-fa5469a7a683\") " Mar 09 18:43:58 crc kubenswrapper[4821]: I0309 18:43:58.404660 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a17d1cc-a29b-464e-a85f-fa5469a7a683-combined-ca-bundle\") pod \"9a17d1cc-a29b-464e-a85f-fa5469a7a683\" (UID: \"9a17d1cc-a29b-464e-a85f-fa5469a7a683\") " Mar 09 18:43:58 crc kubenswrapper[4821]: I0309 18:43:58.404732 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9a17d1cc-a29b-464e-a85f-fa5469a7a683-fernet-keys\") pod \"9a17d1cc-a29b-464e-a85f-fa5469a7a683\" (UID: \"9a17d1cc-a29b-464e-a85f-fa5469a7a683\") " Mar 09 18:43:58 crc kubenswrapper[4821]: I0309 18:43:58.404749 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a17d1cc-a29b-464e-a85f-fa5469a7a683-scripts\") pod \"9a17d1cc-a29b-464e-a85f-fa5469a7a683\" (UID: \"9a17d1cc-a29b-464e-a85f-fa5469a7a683\") " Mar 09 18:43:58 crc kubenswrapper[4821]: I0309 18:43:58.404809 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/9a17d1cc-a29b-464e-a85f-fa5469a7a683-config-data\") pod \"9a17d1cc-a29b-464e-a85f-fa5469a7a683\" (UID: \"9a17d1cc-a29b-464e-a85f-fa5469a7a683\") " Mar 09 18:43:58 crc kubenswrapper[4821]: I0309 18:43:58.409426 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a17d1cc-a29b-464e-a85f-fa5469a7a683-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9a17d1cc-a29b-464e-a85f-fa5469a7a683" (UID: "9a17d1cc-a29b-464e-a85f-fa5469a7a683"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:43:58 crc kubenswrapper[4821]: I0309 18:43:58.410182 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a17d1cc-a29b-464e-a85f-fa5469a7a683-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9a17d1cc-a29b-464e-a85f-fa5469a7a683" (UID: "9a17d1cc-a29b-464e-a85f-fa5469a7a683"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:43:58 crc kubenswrapper[4821]: I0309 18:43:58.410406 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a17d1cc-a29b-464e-a85f-fa5469a7a683-scripts" (OuterVolumeSpecName: "scripts") pod "9a17d1cc-a29b-464e-a85f-fa5469a7a683" (UID: "9a17d1cc-a29b-464e-a85f-fa5469a7a683"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:43:58 crc kubenswrapper[4821]: I0309 18:43:58.410512 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a17d1cc-a29b-464e-a85f-fa5469a7a683-kube-api-access-84vtr" (OuterVolumeSpecName: "kube-api-access-84vtr") pod "9a17d1cc-a29b-464e-a85f-fa5469a7a683" (UID: "9a17d1cc-a29b-464e-a85f-fa5469a7a683"). InnerVolumeSpecName "kube-api-access-84vtr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:43:58 crc kubenswrapper[4821]: I0309 18:43:58.426501 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a17d1cc-a29b-464e-a85f-fa5469a7a683-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a17d1cc-a29b-464e-a85f-fa5469a7a683" (UID: "9a17d1cc-a29b-464e-a85f-fa5469a7a683"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:43:58 crc kubenswrapper[4821]: I0309 18:43:58.426564 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a17d1cc-a29b-464e-a85f-fa5469a7a683-config-data" (OuterVolumeSpecName: "config-data") pod "9a17d1cc-a29b-464e-a85f-fa5469a7a683" (UID: "9a17d1cc-a29b-464e-a85f-fa5469a7a683"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:43:58 crc kubenswrapper[4821]: I0309 18:43:58.506370 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a17d1cc-a29b-464e-a85f-fa5469a7a683-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 18:43:58 crc kubenswrapper[4821]: I0309 18:43:58.506428 4821 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9a17d1cc-a29b-464e-a85f-fa5469a7a683-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 09 18:43:58 crc kubenswrapper[4821]: I0309 18:43:58.506438 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a17d1cc-a29b-464e-a85f-fa5469a7a683-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 18:43:58 crc kubenswrapper[4821]: I0309 18:43:58.506447 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a17d1cc-a29b-464e-a85f-fa5469a7a683-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 18:43:58 crc 
kubenswrapper[4821]: I0309 18:43:58.506456 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84vtr\" (UniqueName: \"kubernetes.io/projected/9a17d1cc-a29b-464e-a85f-fa5469a7a683-kube-api-access-84vtr\") on node \"crc\" DevicePath \"\"" Mar 09 18:43:58 crc kubenswrapper[4821]: I0309 18:43:58.506466 4821 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9a17d1cc-a29b-464e-a85f-fa5469a7a683-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 09 18:43:58 crc kubenswrapper[4821]: I0309 18:43:58.953335 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e802c6eb-1a02-457f-9abf-daef4328992f","Type":"ContainerStarted","Data":"db9d387b7187d9183a5e42b6ad7e33254096d3aa766fefeffd20cdbdaf3ef7d0"} Mar 09 18:43:58 crc kubenswrapper[4821]: I0309 18:43:58.955271 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-9mtck" event={"ID":"9a17d1cc-a29b-464e-a85f-fa5469a7a683","Type":"ContainerDied","Data":"9be37505c31434ba077e44574f6b4f33e442e7b6667c6178a3a7957628cc695b"} Mar 09 18:43:58 crc kubenswrapper[4821]: I0309 18:43:58.955372 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9be37505c31434ba077e44574f6b4f33e442e7b6667c6178a3a7957628cc695b" Mar 09 18:43:58 crc kubenswrapper[4821]: I0309 18:43:58.955481 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-9mtck" Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.030465 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-9mtck"] Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.037857 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-9mtck"] Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.122915 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-vmfkf"] Mar 09 18:43:59 crc kubenswrapper[4821]: E0309 18:43:59.123222 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a17d1cc-a29b-464e-a85f-fa5469a7a683" containerName="keystone-bootstrap" Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.123238 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a17d1cc-a29b-464e-a85f-fa5469a7a683" containerName="keystone-bootstrap" Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.123464 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a17d1cc-a29b-464e-a85f-fa5469a7a683" containerName="keystone-bootstrap" Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.124074 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-vmfkf" Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.128945 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.129143 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.129382 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-rlbmp" Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.129492 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"osp-secret" Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.129498 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.139585 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-vmfkf"] Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.217000 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/683bdf8d-e740-47ae-92b0-cf247536c80d-config-data\") pod \"keystone-bootstrap-vmfkf\" (UID: \"683bdf8d-e740-47ae-92b0-cf247536c80d\") " pod="watcher-kuttl-default/keystone-bootstrap-vmfkf" Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.217059 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/683bdf8d-e740-47ae-92b0-cf247536c80d-combined-ca-bundle\") pod \"keystone-bootstrap-vmfkf\" (UID: \"683bdf8d-e740-47ae-92b0-cf247536c80d\") " pod="watcher-kuttl-default/keystone-bootstrap-vmfkf" Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 
18:43:59.217130 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/683bdf8d-e740-47ae-92b0-cf247536c80d-credential-keys\") pod \"keystone-bootstrap-vmfkf\" (UID: \"683bdf8d-e740-47ae-92b0-cf247536c80d\") " pod="watcher-kuttl-default/keystone-bootstrap-vmfkf" Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.217255 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xspd9\" (UniqueName: \"kubernetes.io/projected/683bdf8d-e740-47ae-92b0-cf247536c80d-kube-api-access-xspd9\") pod \"keystone-bootstrap-vmfkf\" (UID: \"683bdf8d-e740-47ae-92b0-cf247536c80d\") " pod="watcher-kuttl-default/keystone-bootstrap-vmfkf" Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.217299 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/683bdf8d-e740-47ae-92b0-cf247536c80d-fernet-keys\") pod \"keystone-bootstrap-vmfkf\" (UID: \"683bdf8d-e740-47ae-92b0-cf247536c80d\") " pod="watcher-kuttl-default/keystone-bootstrap-vmfkf" Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.217472 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/683bdf8d-e740-47ae-92b0-cf247536c80d-scripts\") pod \"keystone-bootstrap-vmfkf\" (UID: \"683bdf8d-e740-47ae-92b0-cf247536c80d\") " pod="watcher-kuttl-default/keystone-bootstrap-vmfkf" Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.320262 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/683bdf8d-e740-47ae-92b0-cf247536c80d-config-data\") pod \"keystone-bootstrap-vmfkf\" (UID: \"683bdf8d-e740-47ae-92b0-cf247536c80d\") " pod="watcher-kuttl-default/keystone-bootstrap-vmfkf" Mar 09 18:43:59 crc 
kubenswrapper[4821]: I0309 18:43:59.320359 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/683bdf8d-e740-47ae-92b0-cf247536c80d-combined-ca-bundle\") pod \"keystone-bootstrap-vmfkf\" (UID: \"683bdf8d-e740-47ae-92b0-cf247536c80d\") " pod="watcher-kuttl-default/keystone-bootstrap-vmfkf" Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.320428 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/683bdf8d-e740-47ae-92b0-cf247536c80d-credential-keys\") pod \"keystone-bootstrap-vmfkf\" (UID: \"683bdf8d-e740-47ae-92b0-cf247536c80d\") " pod="watcher-kuttl-default/keystone-bootstrap-vmfkf" Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.320506 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xspd9\" (UniqueName: \"kubernetes.io/projected/683bdf8d-e740-47ae-92b0-cf247536c80d-kube-api-access-xspd9\") pod \"keystone-bootstrap-vmfkf\" (UID: \"683bdf8d-e740-47ae-92b0-cf247536c80d\") " pod="watcher-kuttl-default/keystone-bootstrap-vmfkf" Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.320530 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/683bdf8d-e740-47ae-92b0-cf247536c80d-fernet-keys\") pod \"keystone-bootstrap-vmfkf\" (UID: \"683bdf8d-e740-47ae-92b0-cf247536c80d\") " pod="watcher-kuttl-default/keystone-bootstrap-vmfkf" Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.320567 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/683bdf8d-e740-47ae-92b0-cf247536c80d-scripts\") pod \"keystone-bootstrap-vmfkf\" (UID: \"683bdf8d-e740-47ae-92b0-cf247536c80d\") " pod="watcher-kuttl-default/keystone-bootstrap-vmfkf" Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.334807 
4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/683bdf8d-e740-47ae-92b0-cf247536c80d-credential-keys\") pod \"keystone-bootstrap-vmfkf\" (UID: \"683bdf8d-e740-47ae-92b0-cf247536c80d\") " pod="watcher-kuttl-default/keystone-bootstrap-vmfkf" Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.335066 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/683bdf8d-e740-47ae-92b0-cf247536c80d-combined-ca-bundle\") pod \"keystone-bootstrap-vmfkf\" (UID: \"683bdf8d-e740-47ae-92b0-cf247536c80d\") " pod="watcher-kuttl-default/keystone-bootstrap-vmfkf" Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.335110 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/683bdf8d-e740-47ae-92b0-cf247536c80d-scripts\") pod \"keystone-bootstrap-vmfkf\" (UID: \"683bdf8d-e740-47ae-92b0-cf247536c80d\") " pod="watcher-kuttl-default/keystone-bootstrap-vmfkf" Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.335227 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/683bdf8d-e740-47ae-92b0-cf247536c80d-fernet-keys\") pod \"keystone-bootstrap-vmfkf\" (UID: \"683bdf8d-e740-47ae-92b0-cf247536c80d\") " pod="watcher-kuttl-default/keystone-bootstrap-vmfkf" Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.335247 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/683bdf8d-e740-47ae-92b0-cf247536c80d-config-data\") pod \"keystone-bootstrap-vmfkf\" (UID: \"683bdf8d-e740-47ae-92b0-cf247536c80d\") " pod="watcher-kuttl-default/keystone-bootstrap-vmfkf" Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.343202 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xspd9\" 
(UniqueName: \"kubernetes.io/projected/683bdf8d-e740-47ae-92b0-cf247536c80d-kube-api-access-xspd9\") pod \"keystone-bootstrap-vmfkf\" (UID: \"683bdf8d-e740-47ae-92b0-cf247536c80d\") " pod="watcher-kuttl-default/keystone-bootstrap-vmfkf" Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.437137 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-vmfkf" Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.567924 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a17d1cc-a29b-464e-a85f-fa5469a7a683" path="/var/lib/kubelet/pods/9a17d1cc-a29b-464e-a85f-fa5469a7a683/volumes" Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.865859 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-vmfkf"] Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.913295 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.913382 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.913416 4821 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.914355 4821 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"7d710c3d6413f5c12f3ff46fd212f945ef078be160c195d5feeac05d83b7fb9e"} pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 09 18:43:59 crc kubenswrapper[4821]: I0309 18:43:59.914406 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" containerID="cri-o://7d710c3d6413f5c12f3ff46fd212f945ef078be160c195d5feeac05d83b7fb9e" gracePeriod=600 Mar 09 18:44:00 crc kubenswrapper[4821]: E0309 18:44:00.020042 4821 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3270571a_a484_4e66_8035_f43509b58add.slice/crio-7d710c3d6413f5c12f3ff46fd212f945ef078be160c195d5feeac05d83b7fb9e.scope\": RecentStats: unable to find data in memory cache]" Mar 09 18:44:00 crc kubenswrapper[4821]: W0309 18:44:00.024076 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod683bdf8d_e740_47ae_92b0_cf247536c80d.slice/crio-30edf7c30ed5d5e0b86e3a2dfbf6308deb556b4a5f1e2889c440afc6b3ae0d1f WatchSource:0}: Error finding container 30edf7c30ed5d5e0b86e3a2dfbf6308deb556b4a5f1e2889c440afc6b3ae0d1f: Status 404 returned error can't find the container with id 30edf7c30ed5d5e0b86e3a2dfbf6308deb556b4a5f1e2889c440afc6b3ae0d1f Mar 09 18:44:00 crc kubenswrapper[4821]: I0309 18:44:00.140095 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29551364-fpxwg"] Mar 09 18:44:00 crc kubenswrapper[4821]: I0309 18:44:00.141541 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29551364-fpxwg" Mar 09 18:44:00 crc kubenswrapper[4821]: I0309 18:44:00.152506 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551364-fpxwg"] Mar 09 18:44:00 crc kubenswrapper[4821]: I0309 18:44:00.153933 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 09 18:44:00 crc kubenswrapper[4821]: I0309 18:44:00.153961 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-mxq7c" Mar 09 18:44:00 crc kubenswrapper[4821]: I0309 18:44:00.154185 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 09 18:44:00 crc kubenswrapper[4821]: I0309 18:44:00.235702 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr24b\" (UniqueName: \"kubernetes.io/projected/16076fc5-6d60-45a7-a6f7-0110fa46bfa9-kube-api-access-rr24b\") pod \"auto-csr-approver-29551364-fpxwg\" (UID: \"16076fc5-6d60-45a7-a6f7-0110fa46bfa9\") " pod="openshift-infra/auto-csr-approver-29551364-fpxwg" Mar 09 18:44:00 crc kubenswrapper[4821]: I0309 18:44:00.337688 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr24b\" (UniqueName: \"kubernetes.io/projected/16076fc5-6d60-45a7-a6f7-0110fa46bfa9-kube-api-access-rr24b\") pod \"auto-csr-approver-29551364-fpxwg\" (UID: \"16076fc5-6d60-45a7-a6f7-0110fa46bfa9\") " pod="openshift-infra/auto-csr-approver-29551364-fpxwg" Mar 09 18:44:00 crc kubenswrapper[4821]: I0309 18:44:00.368517 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr24b\" (UniqueName: \"kubernetes.io/projected/16076fc5-6d60-45a7-a6f7-0110fa46bfa9-kube-api-access-rr24b\") pod \"auto-csr-approver-29551364-fpxwg\" (UID: \"16076fc5-6d60-45a7-a6f7-0110fa46bfa9\") " 
pod="openshift-infra/auto-csr-approver-29551364-fpxwg" Mar 09 18:44:00 crc kubenswrapper[4821]: I0309 18:44:00.619576 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551364-fpxwg" Mar 09 18:44:00 crc kubenswrapper[4821]: I0309 18:44:00.971402 4821 generic.go:334] "Generic (PLEG): container finished" podID="3270571a-a484-4e66-8035-f43509b58add" containerID="7d710c3d6413f5c12f3ff46fd212f945ef078be160c195d5feeac05d83b7fb9e" exitCode=0 Mar 09 18:44:00 crc kubenswrapper[4821]: I0309 18:44:00.971705 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" event={"ID":"3270571a-a484-4e66-8035-f43509b58add","Type":"ContainerDied","Data":"7d710c3d6413f5c12f3ff46fd212f945ef078be160c195d5feeac05d83b7fb9e"} Mar 09 18:44:00 crc kubenswrapper[4821]: I0309 18:44:00.971730 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" event={"ID":"3270571a-a484-4e66-8035-f43509b58add","Type":"ContainerStarted","Data":"f7abf213239bc2e4cfdfaa92f1f80f4d716d029c1e408736f60b7c4d3d559952"} Mar 09 18:44:00 crc kubenswrapper[4821]: I0309 18:44:00.971744 4821 scope.go:117] "RemoveContainer" containerID="c46da8911503c236934f3f2a2bf1a46aa040191100207d7942fc6bf2c08ce6de" Mar 09 18:44:00 crc kubenswrapper[4821]: I0309 18:44:00.976396 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e802c6eb-1a02-457f-9abf-daef4328992f","Type":"ContainerStarted","Data":"3ca86c842b860925649741c486e0cfa4651db5f7019b15e543e8fe1cf7cd71f9"} Mar 09 18:44:00 crc kubenswrapper[4821]: I0309 18:44:00.978069 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-vmfkf" event={"ID":"683bdf8d-e740-47ae-92b0-cf247536c80d","Type":"ContainerStarted","Data":"d26854201afe367d5084025dc07c8b58b6ef54b7cc6f9187bee5b482c8320949"} Mar 09 
18:44:00 crc kubenswrapper[4821]: I0309 18:44:00.978098 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-vmfkf" event={"ID":"683bdf8d-e740-47ae-92b0-cf247536c80d","Type":"ContainerStarted","Data":"30edf7c30ed5d5e0b86e3a2dfbf6308deb556b4a5f1e2889c440afc6b3ae0d1f"} Mar 09 18:44:01 crc kubenswrapper[4821]: I0309 18:44:01.022699 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-bootstrap-vmfkf" podStartSLOduration=2.02267477 podStartE2EDuration="2.02267477s" podCreationTimestamp="2026-03-09 18:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:44:01.0153138 +0000 UTC m=+1178.176689656" watchObservedRunningTime="2026-03-09 18:44:01.02267477 +0000 UTC m=+1178.184050626" Mar 09 18:44:01 crc kubenswrapper[4821]: I0309 18:44:01.135516 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551364-fpxwg"] Mar 09 18:44:01 crc kubenswrapper[4821]: I0309 18:44:01.999495 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551364-fpxwg" event={"ID":"16076fc5-6d60-45a7-a6f7-0110fa46bfa9","Type":"ContainerStarted","Data":"d657a10df5b57b15d3062aec63e3ee29694fe066cbd54bec1007be16a1329d21"} Mar 09 18:44:03 crc kubenswrapper[4821]: I0309 18:44:03.019667 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551364-fpxwg" event={"ID":"16076fc5-6d60-45a7-a6f7-0110fa46bfa9","Type":"ContainerStarted","Data":"a492dbc866ea3fd3ea0a7835c8cb32f7ec6ca8c6d2f59278edecc78c6abcdbdd"} Mar 09 18:44:03 crc kubenswrapper[4821]: I0309 18:44:03.609669 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29551364-fpxwg" podStartSLOduration=2.332792225 podStartE2EDuration="3.609641319s" 
podCreationTimestamp="2026-03-09 18:44:00 +0000 UTC" firstStartedPulling="2026-03-09 18:44:01.144508484 +0000 UTC m=+1178.305884340" lastFinishedPulling="2026-03-09 18:44:02.421357578 +0000 UTC m=+1179.582733434" observedRunningTime="2026-03-09 18:44:03.044558113 +0000 UTC m=+1180.205933959" watchObservedRunningTime="2026-03-09 18:44:03.609641319 +0000 UTC m=+1180.771017205" Mar 09 18:44:04 crc kubenswrapper[4821]: I0309 18:44:04.038390 4821 generic.go:334] "Generic (PLEG): container finished" podID="683bdf8d-e740-47ae-92b0-cf247536c80d" containerID="d26854201afe367d5084025dc07c8b58b6ef54b7cc6f9187bee5b482c8320949" exitCode=0 Mar 09 18:44:04 crc kubenswrapper[4821]: I0309 18:44:04.038466 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-vmfkf" event={"ID":"683bdf8d-e740-47ae-92b0-cf247536c80d","Type":"ContainerDied","Data":"d26854201afe367d5084025dc07c8b58b6ef54b7cc6f9187bee5b482c8320949"} Mar 09 18:44:04 crc kubenswrapper[4821]: I0309 18:44:04.051491 4821 generic.go:334] "Generic (PLEG): container finished" podID="16076fc5-6d60-45a7-a6f7-0110fa46bfa9" containerID="a492dbc866ea3fd3ea0a7835c8cb32f7ec6ca8c6d2f59278edecc78c6abcdbdd" exitCode=0 Mar 09 18:44:04 crc kubenswrapper[4821]: I0309 18:44:04.051555 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551364-fpxwg" event={"ID":"16076fc5-6d60-45a7-a6f7-0110fa46bfa9","Type":"ContainerDied","Data":"a492dbc866ea3fd3ea0a7835c8cb32f7ec6ca8c6d2f59278edecc78c6abcdbdd"} Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.034650 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-vmfkf" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.039692 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29551364-fpxwg" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.100725 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-vmfkf" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.101096 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-vmfkf" event={"ID":"683bdf8d-e740-47ae-92b0-cf247536c80d","Type":"ContainerDied","Data":"30edf7c30ed5d5e0b86e3a2dfbf6308deb556b4a5f1e2889c440afc6b3ae0d1f"} Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.101125 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30edf7c30ed5d5e0b86e3a2dfbf6308deb556b4a5f1e2889c440afc6b3ae0d1f" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.103897 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551364-fpxwg" event={"ID":"16076fc5-6d60-45a7-a6f7-0110fa46bfa9","Type":"ContainerDied","Data":"d657a10df5b57b15d3062aec63e3ee29694fe066cbd54bec1007be16a1329d21"} Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.103923 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d657a10df5b57b15d3062aec63e3ee29694fe066cbd54bec1007be16a1329d21" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.103966 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29551364-fpxwg" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.144782 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/683bdf8d-e740-47ae-92b0-cf247536c80d-combined-ca-bundle\") pod \"683bdf8d-e740-47ae-92b0-cf247536c80d\" (UID: \"683bdf8d-e740-47ae-92b0-cf247536c80d\") " Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.144872 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/683bdf8d-e740-47ae-92b0-cf247536c80d-fernet-keys\") pod \"683bdf8d-e740-47ae-92b0-cf247536c80d\" (UID: \"683bdf8d-e740-47ae-92b0-cf247536c80d\") " Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.144918 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/683bdf8d-e740-47ae-92b0-cf247536c80d-config-data\") pod \"683bdf8d-e740-47ae-92b0-cf247536c80d\" (UID: \"683bdf8d-e740-47ae-92b0-cf247536c80d\") " Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.144983 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/683bdf8d-e740-47ae-92b0-cf247536c80d-credential-keys\") pod \"683bdf8d-e740-47ae-92b0-cf247536c80d\" (UID: \"683bdf8d-e740-47ae-92b0-cf247536c80d\") " Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.145091 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xspd9\" (UniqueName: \"kubernetes.io/projected/683bdf8d-e740-47ae-92b0-cf247536c80d-kube-api-access-xspd9\") pod \"683bdf8d-e740-47ae-92b0-cf247536c80d\" (UID: \"683bdf8d-e740-47ae-92b0-cf247536c80d\") " Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.145128 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-rr24b\" (UniqueName: \"kubernetes.io/projected/16076fc5-6d60-45a7-a6f7-0110fa46bfa9-kube-api-access-rr24b\") pod \"16076fc5-6d60-45a7-a6f7-0110fa46bfa9\" (UID: \"16076fc5-6d60-45a7-a6f7-0110fa46bfa9\") " Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.145163 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/683bdf8d-e740-47ae-92b0-cf247536c80d-scripts\") pod \"683bdf8d-e740-47ae-92b0-cf247536c80d\" (UID: \"683bdf8d-e740-47ae-92b0-cf247536c80d\") " Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.154081 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/683bdf8d-e740-47ae-92b0-cf247536c80d-scripts" (OuterVolumeSpecName: "scripts") pod "683bdf8d-e740-47ae-92b0-cf247536c80d" (UID: "683bdf8d-e740-47ae-92b0-cf247536c80d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.154588 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/683bdf8d-e740-47ae-92b0-cf247536c80d-kube-api-access-xspd9" (OuterVolumeSpecName: "kube-api-access-xspd9") pod "683bdf8d-e740-47ae-92b0-cf247536c80d" (UID: "683bdf8d-e740-47ae-92b0-cf247536c80d"). InnerVolumeSpecName "kube-api-access-xspd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.160637 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16076fc5-6d60-45a7-a6f7-0110fa46bfa9-kube-api-access-rr24b" (OuterVolumeSpecName: "kube-api-access-rr24b") pod "16076fc5-6d60-45a7-a6f7-0110fa46bfa9" (UID: "16076fc5-6d60-45a7-a6f7-0110fa46bfa9"). InnerVolumeSpecName "kube-api-access-rr24b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.162595 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/683bdf8d-e740-47ae-92b0-cf247536c80d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "683bdf8d-e740-47ae-92b0-cf247536c80d" (UID: "683bdf8d-e740-47ae-92b0-cf247536c80d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.166555 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/683bdf8d-e740-47ae-92b0-cf247536c80d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "683bdf8d-e740-47ae-92b0-cf247536c80d" (UID: "683bdf8d-e740-47ae-92b0-cf247536c80d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.176918 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-7774c4794c-f24tn"] Mar 09 18:44:06 crc kubenswrapper[4821]: E0309 18:44:06.177283 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="683bdf8d-e740-47ae-92b0-cf247536c80d" containerName="keystone-bootstrap" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.177300 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="683bdf8d-e740-47ae-92b0-cf247536c80d" containerName="keystone-bootstrap" Mar 09 18:44:06 crc kubenswrapper[4821]: E0309 18:44:06.177341 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16076fc5-6d60-45a7-a6f7-0110fa46bfa9" containerName="oc" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.177349 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="16076fc5-6d60-45a7-a6f7-0110fa46bfa9" containerName="oc" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.177482 4821 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="16076fc5-6d60-45a7-a6f7-0110fa46bfa9" containerName="oc" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.177502 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="683bdf8d-e740-47ae-92b0-cf247536c80d" containerName="keystone-bootstrap" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.178113 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-7774c4794c-f24tn" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.183154 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-keystone-internal-svc" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.184108 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-keystone-public-svc" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.190541 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-7774c4794c-f24tn"] Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.191773 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/683bdf8d-e740-47ae-92b0-cf247536c80d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "683bdf8d-e740-47ae-92b0-cf247536c80d" (UID: "683bdf8d-e740-47ae-92b0-cf247536c80d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.201755 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/683bdf8d-e740-47ae-92b0-cf247536c80d-config-data" (OuterVolumeSpecName: "config-data") pod "683bdf8d-e740-47ae-92b0-cf247536c80d" (UID: "683bdf8d-e740-47ae-92b0-cf247536c80d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.250035 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-public-tls-certs\") pod \"keystone-7774c4794c-f24tn\" (UID: \"486686dc-8137-45ed-a509-0f5d3ade5ffb\") " pod="watcher-kuttl-default/keystone-7774c4794c-f24tn" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.250085 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ssk8\" (UniqueName: \"kubernetes.io/projected/486686dc-8137-45ed-a509-0f5d3ade5ffb-kube-api-access-4ssk8\") pod \"keystone-7774c4794c-f24tn\" (UID: \"486686dc-8137-45ed-a509-0f5d3ade5ffb\") " pod="watcher-kuttl-default/keystone-7774c4794c-f24tn" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.250115 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-internal-tls-certs\") pod \"keystone-7774c4794c-f24tn\" (UID: \"486686dc-8137-45ed-a509-0f5d3ade5ffb\") " pod="watcher-kuttl-default/keystone-7774c4794c-f24tn" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.250142 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-fernet-keys\") pod \"keystone-7774c4794c-f24tn\" (UID: \"486686dc-8137-45ed-a509-0f5d3ade5ffb\") " pod="watcher-kuttl-default/keystone-7774c4794c-f24tn" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.250198 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-combined-ca-bundle\") pod \"keystone-7774c4794c-f24tn\" (UID: \"486686dc-8137-45ed-a509-0f5d3ade5ffb\") " pod="watcher-kuttl-default/keystone-7774c4794c-f24tn" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.250233 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-scripts\") pod \"keystone-7774c4794c-f24tn\" (UID: \"486686dc-8137-45ed-a509-0f5d3ade5ffb\") " pod="watcher-kuttl-default/keystone-7774c4794c-f24tn" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.250266 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-credential-keys\") pod \"keystone-7774c4794c-f24tn\" (UID: \"486686dc-8137-45ed-a509-0f5d3ade5ffb\") " pod="watcher-kuttl-default/keystone-7774c4794c-f24tn" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.250297 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-config-data\") pod \"keystone-7774c4794c-f24tn\" (UID: \"486686dc-8137-45ed-a509-0f5d3ade5ffb\") " pod="watcher-kuttl-default/keystone-7774c4794c-f24tn" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.250374 4821 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/683bdf8d-e740-47ae-92b0-cf247536c80d-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.250386 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/683bdf8d-e740-47ae-92b0-cf247536c80d-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 
18:44:06.250394 4821 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/683bdf8d-e740-47ae-92b0-cf247536c80d-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.250406 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xspd9\" (UniqueName: \"kubernetes.io/projected/683bdf8d-e740-47ae-92b0-cf247536c80d-kube-api-access-xspd9\") on node \"crc\" DevicePath \"\"" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.250417 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rr24b\" (UniqueName: \"kubernetes.io/projected/16076fc5-6d60-45a7-a6f7-0110fa46bfa9-kube-api-access-rr24b\") on node \"crc\" DevicePath \"\"" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.250424 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/683bdf8d-e740-47ae-92b0-cf247536c80d-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.250468 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/683bdf8d-e740-47ae-92b0-cf247536c80d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.251166 4821 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.351978 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-internal-tls-certs\") pod \"keystone-7774c4794c-f24tn\" (UID: \"486686dc-8137-45ed-a509-0f5d3ade5ffb\") " pod="watcher-kuttl-default/keystone-7774c4794c-f24tn" Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.352345 4821 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-fernet-keys\") pod \"keystone-7774c4794c-f24tn\" (UID: \"486686dc-8137-45ed-a509-0f5d3ade5ffb\") " pod="watcher-kuttl-default/keystone-7774c4794c-f24tn"
Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.352543 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-combined-ca-bundle\") pod \"keystone-7774c4794c-f24tn\" (UID: \"486686dc-8137-45ed-a509-0f5d3ade5ffb\") " pod="watcher-kuttl-default/keystone-7774c4794c-f24tn"
Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.352717 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-scripts\") pod \"keystone-7774c4794c-f24tn\" (UID: \"486686dc-8137-45ed-a509-0f5d3ade5ffb\") " pod="watcher-kuttl-default/keystone-7774c4794c-f24tn"
Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.352845 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-credential-keys\") pod \"keystone-7774c4794c-f24tn\" (UID: \"486686dc-8137-45ed-a509-0f5d3ade5ffb\") " pod="watcher-kuttl-default/keystone-7774c4794c-f24tn"
Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.352985 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-config-data\") pod \"keystone-7774c4794c-f24tn\" (UID: \"486686dc-8137-45ed-a509-0f5d3ade5ffb\") " pod="watcher-kuttl-default/keystone-7774c4794c-f24tn"
Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.353103 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-public-tls-certs\") pod \"keystone-7774c4794c-f24tn\" (UID: \"486686dc-8137-45ed-a509-0f5d3ade5ffb\") " pod="watcher-kuttl-default/keystone-7774c4794c-f24tn"
Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.353190 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ssk8\" (UniqueName: \"kubernetes.io/projected/486686dc-8137-45ed-a509-0f5d3ade5ffb-kube-api-access-4ssk8\") pod \"keystone-7774c4794c-f24tn\" (UID: \"486686dc-8137-45ed-a509-0f5d3ade5ffb\") " pod="watcher-kuttl-default/keystone-7774c4794c-f24tn"
Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.356250 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-internal-tls-certs\") pod \"keystone-7774c4794c-f24tn\" (UID: \"486686dc-8137-45ed-a509-0f5d3ade5ffb\") " pod="watcher-kuttl-default/keystone-7774c4794c-f24tn"
Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.356280 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-config-data\") pod \"keystone-7774c4794c-f24tn\" (UID: \"486686dc-8137-45ed-a509-0f5d3ade5ffb\") " pod="watcher-kuttl-default/keystone-7774c4794c-f24tn"
Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.356338 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-fernet-keys\") pod \"keystone-7774c4794c-f24tn\" (UID: \"486686dc-8137-45ed-a509-0f5d3ade5ffb\") " pod="watcher-kuttl-default/keystone-7774c4794c-f24tn"
Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.356614 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-scripts\") pod \"keystone-7774c4794c-f24tn\" (UID: \"486686dc-8137-45ed-a509-0f5d3ade5ffb\") " pod="watcher-kuttl-default/keystone-7774c4794c-f24tn"
Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.360382 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-public-tls-certs\") pod \"keystone-7774c4794c-f24tn\" (UID: \"486686dc-8137-45ed-a509-0f5d3ade5ffb\") " pod="watcher-kuttl-default/keystone-7774c4794c-f24tn"
Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.360378 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-credential-keys\") pod \"keystone-7774c4794c-f24tn\" (UID: \"486686dc-8137-45ed-a509-0f5d3ade5ffb\") " pod="watcher-kuttl-default/keystone-7774c4794c-f24tn"
Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.360660 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-combined-ca-bundle\") pod \"keystone-7774c4794c-f24tn\" (UID: \"486686dc-8137-45ed-a509-0f5d3ade5ffb\") " pod="watcher-kuttl-default/keystone-7774c4794c-f24tn"
Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.369737 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ssk8\" (UniqueName: \"kubernetes.io/projected/486686dc-8137-45ed-a509-0f5d3ade5ffb-kube-api-access-4ssk8\") pod \"keystone-7774c4794c-f24tn\" (UID: \"486686dc-8137-45ed-a509-0f5d3ade5ffb\") " pod="watcher-kuttl-default/keystone-7774c4794c-f24tn"
Mar 09 18:44:06 crc kubenswrapper[4821]: I0309 18:44:06.511615 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-7774c4794c-f24tn"
Mar 09 18:44:07 crc kubenswrapper[4821]: I0309 18:44:07.005424 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-7774c4794c-f24tn"]
Mar 09 18:44:07 crc kubenswrapper[4821]: I0309 18:44:07.127707 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-7774c4794c-f24tn" event={"ID":"486686dc-8137-45ed-a509-0f5d3ade5ffb","Type":"ContainerStarted","Data":"95933fa783de2f9ce4f19f650eb21adcb07b24a0caadab4853535eae2d7653bd"}
Mar 09 18:44:07 crc kubenswrapper[4821]: I0309 18:44:07.131297 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e802c6eb-1a02-457f-9abf-daef4328992f","Type":"ContainerStarted","Data":"5e085a7fdf474fa5fa90b2b4b13104ec162c088b652b1d7d861fcb55ac6eebc0"}
Mar 09 18:44:07 crc kubenswrapper[4821]: I0309 18:44:07.146070 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29551358-sb9b7"]
Mar 09 18:44:07 crc kubenswrapper[4821]: I0309 18:44:07.154156 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29551358-sb9b7"]
Mar 09 18:44:07 crc kubenswrapper[4821]: I0309 18:44:07.563659 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53bcb16a-e06a-4552-aa19-dca354931cee" path="/var/lib/kubelet/pods/53bcb16a-e06a-4552-aa19-dca354931cee/volumes"
Mar 09 18:44:08 crc kubenswrapper[4821]: I0309 18:44:08.140067 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-7774c4794c-f24tn" event={"ID":"486686dc-8137-45ed-a509-0f5d3ade5ffb","Type":"ContainerStarted","Data":"8663a3f5abbc1899a5bfbe491dab9a54584c9dc0644ef4982f41b04720217cea"}
Mar 09 18:44:08 crc kubenswrapper[4821]: I0309 18:44:08.141007 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/keystone-7774c4794c-f24tn"
Mar 09 18:44:16 crc kubenswrapper[4821]: I0309 18:44:16.217048 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e802c6eb-1a02-457f-9abf-daef4328992f","Type":"ContainerStarted","Data":"e8f21705d1444bb69c0acc356e3e5d8ce52f5fd3651af542bfb3bc96b1c21174"}
Mar 09 18:44:16 crc kubenswrapper[4821]: I0309 18:44:16.217938 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:16 crc kubenswrapper[4821]: I0309 18:44:16.217222 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="e802c6eb-1a02-457f-9abf-daef4328992f" containerName="proxy-httpd" containerID="cri-o://e8f21705d1444bb69c0acc356e3e5d8ce52f5fd3651af542bfb3bc96b1c21174" gracePeriod=30
Mar 09 18:44:16 crc kubenswrapper[4821]: I0309 18:44:16.217240 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="e802c6eb-1a02-457f-9abf-daef4328992f" containerName="sg-core" containerID="cri-o://5e085a7fdf474fa5fa90b2b4b13104ec162c088b652b1d7d861fcb55ac6eebc0" gracePeriod=30
Mar 09 18:44:16 crc kubenswrapper[4821]: I0309 18:44:16.217253 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="e802c6eb-1a02-457f-9abf-daef4328992f" containerName="ceilometer-notification-agent" containerID="cri-o://3ca86c842b860925649741c486e0cfa4651db5f7019b15e543e8fe1cf7cd71f9" gracePeriod=30
Mar 09 18:44:16 crc kubenswrapper[4821]: I0309 18:44:16.217171 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="e802c6eb-1a02-457f-9abf-daef4328992f" containerName="ceilometer-central-agent" containerID="cri-o://db9d387b7187d9183a5e42b6ad7e33254096d3aa766fefeffd20cdbdaf3ef7d0" gracePeriod=30
Mar 09 18:44:16 crc kubenswrapper[4821]: I0309 18:44:16.249664 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.6046697439999997 podStartE2EDuration="24.249638603s" podCreationTimestamp="2026-03-09 18:43:52 +0000 UTC" firstStartedPulling="2026-03-09 18:43:53.77573364 +0000 UTC m=+1170.937109496" lastFinishedPulling="2026-03-09 18:44:15.420702499 +0000 UTC m=+1192.582078355" observedRunningTime="2026-03-09 18:44:16.244254447 +0000 UTC m=+1193.405630313" watchObservedRunningTime="2026-03-09 18:44:16.249638603 +0000 UTC m=+1193.411014459"
Mar 09 18:44:16 crc kubenswrapper[4821]: I0309 18:44:16.253272 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-7774c4794c-f24tn" podStartSLOduration=10.253256432 podStartE2EDuration="10.253256432s" podCreationTimestamp="2026-03-09 18:44:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:44:08.167800807 +0000 UTC m=+1185.329176693" watchObservedRunningTime="2026-03-09 18:44:16.253256432 +0000 UTC m=+1193.414632288"
Mar 09 18:44:17 crc kubenswrapper[4821]: I0309 18:44:17.230595 4821 generic.go:334] "Generic (PLEG): container finished" podID="e802c6eb-1a02-457f-9abf-daef4328992f" containerID="e8f21705d1444bb69c0acc356e3e5d8ce52f5fd3651af542bfb3bc96b1c21174" exitCode=0
Mar 09 18:44:17 crc kubenswrapper[4821]: I0309 18:44:17.230879 4821 generic.go:334] "Generic (PLEG): container finished" podID="e802c6eb-1a02-457f-9abf-daef4328992f" containerID="5e085a7fdf474fa5fa90b2b4b13104ec162c088b652b1d7d861fcb55ac6eebc0" exitCode=2
Mar 09 18:44:17 crc kubenswrapper[4821]: I0309 18:44:17.230887 4821 generic.go:334] "Generic (PLEG): container finished" podID="e802c6eb-1a02-457f-9abf-daef4328992f" containerID="db9d387b7187d9183a5e42b6ad7e33254096d3aa766fefeffd20cdbdaf3ef7d0" exitCode=0
Mar 09 18:44:17 crc kubenswrapper[4821]: I0309 18:44:17.230684 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e802c6eb-1a02-457f-9abf-daef4328992f","Type":"ContainerDied","Data":"e8f21705d1444bb69c0acc356e3e5d8ce52f5fd3651af542bfb3bc96b1c21174"}
Mar 09 18:44:17 crc kubenswrapper[4821]: I0309 18:44:17.230926 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e802c6eb-1a02-457f-9abf-daef4328992f","Type":"ContainerDied","Data":"5e085a7fdf474fa5fa90b2b4b13104ec162c088b652b1d7d861fcb55ac6eebc0"}
Mar 09 18:44:17 crc kubenswrapper[4821]: I0309 18:44:17.230941 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e802c6eb-1a02-457f-9abf-daef4328992f","Type":"ContainerDied","Data":"db9d387b7187d9183a5e42b6ad7e33254096d3aa766fefeffd20cdbdaf3ef7d0"}
Mar 09 18:44:17 crc kubenswrapper[4821]: I0309 18:44:17.616520 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:17 crc kubenswrapper[4821]: I0309 18:44:17.759825 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e802c6eb-1a02-457f-9abf-daef4328992f-config-data\") pod \"e802c6eb-1a02-457f-9abf-daef4328992f\" (UID: \"e802c6eb-1a02-457f-9abf-daef4328992f\") "
Mar 09 18:44:17 crc kubenswrapper[4821]: I0309 18:44:17.759919 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e802c6eb-1a02-457f-9abf-daef4328992f-log-httpd\") pod \"e802c6eb-1a02-457f-9abf-daef4328992f\" (UID: \"e802c6eb-1a02-457f-9abf-daef4328992f\") "
Mar 09 18:44:17 crc kubenswrapper[4821]: I0309 18:44:17.759962 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e802c6eb-1a02-457f-9abf-daef4328992f-run-httpd\") pod \"e802c6eb-1a02-457f-9abf-daef4328992f\" (UID: \"e802c6eb-1a02-457f-9abf-daef4328992f\") "
Mar 09 18:44:17 crc kubenswrapper[4821]: I0309 18:44:17.760019 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e802c6eb-1a02-457f-9abf-daef4328992f-sg-core-conf-yaml\") pod \"e802c6eb-1a02-457f-9abf-daef4328992f\" (UID: \"e802c6eb-1a02-457f-9abf-daef4328992f\") "
Mar 09 18:44:17 crc kubenswrapper[4821]: I0309 18:44:17.760056 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e802c6eb-1a02-457f-9abf-daef4328992f-combined-ca-bundle\") pod \"e802c6eb-1a02-457f-9abf-daef4328992f\" (UID: \"e802c6eb-1a02-457f-9abf-daef4328992f\") "
Mar 09 18:44:17 crc kubenswrapper[4821]: I0309 18:44:17.760104 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpqr4\" (UniqueName: \"kubernetes.io/projected/e802c6eb-1a02-457f-9abf-daef4328992f-kube-api-access-kpqr4\") pod \"e802c6eb-1a02-457f-9abf-daef4328992f\" (UID: \"e802c6eb-1a02-457f-9abf-daef4328992f\") "
Mar 09 18:44:17 crc kubenswrapper[4821]: I0309 18:44:17.760130 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e802c6eb-1a02-457f-9abf-daef4328992f-scripts\") pod \"e802c6eb-1a02-457f-9abf-daef4328992f\" (UID: \"e802c6eb-1a02-457f-9abf-daef4328992f\") "
Mar 09 18:44:17 crc kubenswrapper[4821]: I0309 18:44:17.768904 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e802c6eb-1a02-457f-9abf-daef4328992f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e802c6eb-1a02-457f-9abf-daef4328992f" (UID: "e802c6eb-1a02-457f-9abf-daef4328992f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 18:44:17 crc kubenswrapper[4821]: I0309 18:44:17.769831 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e802c6eb-1a02-457f-9abf-daef4328992f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e802c6eb-1a02-457f-9abf-daef4328992f" (UID: "e802c6eb-1a02-457f-9abf-daef4328992f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 18:44:17 crc kubenswrapper[4821]: I0309 18:44:17.799950 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e802c6eb-1a02-457f-9abf-daef4328992f-scripts" (OuterVolumeSpecName: "scripts") pod "e802c6eb-1a02-457f-9abf-daef4328992f" (UID: "e802c6eb-1a02-457f-9abf-daef4328992f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 18:44:17 crc kubenswrapper[4821]: I0309 18:44:17.800472 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e802c6eb-1a02-457f-9abf-daef4328992f-kube-api-access-kpqr4" (OuterVolumeSpecName: "kube-api-access-kpqr4") pod "e802c6eb-1a02-457f-9abf-daef4328992f" (UID: "e802c6eb-1a02-457f-9abf-daef4328992f"). InnerVolumeSpecName "kube-api-access-kpqr4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:44:17 crc kubenswrapper[4821]: I0309 18:44:17.803528 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e802c6eb-1a02-457f-9abf-daef4328992f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e802c6eb-1a02-457f-9abf-daef4328992f" (UID: "e802c6eb-1a02-457f-9abf-daef4328992f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 18:44:17 crc kubenswrapper[4821]: I0309 18:44:17.865470 4821 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e802c6eb-1a02-457f-9abf-daef4328992f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Mar 09 18:44:17 crc kubenswrapper[4821]: I0309 18:44:17.865511 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpqr4\" (UniqueName: \"kubernetes.io/projected/e802c6eb-1a02-457f-9abf-daef4328992f-kube-api-access-kpqr4\") on node \"crc\" DevicePath \"\""
Mar 09 18:44:17 crc kubenswrapper[4821]: I0309 18:44:17.865525 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e802c6eb-1a02-457f-9abf-daef4328992f-scripts\") on node \"crc\" DevicePath \"\""
Mar 09 18:44:17 crc kubenswrapper[4821]: I0309 18:44:17.865535 4821 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e802c6eb-1a02-457f-9abf-daef4328992f-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 09 18:44:17 crc kubenswrapper[4821]: I0309 18:44:17.865545 4821 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e802c6eb-1a02-457f-9abf-daef4328992f-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 09 18:44:17 crc kubenswrapper[4821]: I0309 18:44:17.904609 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e802c6eb-1a02-457f-9abf-daef4328992f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e802c6eb-1a02-457f-9abf-daef4328992f" (UID: "e802c6eb-1a02-457f-9abf-daef4328992f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 18:44:17 crc kubenswrapper[4821]: I0309 18:44:17.928403 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e802c6eb-1a02-457f-9abf-daef4328992f-config-data" (OuterVolumeSpecName: "config-data") pod "e802c6eb-1a02-457f-9abf-daef4328992f" (UID: "e802c6eb-1a02-457f-9abf-daef4328992f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 18:44:17 crc kubenswrapper[4821]: I0309 18:44:17.966730 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e802c6eb-1a02-457f-9abf-daef4328992f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 18:44:17 crc kubenswrapper[4821]: I0309 18:44:17.966766 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e802c6eb-1a02-457f-9abf-daef4328992f-config-data\") on node \"crc\" DevicePath \"\""
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.242526 4821 generic.go:334] "Generic (PLEG): container finished" podID="e802c6eb-1a02-457f-9abf-daef4328992f" containerID="3ca86c842b860925649741c486e0cfa4651db5f7019b15e543e8fe1cf7cd71f9" exitCode=0
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.242588 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e802c6eb-1a02-457f-9abf-daef4328992f","Type":"ContainerDied","Data":"3ca86c842b860925649741c486e0cfa4651db5f7019b15e543e8fe1cf7cd71f9"}
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.244213 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e802c6eb-1a02-457f-9abf-daef4328992f","Type":"ContainerDied","Data":"1c7b28e873a61cf09e334a65dd36e21021add3af9b94f295856aab81578efe6f"}
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.242610 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.244239 4821 scope.go:117] "RemoveContainer" containerID="e8f21705d1444bb69c0acc356e3e5d8ce52f5fd3651af542bfb3bc96b1c21174"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.272525 4821 scope.go:117] "RemoveContainer" containerID="5e085a7fdf474fa5fa90b2b4b13104ec162c088b652b1d7d861fcb55ac6eebc0"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.287058 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.301437 4821 scope.go:117] "RemoveContainer" containerID="3ca86c842b860925649741c486e0cfa4651db5f7019b15e543e8fe1cf7cd71f9"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.302633 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.318429 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 18:44:18 crc kubenswrapper[4821]: E0309 18:44:18.318762 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e802c6eb-1a02-457f-9abf-daef4328992f" containerName="sg-core"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.318778 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="e802c6eb-1a02-457f-9abf-daef4328992f" containerName="sg-core"
Mar 09 18:44:18 crc kubenswrapper[4821]: E0309 18:44:18.318786 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e802c6eb-1a02-457f-9abf-daef4328992f" containerName="proxy-httpd"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.318792 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="e802c6eb-1a02-457f-9abf-daef4328992f" containerName="proxy-httpd"
Mar 09 18:44:18 crc kubenswrapper[4821]: E0309 18:44:18.318805 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e802c6eb-1a02-457f-9abf-daef4328992f" containerName="ceilometer-central-agent"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.318811 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="e802c6eb-1a02-457f-9abf-daef4328992f" containerName="ceilometer-central-agent"
Mar 09 18:44:18 crc kubenswrapper[4821]: E0309 18:44:18.318827 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e802c6eb-1a02-457f-9abf-daef4328992f" containerName="ceilometer-notification-agent"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.318833 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="e802c6eb-1a02-457f-9abf-daef4328992f" containerName="ceilometer-notification-agent"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.318992 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="e802c6eb-1a02-457f-9abf-daef4328992f" containerName="ceilometer-notification-agent"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.319005 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="e802c6eb-1a02-457f-9abf-daef4328992f" containerName="sg-core"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.319015 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="e802c6eb-1a02-457f-9abf-daef4328992f" containerName="proxy-httpd"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.319021 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="e802c6eb-1a02-457f-9abf-daef4328992f" containerName="ceilometer-central-agent"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.320393 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.329078 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.329372 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.339232 4821 scope.go:117] "RemoveContainer" containerID="db9d387b7187d9183a5e42b6ad7e33254096d3aa766fefeffd20cdbdaf3ef7d0"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.354795 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.370804 4821 scope.go:117] "RemoveContainer" containerID="e8f21705d1444bb69c0acc356e3e5d8ce52f5fd3651af542bfb3bc96b1c21174"
Mar 09 18:44:18 crc kubenswrapper[4821]: E0309 18:44:18.371281 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8f21705d1444bb69c0acc356e3e5d8ce52f5fd3651af542bfb3bc96b1c21174\": container with ID starting with e8f21705d1444bb69c0acc356e3e5d8ce52f5fd3651af542bfb3bc96b1c21174 not found: ID does not exist" containerID="e8f21705d1444bb69c0acc356e3e5d8ce52f5fd3651af542bfb3bc96b1c21174"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.371310 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8f21705d1444bb69c0acc356e3e5d8ce52f5fd3651af542bfb3bc96b1c21174"} err="failed to get container status \"e8f21705d1444bb69c0acc356e3e5d8ce52f5fd3651af542bfb3bc96b1c21174\": rpc error: code = NotFound desc = could not find container \"e8f21705d1444bb69c0acc356e3e5d8ce52f5fd3651af542bfb3bc96b1c21174\": container with ID starting with e8f21705d1444bb69c0acc356e3e5d8ce52f5fd3651af542bfb3bc96b1c21174 not found: ID does not exist"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.371346 4821 scope.go:117] "RemoveContainer" containerID="5e085a7fdf474fa5fa90b2b4b13104ec162c088b652b1d7d861fcb55ac6eebc0"
Mar 09 18:44:18 crc kubenswrapper[4821]: E0309 18:44:18.372600 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e085a7fdf474fa5fa90b2b4b13104ec162c088b652b1d7d861fcb55ac6eebc0\": container with ID starting with 5e085a7fdf474fa5fa90b2b4b13104ec162c088b652b1d7d861fcb55ac6eebc0 not found: ID does not exist" containerID="5e085a7fdf474fa5fa90b2b4b13104ec162c088b652b1d7d861fcb55ac6eebc0"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.372629 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e085a7fdf474fa5fa90b2b4b13104ec162c088b652b1d7d861fcb55ac6eebc0"} err="failed to get container status \"5e085a7fdf474fa5fa90b2b4b13104ec162c088b652b1d7d861fcb55ac6eebc0\": rpc error: code = NotFound desc = could not find container \"5e085a7fdf474fa5fa90b2b4b13104ec162c088b652b1d7d861fcb55ac6eebc0\": container with ID starting with 5e085a7fdf474fa5fa90b2b4b13104ec162c088b652b1d7d861fcb55ac6eebc0 not found: ID does not exist"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.372650 4821 scope.go:117] "RemoveContainer" containerID="3ca86c842b860925649741c486e0cfa4651db5f7019b15e543e8fe1cf7cd71f9"
Mar 09 18:44:18 crc kubenswrapper[4821]: E0309 18:44:18.372999 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ca86c842b860925649741c486e0cfa4651db5f7019b15e543e8fe1cf7cd71f9\": container with ID starting with 3ca86c842b860925649741c486e0cfa4651db5f7019b15e543e8fe1cf7cd71f9 not found: ID does not exist" containerID="3ca86c842b860925649741c486e0cfa4651db5f7019b15e543e8fe1cf7cd71f9"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.373024 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ca86c842b860925649741c486e0cfa4651db5f7019b15e543e8fe1cf7cd71f9"} err="failed to get container status \"3ca86c842b860925649741c486e0cfa4651db5f7019b15e543e8fe1cf7cd71f9\": rpc error: code = NotFound desc = could not find container \"3ca86c842b860925649741c486e0cfa4651db5f7019b15e543e8fe1cf7cd71f9\": container with ID starting with 3ca86c842b860925649741c486e0cfa4651db5f7019b15e543e8fe1cf7cd71f9 not found: ID does not exist"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.373043 4821 scope.go:117] "RemoveContainer" containerID="db9d387b7187d9183a5e42b6ad7e33254096d3aa766fefeffd20cdbdaf3ef7d0"
Mar 09 18:44:18 crc kubenswrapper[4821]: E0309 18:44:18.373460 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db9d387b7187d9183a5e42b6ad7e33254096d3aa766fefeffd20cdbdaf3ef7d0\": container with ID starting with db9d387b7187d9183a5e42b6ad7e33254096d3aa766fefeffd20cdbdaf3ef7d0 not found: ID does not exist" containerID="db9d387b7187d9183a5e42b6ad7e33254096d3aa766fefeffd20cdbdaf3ef7d0"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.373483 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db9d387b7187d9183a5e42b6ad7e33254096d3aa766fefeffd20cdbdaf3ef7d0"} err="failed to get container status \"db9d387b7187d9183a5e42b6ad7e33254096d3aa766fefeffd20cdbdaf3ef7d0\": rpc error: code = NotFound desc = could not find container \"db9d387b7187d9183a5e42b6ad7e33254096d3aa766fefeffd20cdbdaf3ef7d0\": container with ID starting with db9d387b7187d9183a5e42b6ad7e33254096d3aa766fefeffd20cdbdaf3ef7d0 not found: ID does not exist"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.473725 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3b18ef-cfbc-4922-8591-72fd4283229a-config-data\") pod \"ceilometer-0\" (UID: \"dc3b18ef-cfbc-4922-8591-72fd4283229a\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.473805 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc3b18ef-cfbc-4922-8591-72fd4283229a-scripts\") pod \"ceilometer-0\" (UID: \"dc3b18ef-cfbc-4922-8591-72fd4283229a\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.473841 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc3b18ef-cfbc-4922-8591-72fd4283229a-log-httpd\") pod \"ceilometer-0\" (UID: \"dc3b18ef-cfbc-4922-8591-72fd4283229a\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.473889 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfp79\" (UniqueName: \"kubernetes.io/projected/dc3b18ef-cfbc-4922-8591-72fd4283229a-kube-api-access-kfp79\") pod \"ceilometer-0\" (UID: \"dc3b18ef-cfbc-4922-8591-72fd4283229a\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.473914 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3b18ef-cfbc-4922-8591-72fd4283229a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc3b18ef-cfbc-4922-8591-72fd4283229a\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.473937 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc3b18ef-cfbc-4922-8591-72fd4283229a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc3b18ef-cfbc-4922-8591-72fd4283229a\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.473957 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc3b18ef-cfbc-4922-8591-72fd4283229a-run-httpd\") pod \"ceilometer-0\" (UID: \"dc3b18ef-cfbc-4922-8591-72fd4283229a\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.575115 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfp79\" (UniqueName: \"kubernetes.io/projected/dc3b18ef-cfbc-4922-8591-72fd4283229a-kube-api-access-kfp79\") pod \"ceilometer-0\" (UID: \"dc3b18ef-cfbc-4922-8591-72fd4283229a\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.575160 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3b18ef-cfbc-4922-8591-72fd4283229a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc3b18ef-cfbc-4922-8591-72fd4283229a\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.575186 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc3b18ef-cfbc-4922-8591-72fd4283229a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc3b18ef-cfbc-4922-8591-72fd4283229a\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.575204 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc3b18ef-cfbc-4922-8591-72fd4283229a-run-httpd\") pod \"ceilometer-0\" (UID: \"dc3b18ef-cfbc-4922-8591-72fd4283229a\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.575281 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3b18ef-cfbc-4922-8591-72fd4283229a-config-data\") pod \"ceilometer-0\" (UID: \"dc3b18ef-cfbc-4922-8591-72fd4283229a\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.575305 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc3b18ef-cfbc-4922-8591-72fd4283229a-scripts\") pod \"ceilometer-0\" (UID: \"dc3b18ef-cfbc-4922-8591-72fd4283229a\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.575340 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc3b18ef-cfbc-4922-8591-72fd4283229a-log-httpd\") pod \"ceilometer-0\" (UID: \"dc3b18ef-cfbc-4922-8591-72fd4283229a\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.575919 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc3b18ef-cfbc-4922-8591-72fd4283229a-run-httpd\") pod \"ceilometer-0\" (UID: \"dc3b18ef-cfbc-4922-8591-72fd4283229a\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.577383 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc3b18ef-cfbc-4922-8591-72fd4283229a-log-httpd\") pod \"ceilometer-0\" (UID: \"dc3b18ef-cfbc-4922-8591-72fd4283229a\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.581032 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc3b18ef-cfbc-4922-8591-72fd4283229a-scripts\") pod \"ceilometer-0\" (UID: \"dc3b18ef-cfbc-4922-8591-72fd4283229a\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.581728 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc3b18ef-cfbc-4922-8591-72fd4283229a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc3b18ef-cfbc-4922-8591-72fd4283229a\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.582030 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3b18ef-cfbc-4922-8591-72fd4283229a-config-data\") pod \"ceilometer-0\" (UID: \"dc3b18ef-cfbc-4922-8591-72fd4283229a\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.583304 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3b18ef-cfbc-4922-8591-72fd4283229a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc3b18ef-cfbc-4922-8591-72fd4283229a\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.594587 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfp79\" (UniqueName: \"kubernetes.io/projected/dc3b18ef-cfbc-4922-8591-72fd4283229a-kube-api-access-kfp79\") pod \"ceilometer-0\" (UID: \"dc3b18ef-cfbc-4922-8591-72fd4283229a\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:18 crc kubenswrapper[4821]: I0309 18:44:18.647390 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:19 crc kubenswrapper[4821]: I0309 18:44:19.111985 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 18:44:19 crc kubenswrapper[4821]: W0309 18:44:19.115879 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc3b18ef_cfbc_4922_8591_72fd4283229a.slice/crio-c368e8f717bc47c6b27b52a7d971ca88cb7338619635a0e231515bcce81f3638 WatchSource:0}: Error finding container c368e8f717bc47c6b27b52a7d971ca88cb7338619635a0e231515bcce81f3638: Status 404 returned error can't find the container with id c368e8f717bc47c6b27b52a7d971ca88cb7338619635a0e231515bcce81f3638
Mar 09 18:44:19 crc kubenswrapper[4821]: I0309 18:44:19.255424 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"dc3b18ef-cfbc-4922-8591-72fd4283229a","Type":"ContainerStarted","Data":"c368e8f717bc47c6b27b52a7d971ca88cb7338619635a0e231515bcce81f3638"}
Mar 09 18:44:19 crc kubenswrapper[4821]: I0309 18:44:19.568597 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e802c6eb-1a02-457f-9abf-daef4328992f" path="/var/lib/kubelet/pods/e802c6eb-1a02-457f-9abf-daef4328992f/volumes"
Mar 09 18:44:20 crc kubenswrapper[4821]: I0309 18:44:20.278340 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"dc3b18ef-cfbc-4922-8591-72fd4283229a","Type":"ContainerStarted","Data":"d5c16dc7aec19feba6db6ba4370978293e12ddf1fe84e7adc6e3d37f00f0c534"}
Mar 09 18:44:21 crc kubenswrapper[4821]: I0309 18:44:21.289025 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"dc3b18ef-cfbc-4922-8591-72fd4283229a","Type":"ContainerStarted","Data":"73c0290d9db6931376d8ca305ff85b319bbe234e26005daaf5ce38c24af60a6c"}
Mar 09 18:44:21 crc kubenswrapper[4821]: I0309
18:44:21.289371 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"dc3b18ef-cfbc-4922-8591-72fd4283229a","Type":"ContainerStarted","Data":"dadfd2468a023d6ac68bab70e0ce273688cbedec35795f61220263fcc9fd5383"} Mar 09 18:44:24 crc kubenswrapper[4821]: I0309 18:44:24.341072 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"dc3b18ef-cfbc-4922-8591-72fd4283229a","Type":"ContainerStarted","Data":"e2f1180ae24b38abcdcb8cd351e49e2a1c929f1e6eeceaa039fff360e1757aa5"} Mar 09 18:44:24 crc kubenswrapper[4821]: I0309 18:44:24.341844 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:44:24 crc kubenswrapper[4821]: I0309 18:44:24.390784 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.640601256 podStartE2EDuration="6.390759197s" podCreationTimestamp="2026-03-09 18:44:18 +0000 UTC" firstStartedPulling="2026-03-09 18:44:19.119407565 +0000 UTC m=+1196.280783451" lastFinishedPulling="2026-03-09 18:44:22.869565536 +0000 UTC m=+1200.030941392" observedRunningTime="2026-03-09 18:44:24.379298866 +0000 UTC m=+1201.540674762" watchObservedRunningTime="2026-03-09 18:44:24.390759197 +0000 UTC m=+1201.552135063" Mar 09 18:44:38 crc kubenswrapper[4821]: I0309 18:44:38.164650 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/keystone-7774c4794c-f24tn" Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.532759 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/openstackclient"] Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.534582 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/openstackclient" Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.538820 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openstack-config" Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.539201 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"openstack-config-secret" Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.540469 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"openstackclient-openstackclient-dockercfg-gvfxv" Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.541974 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstackclient"] Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.581279 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp4gf\" (UniqueName: \"kubernetes.io/projected/d9e8597d-8289-4a66-8099-57a9778dfb9a-kube-api-access-kp4gf\") pod \"openstackclient\" (UID: \"d9e8597d-8289-4a66-8099-57a9778dfb9a\") " pod="watcher-kuttl-default/openstackclient" Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.581387 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d9e8597d-8289-4a66-8099-57a9778dfb9a-openstack-config\") pod \"openstackclient\" (UID: \"d9e8597d-8289-4a66-8099-57a9778dfb9a\") " pod="watcher-kuttl-default/openstackclient" Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.581430 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e8597d-8289-4a66-8099-57a9778dfb9a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d9e8597d-8289-4a66-8099-57a9778dfb9a\") " pod="watcher-kuttl-default/openstackclient" 
Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.581585 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d9e8597d-8289-4a66-8099-57a9778dfb9a-openstack-config-secret\") pod \"openstackclient\" (UID: \"d9e8597d-8289-4a66-8099-57a9778dfb9a\") " pod="watcher-kuttl-default/openstackclient" Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.682921 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d9e8597d-8289-4a66-8099-57a9778dfb9a-openstack-config-secret\") pod \"openstackclient\" (UID: \"d9e8597d-8289-4a66-8099-57a9778dfb9a\") " pod="watcher-kuttl-default/openstackclient" Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.683035 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp4gf\" (UniqueName: \"kubernetes.io/projected/d9e8597d-8289-4a66-8099-57a9778dfb9a-kube-api-access-kp4gf\") pod \"openstackclient\" (UID: \"d9e8597d-8289-4a66-8099-57a9778dfb9a\") " pod="watcher-kuttl-default/openstackclient" Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.683094 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d9e8597d-8289-4a66-8099-57a9778dfb9a-openstack-config\") pod \"openstackclient\" (UID: \"d9e8597d-8289-4a66-8099-57a9778dfb9a\") " pod="watcher-kuttl-default/openstackclient" Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.683142 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e8597d-8289-4a66-8099-57a9778dfb9a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d9e8597d-8289-4a66-8099-57a9778dfb9a\") " pod="watcher-kuttl-default/openstackclient" Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 
18:44:42.684437 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d9e8597d-8289-4a66-8099-57a9778dfb9a-openstack-config\") pod \"openstackclient\" (UID: \"d9e8597d-8289-4a66-8099-57a9778dfb9a\") " pod="watcher-kuttl-default/openstackclient" Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.694541 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d9e8597d-8289-4a66-8099-57a9778dfb9a-openstack-config-secret\") pod \"openstackclient\" (UID: \"d9e8597d-8289-4a66-8099-57a9778dfb9a\") " pod="watcher-kuttl-default/openstackclient" Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.694563 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e8597d-8289-4a66-8099-57a9778dfb9a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d9e8597d-8289-4a66-8099-57a9778dfb9a\") " pod="watcher-kuttl-default/openstackclient" Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.701254 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/openstackclient"] Mar 09 18:44:42 crc kubenswrapper[4821]: E0309 18:44:42.701891 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-kp4gf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="watcher-kuttl-default/openstackclient" podUID="d9e8597d-8289-4a66-8099-57a9778dfb9a" Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.718807 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/openstackclient"] Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.744205 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp4gf\" (UniqueName: \"kubernetes.io/projected/d9e8597d-8289-4a66-8099-57a9778dfb9a-kube-api-access-kp4gf\") pod 
\"openstackclient\" (UID: \"d9e8597d-8289-4a66-8099-57a9778dfb9a\") " pod="watcher-kuttl-default/openstackclient" Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.753111 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/openstackclient"] Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.754760 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstackclient" Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.772577 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstackclient"] Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.787093 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a388f45b-e428-4530-b5cf-71879e545f6e-openstack-config\") pod \"openstackclient\" (UID: \"a388f45b-e428-4530-b5cf-71879e545f6e\") " pod="watcher-kuttl-default/openstackclient" Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.787173 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a388f45b-e428-4530-b5cf-71879e545f6e-openstack-config-secret\") pod \"openstackclient\" (UID: \"a388f45b-e428-4530-b5cf-71879e545f6e\") " pod="watcher-kuttl-default/openstackclient" Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.787233 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a388f45b-e428-4530-b5cf-71879e545f6e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a388f45b-e428-4530-b5cf-71879e545f6e\") " pod="watcher-kuttl-default/openstackclient" Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.787271 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5k87n\" (UniqueName: \"kubernetes.io/projected/a388f45b-e428-4530-b5cf-71879e545f6e-kube-api-access-5k87n\") pod \"openstackclient\" (UID: \"a388f45b-e428-4530-b5cf-71879e545f6e\") " pod="watcher-kuttl-default/openstackclient" Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.888108 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a388f45b-e428-4530-b5cf-71879e545f6e-openstack-config\") pod \"openstackclient\" (UID: \"a388f45b-e428-4530-b5cf-71879e545f6e\") " pod="watcher-kuttl-default/openstackclient" Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.889379 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a388f45b-e428-4530-b5cf-71879e545f6e-openstack-config\") pod \"openstackclient\" (UID: \"a388f45b-e428-4530-b5cf-71879e545f6e\") " pod="watcher-kuttl-default/openstackclient" Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.889463 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a388f45b-e428-4530-b5cf-71879e545f6e-openstack-config-secret\") pod \"openstackclient\" (UID: \"a388f45b-e428-4530-b5cf-71879e545f6e\") " pod="watcher-kuttl-default/openstackclient" Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.889557 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a388f45b-e428-4530-b5cf-71879e545f6e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a388f45b-e428-4530-b5cf-71879e545f6e\") " pod="watcher-kuttl-default/openstackclient" Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.890013 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k87n\" (UniqueName: 
\"kubernetes.io/projected/a388f45b-e428-4530-b5cf-71879e545f6e-kube-api-access-5k87n\") pod \"openstackclient\" (UID: \"a388f45b-e428-4530-b5cf-71879e545f6e\") " pod="watcher-kuttl-default/openstackclient" Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.892919 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a388f45b-e428-4530-b5cf-71879e545f6e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a388f45b-e428-4530-b5cf-71879e545f6e\") " pod="watcher-kuttl-default/openstackclient" Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.893355 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a388f45b-e428-4530-b5cf-71879e545f6e-openstack-config-secret\") pod \"openstackclient\" (UID: \"a388f45b-e428-4530-b5cf-71879e545f6e\") " pod="watcher-kuttl-default/openstackclient" Mar 09 18:44:42 crc kubenswrapper[4821]: I0309 18:44:42.908469 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k87n\" (UniqueName: \"kubernetes.io/projected/a388f45b-e428-4530-b5cf-71879e545f6e-kube-api-access-5k87n\") pod \"openstackclient\" (UID: \"a388f45b-e428-4530-b5cf-71879e545f6e\") " pod="watcher-kuttl-default/openstackclient" Mar 09 18:44:43 crc kubenswrapper[4821]: I0309 18:44:43.104912 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstackclient" Mar 09 18:44:43 crc kubenswrapper[4821]: I0309 18:44:43.526744 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/openstackclient" Mar 09 18:44:43 crc kubenswrapper[4821]: I0309 18:44:43.529945 4821 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="watcher-kuttl-default/openstackclient" oldPodUID="d9e8597d-8289-4a66-8099-57a9778dfb9a" podUID="a388f45b-e428-4530-b5cf-71879e545f6e" Mar 09 18:44:43 crc kubenswrapper[4821]: I0309 18:44:43.574482 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstackclient" Mar 09 18:44:43 crc kubenswrapper[4821]: I0309 18:44:43.578105 4821 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="watcher-kuttl-default/openstackclient" oldPodUID="d9e8597d-8289-4a66-8099-57a9778dfb9a" podUID="a388f45b-e428-4530-b5cf-71879e545f6e" Mar 09 18:44:43 crc kubenswrapper[4821]: I0309 18:44:43.606546 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e8597d-8289-4a66-8099-57a9778dfb9a-combined-ca-bundle\") pod \"d9e8597d-8289-4a66-8099-57a9778dfb9a\" (UID: \"d9e8597d-8289-4a66-8099-57a9778dfb9a\") " Mar 09 18:44:43 crc kubenswrapper[4821]: I0309 18:44:43.606743 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kp4gf\" (UniqueName: \"kubernetes.io/projected/d9e8597d-8289-4a66-8099-57a9778dfb9a-kube-api-access-kp4gf\") pod \"d9e8597d-8289-4a66-8099-57a9778dfb9a\" (UID: \"d9e8597d-8289-4a66-8099-57a9778dfb9a\") " Mar 09 18:44:43 crc kubenswrapper[4821]: I0309 18:44:43.606911 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d9e8597d-8289-4a66-8099-57a9778dfb9a-openstack-config\") pod \"d9e8597d-8289-4a66-8099-57a9778dfb9a\" (UID: \"d9e8597d-8289-4a66-8099-57a9778dfb9a\") " Mar 09 18:44:43 crc kubenswrapper[4821]: I0309 18:44:43.607081 4821 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d9e8597d-8289-4a66-8099-57a9778dfb9a-openstack-config-secret\") pod \"d9e8597d-8289-4a66-8099-57a9778dfb9a\" (UID: \"d9e8597d-8289-4a66-8099-57a9778dfb9a\") " Mar 09 18:44:43 crc kubenswrapper[4821]: I0309 18:44:43.609986 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9e8597d-8289-4a66-8099-57a9778dfb9a-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "d9e8597d-8289-4a66-8099-57a9778dfb9a" (UID: "d9e8597d-8289-4a66-8099-57a9778dfb9a"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:44:43 crc kubenswrapper[4821]: I0309 18:44:43.613168 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9e8597d-8289-4a66-8099-57a9778dfb9a-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "d9e8597d-8289-4a66-8099-57a9778dfb9a" (UID: "d9e8597d-8289-4a66-8099-57a9778dfb9a"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:44:43 crc kubenswrapper[4821]: I0309 18:44:43.613206 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9e8597d-8289-4a66-8099-57a9778dfb9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9e8597d-8289-4a66-8099-57a9778dfb9a" (UID: "d9e8597d-8289-4a66-8099-57a9778dfb9a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:44:43 crc kubenswrapper[4821]: I0309 18:44:43.613570 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstackclient"] Mar 09 18:44:43 crc kubenswrapper[4821]: I0309 18:44:43.614369 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9e8597d-8289-4a66-8099-57a9778dfb9a-kube-api-access-kp4gf" (OuterVolumeSpecName: "kube-api-access-kp4gf") pod "d9e8597d-8289-4a66-8099-57a9778dfb9a" (UID: "d9e8597d-8289-4a66-8099-57a9778dfb9a"). InnerVolumeSpecName "kube-api-access-kp4gf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:44:43 crc kubenswrapper[4821]: I0309 18:44:43.719729 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kp4gf\" (UniqueName: \"kubernetes.io/projected/d9e8597d-8289-4a66-8099-57a9778dfb9a-kube-api-access-kp4gf\") on node \"crc\" DevicePath \"\"" Mar 09 18:44:43 crc kubenswrapper[4821]: I0309 18:44:43.719777 4821 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d9e8597d-8289-4a66-8099-57a9778dfb9a-openstack-config\") on node \"crc\" DevicePath \"\"" Mar 09 18:44:43 crc kubenswrapper[4821]: I0309 18:44:43.719788 4821 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d9e8597d-8289-4a66-8099-57a9778dfb9a-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 09 18:44:43 crc kubenswrapper[4821]: I0309 18:44:43.719797 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e8597d-8289-4a66-8099-57a9778dfb9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 18:44:44 crc kubenswrapper[4821]: I0309 18:44:44.535932 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstackclient" 
event={"ID":"a388f45b-e428-4530-b5cf-71879e545f6e","Type":"ContainerStarted","Data":"6713acb6f116dcc90897f8c878060931598363e0e3b4b01827cf0932da370ae4"} Mar 09 18:44:44 crc kubenswrapper[4821]: I0309 18:44:44.535960 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstackclient" Mar 09 18:44:44 crc kubenswrapper[4821]: I0309 18:44:44.538940 4821 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="watcher-kuttl-default/openstackclient" oldPodUID="d9e8597d-8289-4a66-8099-57a9778dfb9a" podUID="a388f45b-e428-4530-b5cf-71879e545f6e" Mar 09 18:44:44 crc kubenswrapper[4821]: I0309 18:44:44.548239 4821 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="watcher-kuttl-default/openstackclient" oldPodUID="d9e8597d-8289-4a66-8099-57a9778dfb9a" podUID="a388f45b-e428-4530-b5cf-71879e545f6e" Mar 09 18:44:45 crc kubenswrapper[4821]: I0309 18:44:45.562080 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9e8597d-8289-4a66-8099-57a9778dfb9a" path="/var/lib/kubelet/pods/d9e8597d-8289-4a66-8099-57a9778dfb9a/volumes" Mar 09 18:44:46 crc kubenswrapper[4821]: I0309 18:44:46.122824 4821 scope.go:117] "RemoveContainer" containerID="8e7abab222a1b625a74468adff80808e049c74a74fef05c43e8ce2b31ada94d1" Mar 09 18:44:48 crc kubenswrapper[4821]: I0309 18:44:48.653187 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:44:51 crc kubenswrapper[4821]: I0309 18:44:51.444606 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Mar 09 18:44:51 crc kubenswrapper[4821]: I0309 18:44:51.445288 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/kube-state-metrics-0" podUID="0efefaf4-58a1-488a-a9ec-703c46ce0c00" containerName="kube-state-metrics" 
containerID="cri-o://89803e16529ec071895930a757ab9f0a3895a84f30855c2d4a062a921f76a4c4" gracePeriod=30 Mar 09 18:44:51 crc kubenswrapper[4821]: I0309 18:44:51.634225 4821 generic.go:334] "Generic (PLEG): container finished" podID="0efefaf4-58a1-488a-a9ec-703c46ce0c00" containerID="89803e16529ec071895930a757ab9f0a3895a84f30855c2d4a062a921f76a4c4" exitCode=2 Mar 09 18:44:51 crc kubenswrapper[4821]: I0309 18:44:51.634558 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"0efefaf4-58a1-488a-a9ec-703c46ce0c00","Type":"ContainerDied","Data":"89803e16529ec071895930a757ab9f0a3895a84f30855c2d4a062a921f76a4c4"} Mar 09 18:44:52 crc kubenswrapper[4821]: I0309 18:44:52.628703 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 18:44:52 crc kubenswrapper[4821]: I0309 18:44:52.628945 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="dc3b18ef-cfbc-4922-8591-72fd4283229a" containerName="ceilometer-central-agent" containerID="cri-o://d5c16dc7aec19feba6db6ba4370978293e12ddf1fe84e7adc6e3d37f00f0c534" gracePeriod=30 Mar 09 18:44:52 crc kubenswrapper[4821]: I0309 18:44:52.629292 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="dc3b18ef-cfbc-4922-8591-72fd4283229a" containerName="proxy-httpd" containerID="cri-o://e2f1180ae24b38abcdcb8cd351e49e2a1c929f1e6eeceaa039fff360e1757aa5" gracePeriod=30 Mar 09 18:44:52 crc kubenswrapper[4821]: I0309 18:44:52.629349 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="dc3b18ef-cfbc-4922-8591-72fd4283229a" containerName="sg-core" containerID="cri-o://73c0290d9db6931376d8ca305ff85b319bbe234e26005daaf5ce38c24af60a6c" gracePeriod=30 Mar 09 18:44:52 crc kubenswrapper[4821]: I0309 18:44:52.629382 4821 
kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="dc3b18ef-cfbc-4922-8591-72fd4283229a" containerName="ceilometer-notification-agent" containerID="cri-o://dadfd2468a023d6ac68bab70e0ce273688cbedec35795f61220263fcc9fd5383" gracePeriod=30 Mar 09 18:44:53 crc kubenswrapper[4821]: I0309 18:44:53.652676 4821 generic.go:334] "Generic (PLEG): container finished" podID="dc3b18ef-cfbc-4922-8591-72fd4283229a" containerID="e2f1180ae24b38abcdcb8cd351e49e2a1c929f1e6eeceaa039fff360e1757aa5" exitCode=0 Mar 09 18:44:53 crc kubenswrapper[4821]: I0309 18:44:53.652995 4821 generic.go:334] "Generic (PLEG): container finished" podID="dc3b18ef-cfbc-4922-8591-72fd4283229a" containerID="73c0290d9db6931376d8ca305ff85b319bbe234e26005daaf5ce38c24af60a6c" exitCode=2 Mar 09 18:44:53 crc kubenswrapper[4821]: I0309 18:44:53.652736 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"dc3b18ef-cfbc-4922-8591-72fd4283229a","Type":"ContainerDied","Data":"e2f1180ae24b38abcdcb8cd351e49e2a1c929f1e6eeceaa039fff360e1757aa5"} Mar 09 18:44:53 crc kubenswrapper[4821]: I0309 18:44:53.653042 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"dc3b18ef-cfbc-4922-8591-72fd4283229a","Type":"ContainerDied","Data":"73c0290d9db6931376d8ca305ff85b319bbe234e26005daaf5ce38c24af60a6c"} Mar 09 18:44:53 crc kubenswrapper[4821]: I0309 18:44:53.653059 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"dc3b18ef-cfbc-4922-8591-72fd4283229a","Type":"ContainerDied","Data":"d5c16dc7aec19feba6db6ba4370978293e12ddf1fe84e7adc6e3d37f00f0c534"} Mar 09 18:44:53 crc kubenswrapper[4821]: I0309 18:44:53.653010 4821 generic.go:334] "Generic (PLEG): container finished" podID="dc3b18ef-cfbc-4922-8591-72fd4283229a" containerID="d5c16dc7aec19feba6db6ba4370978293e12ddf1fe84e7adc6e3d37f00f0c534" exitCode=0 
Mar 09 18:44:54 crc kubenswrapper[4821]: I0309 18:44:54.367754 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0"
Mar 09 18:44:54 crc kubenswrapper[4821]: I0309 18:44:54.506426 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz5dc\" (UniqueName: \"kubernetes.io/projected/0efefaf4-58a1-488a-a9ec-703c46ce0c00-kube-api-access-pz5dc\") pod \"0efefaf4-58a1-488a-a9ec-703c46ce0c00\" (UID: \"0efefaf4-58a1-488a-a9ec-703c46ce0c00\") "
Mar 09 18:44:54 crc kubenswrapper[4821]: I0309 18:44:54.510837 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0efefaf4-58a1-488a-a9ec-703c46ce0c00-kube-api-access-pz5dc" (OuterVolumeSpecName: "kube-api-access-pz5dc") pod "0efefaf4-58a1-488a-a9ec-703c46ce0c00" (UID: "0efefaf4-58a1-488a-a9ec-703c46ce0c00"). InnerVolumeSpecName "kube-api-access-pz5dc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:44:54 crc kubenswrapper[4821]: I0309 18:44:54.608108 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz5dc\" (UniqueName: \"kubernetes.io/projected/0efefaf4-58a1-488a-a9ec-703c46ce0c00-kube-api-access-pz5dc\") on node \"crc\" DevicePath \"\""
Mar 09 18:44:54 crc kubenswrapper[4821]: I0309 18:44:54.661256 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"0efefaf4-58a1-488a-a9ec-703c46ce0c00","Type":"ContainerDied","Data":"4f28d7080d3da91a9034b718992c7c2c9b70ab8bbfb155c9881ca054383d293c"}
Mar 09 18:44:54 crc kubenswrapper[4821]: I0309 18:44:54.661311 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0"
Mar 09 18:44:54 crc kubenswrapper[4821]: I0309 18:44:54.661347 4821 scope.go:117] "RemoveContainer" containerID="89803e16529ec071895930a757ab9f0a3895a84f30855c2d4a062a921f76a4c4"
Mar 09 18:44:54 crc kubenswrapper[4821]: I0309 18:44:54.663384 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstackclient" event={"ID":"a388f45b-e428-4530-b5cf-71879e545f6e","Type":"ContainerStarted","Data":"7604e52796c7b30c0b66569852822cc691f10784a1943d39393561780f39d990"}
Mar 09 18:44:54 crc kubenswrapper[4821]: I0309 18:44:54.692387 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/openstackclient" podStartSLOduration=2.091678147 podStartE2EDuration="12.692364436s" podCreationTimestamp="2026-03-09 18:44:42 +0000 UTC" firstStartedPulling="2026-03-09 18:44:43.622444277 +0000 UTC m=+1220.783820133" lastFinishedPulling="2026-03-09 18:44:54.223130566 +0000 UTC m=+1231.384506422" observedRunningTime="2026-03-09 18:44:54.679974999 +0000 UTC m=+1231.841350865" watchObservedRunningTime="2026-03-09 18:44:54.692364436 +0000 UTC m=+1231.853740292"
Mar 09 18:44:54 crc kubenswrapper[4821]: I0309 18:44:54.709832 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"]
Mar 09 18:44:54 crc kubenswrapper[4821]: I0309 18:44:54.727458 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"]
Mar 09 18:44:54 crc kubenswrapper[4821]: I0309 18:44:54.736927 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"]
Mar 09 18:44:54 crc kubenswrapper[4821]: E0309 18:44:54.737340 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0efefaf4-58a1-488a-a9ec-703c46ce0c00" containerName="kube-state-metrics"
Mar 09 18:44:54 crc kubenswrapper[4821]: I0309 18:44:54.737359 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="0efefaf4-58a1-488a-a9ec-703c46ce0c00" containerName="kube-state-metrics"
Mar 09 18:44:54 crc kubenswrapper[4821]: I0309 18:44:54.737525 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="0efefaf4-58a1-488a-a9ec-703c46ce0c00" containerName="kube-state-metrics"
Mar 09 18:44:54 crc kubenswrapper[4821]: I0309 18:44:54.738095 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0"
Mar 09 18:44:54 crc kubenswrapper[4821]: I0309 18:44:54.745843 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"kube-state-metrics-tls-config"
Mar 09 18:44:54 crc kubenswrapper[4821]: I0309 18:44:54.745905 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-kube-state-metrics-svc"
Mar 09 18:44:54 crc kubenswrapper[4821]: I0309 18:44:54.746023 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"]
Mar 09 18:44:54 crc kubenswrapper[4821]: I0309 18:44:54.911899 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c66df9ab-03fb-42fa-b3ef-9f3064523682-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c66df9ab-03fb-42fa-b3ef-9f3064523682\") " pod="watcher-kuttl-default/kube-state-metrics-0"
Mar 09 18:44:54 crc kubenswrapper[4821]: I0309 18:44:54.911957 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c66df9ab-03fb-42fa-b3ef-9f3064523682-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c66df9ab-03fb-42fa-b3ef-9f3064523682\") " pod="watcher-kuttl-default/kube-state-metrics-0"
Mar 09 18:44:54 crc kubenswrapper[4821]: I0309 18:44:54.912017 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx4ml\" (UniqueName: \"kubernetes.io/projected/c66df9ab-03fb-42fa-b3ef-9f3064523682-kube-api-access-rx4ml\") pod \"kube-state-metrics-0\" (UID: \"c66df9ab-03fb-42fa-b3ef-9f3064523682\") " pod="watcher-kuttl-default/kube-state-metrics-0"
Mar 09 18:44:54 crc kubenswrapper[4821]: I0309 18:44:54.912047 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c66df9ab-03fb-42fa-b3ef-9f3064523682-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c66df9ab-03fb-42fa-b3ef-9f3064523682\") " pod="watcher-kuttl-default/kube-state-metrics-0"
Mar 09 18:44:55 crc kubenswrapper[4821]: I0309 18:44:55.013708 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c66df9ab-03fb-42fa-b3ef-9f3064523682-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c66df9ab-03fb-42fa-b3ef-9f3064523682\") " pod="watcher-kuttl-default/kube-state-metrics-0"
Mar 09 18:44:55 crc kubenswrapper[4821]: I0309 18:44:55.014410 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c66df9ab-03fb-42fa-b3ef-9f3064523682-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c66df9ab-03fb-42fa-b3ef-9f3064523682\") " pod="watcher-kuttl-default/kube-state-metrics-0"
Mar 09 18:44:55 crc kubenswrapper[4821]: I0309 18:44:55.014531 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx4ml\" (UniqueName: \"kubernetes.io/projected/c66df9ab-03fb-42fa-b3ef-9f3064523682-kube-api-access-rx4ml\") pod \"kube-state-metrics-0\" (UID: \"c66df9ab-03fb-42fa-b3ef-9f3064523682\") " pod="watcher-kuttl-default/kube-state-metrics-0"
Mar 09 18:44:55 crc kubenswrapper[4821]: I0309 18:44:55.014563 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c66df9ab-03fb-42fa-b3ef-9f3064523682-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c66df9ab-03fb-42fa-b3ef-9f3064523682\") " pod="watcher-kuttl-default/kube-state-metrics-0"
Mar 09 18:44:55 crc kubenswrapper[4821]: I0309 18:44:55.018525 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c66df9ab-03fb-42fa-b3ef-9f3064523682-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c66df9ab-03fb-42fa-b3ef-9f3064523682\") " pod="watcher-kuttl-default/kube-state-metrics-0"
Mar 09 18:44:55 crc kubenswrapper[4821]: I0309 18:44:55.018668 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c66df9ab-03fb-42fa-b3ef-9f3064523682-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c66df9ab-03fb-42fa-b3ef-9f3064523682\") " pod="watcher-kuttl-default/kube-state-metrics-0"
Mar 09 18:44:55 crc kubenswrapper[4821]: I0309 18:44:55.024263 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c66df9ab-03fb-42fa-b3ef-9f3064523682-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c66df9ab-03fb-42fa-b3ef-9f3064523682\") " pod="watcher-kuttl-default/kube-state-metrics-0"
Mar 09 18:44:55 crc kubenswrapper[4821]: I0309 18:44:55.032664 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx4ml\" (UniqueName: \"kubernetes.io/projected/c66df9ab-03fb-42fa-b3ef-9f3064523682-kube-api-access-rx4ml\") pod \"kube-state-metrics-0\" (UID: \"c66df9ab-03fb-42fa-b3ef-9f3064523682\") " pod="watcher-kuttl-default/kube-state-metrics-0"
Mar 09 18:44:55 crc kubenswrapper[4821]: I0309 18:44:55.061540 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0"
Mar 09 18:44:55 crc kubenswrapper[4821]: W0309 18:44:55.550927 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc66df9ab_03fb_42fa_b3ef_9f3064523682.slice/crio-e8794f15af5210bcbb7985bc51655b486414ad6082b0949b3342e47624597e4f WatchSource:0}: Error finding container e8794f15af5210bcbb7985bc51655b486414ad6082b0949b3342e47624597e4f: Status 404 returned error can't find the container with id e8794f15af5210bcbb7985bc51655b486414ad6082b0949b3342e47624597e4f
Mar 09 18:44:55 crc kubenswrapper[4821]: I0309 18:44:55.564504 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0efefaf4-58a1-488a-a9ec-703c46ce0c00" path="/var/lib/kubelet/pods/0efefaf4-58a1-488a-a9ec-703c46ce0c00/volumes"
Mar 09 18:44:55 crc kubenswrapper[4821]: I0309 18:44:55.565361 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"]
Mar 09 18:44:55 crc kubenswrapper[4821]: I0309 18:44:55.672975 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"c66df9ab-03fb-42fa-b3ef-9f3064523682","Type":"ContainerStarted","Data":"e8794f15af5210bcbb7985bc51655b486414ad6082b0949b3342e47624597e4f"}
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.098019 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.243227 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfp79\" (UniqueName: \"kubernetes.io/projected/dc3b18ef-cfbc-4922-8591-72fd4283229a-kube-api-access-kfp79\") pod \"dc3b18ef-cfbc-4922-8591-72fd4283229a\" (UID: \"dc3b18ef-cfbc-4922-8591-72fd4283229a\") "
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.243334 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc3b18ef-cfbc-4922-8591-72fd4283229a-scripts\") pod \"dc3b18ef-cfbc-4922-8591-72fd4283229a\" (UID: \"dc3b18ef-cfbc-4922-8591-72fd4283229a\") "
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.243373 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3b18ef-cfbc-4922-8591-72fd4283229a-combined-ca-bundle\") pod \"dc3b18ef-cfbc-4922-8591-72fd4283229a\" (UID: \"dc3b18ef-cfbc-4922-8591-72fd4283229a\") "
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.243393 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3b18ef-cfbc-4922-8591-72fd4283229a-config-data\") pod \"dc3b18ef-cfbc-4922-8591-72fd4283229a\" (UID: \"dc3b18ef-cfbc-4922-8591-72fd4283229a\") "
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.243419 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc3b18ef-cfbc-4922-8591-72fd4283229a-log-httpd\") pod \"dc3b18ef-cfbc-4922-8591-72fd4283229a\" (UID: \"dc3b18ef-cfbc-4922-8591-72fd4283229a\") "
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.243452 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc3b18ef-cfbc-4922-8591-72fd4283229a-run-httpd\") pod \"dc3b18ef-cfbc-4922-8591-72fd4283229a\" (UID: \"dc3b18ef-cfbc-4922-8591-72fd4283229a\") "
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.243478 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc3b18ef-cfbc-4922-8591-72fd4283229a-sg-core-conf-yaml\") pod \"dc3b18ef-cfbc-4922-8591-72fd4283229a\" (UID: \"dc3b18ef-cfbc-4922-8591-72fd4283229a\") "
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.244293 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc3b18ef-cfbc-4922-8591-72fd4283229a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dc3b18ef-cfbc-4922-8591-72fd4283229a" (UID: "dc3b18ef-cfbc-4922-8591-72fd4283229a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.244444 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc3b18ef-cfbc-4922-8591-72fd4283229a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dc3b18ef-cfbc-4922-8591-72fd4283229a" (UID: "dc3b18ef-cfbc-4922-8591-72fd4283229a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.246751 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc3b18ef-cfbc-4922-8591-72fd4283229a-kube-api-access-kfp79" (OuterVolumeSpecName: "kube-api-access-kfp79") pod "dc3b18ef-cfbc-4922-8591-72fd4283229a" (UID: "dc3b18ef-cfbc-4922-8591-72fd4283229a"). InnerVolumeSpecName "kube-api-access-kfp79". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.247155 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc3b18ef-cfbc-4922-8591-72fd4283229a-scripts" (OuterVolumeSpecName: "scripts") pod "dc3b18ef-cfbc-4922-8591-72fd4283229a" (UID: "dc3b18ef-cfbc-4922-8591-72fd4283229a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.264483 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc3b18ef-cfbc-4922-8591-72fd4283229a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dc3b18ef-cfbc-4922-8591-72fd4283229a" (UID: "dc3b18ef-cfbc-4922-8591-72fd4283229a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.306836 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc3b18ef-cfbc-4922-8591-72fd4283229a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc3b18ef-cfbc-4922-8591-72fd4283229a" (UID: "dc3b18ef-cfbc-4922-8591-72fd4283229a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.323505 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc3b18ef-cfbc-4922-8591-72fd4283229a-config-data" (OuterVolumeSpecName: "config-data") pod "dc3b18ef-cfbc-4922-8591-72fd4283229a" (UID: "dc3b18ef-cfbc-4922-8591-72fd4283229a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.345831 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc3b18ef-cfbc-4922-8591-72fd4283229a-scripts\") on node \"crc\" DevicePath \"\""
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.345868 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3b18ef-cfbc-4922-8591-72fd4283229a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.345882 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3b18ef-cfbc-4922-8591-72fd4283229a-config-data\") on node \"crc\" DevicePath \"\""
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.345893 4821 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc3b18ef-cfbc-4922-8591-72fd4283229a-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.345905 4821 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc3b18ef-cfbc-4922-8591-72fd4283229a-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.345916 4821 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc3b18ef-cfbc-4922-8591-72fd4283229a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.345928 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfp79\" (UniqueName: \"kubernetes.io/projected/dc3b18ef-cfbc-4922-8591-72fd4283229a-kube-api-access-kfp79\") on node \"crc\" DevicePath \"\""
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.683773 4821 generic.go:334] "Generic (PLEG): container finished" podID="dc3b18ef-cfbc-4922-8591-72fd4283229a" containerID="dadfd2468a023d6ac68bab70e0ce273688cbedec35795f61220263fcc9fd5383" exitCode=0
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.683811 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.683831 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"dc3b18ef-cfbc-4922-8591-72fd4283229a","Type":"ContainerDied","Data":"dadfd2468a023d6ac68bab70e0ce273688cbedec35795f61220263fcc9fd5383"}
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.684286 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"dc3b18ef-cfbc-4922-8591-72fd4283229a","Type":"ContainerDied","Data":"c368e8f717bc47c6b27b52a7d971ca88cb7338619635a0e231515bcce81f3638"}
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.684338 4821 scope.go:117] "RemoveContainer" containerID="e2f1180ae24b38abcdcb8cd351e49e2a1c929f1e6eeceaa039fff360e1757aa5"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.685751 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"c66df9ab-03fb-42fa-b3ef-9f3064523682","Type":"ContainerStarted","Data":"9b2d861d029d3fa07908282f59569cd3c8a686ddb0454cde78e3af164f50a2b1"}
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.685905 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/kube-state-metrics-0"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.703873 4821 scope.go:117] "RemoveContainer" containerID="73c0290d9db6931376d8ca305ff85b319bbe234e26005daaf5ce38c24af60a6c"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.706024 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/kube-state-metrics-0" podStartSLOduration=2.293254384 podStartE2EDuration="2.706003498s" podCreationTimestamp="2026-03-09 18:44:54 +0000 UTC" firstStartedPulling="2026-03-09 18:44:55.55307614 +0000 UTC m=+1232.714451996" lastFinishedPulling="2026-03-09 18:44:55.965825254 +0000 UTC m=+1233.127201110" observedRunningTime="2026-03-09 18:44:56.704402594 +0000 UTC m=+1233.865778470" watchObservedRunningTime="2026-03-09 18:44:56.706003498 +0000 UTC m=+1233.867379354"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.729394 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.732867 4821 scope.go:117] "RemoveContainer" containerID="dadfd2468a023d6ac68bab70e0ce273688cbedec35795f61220263fcc9fd5383"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.745456 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.761381 4821 scope.go:117] "RemoveContainer" containerID="d5c16dc7aec19feba6db6ba4370978293e12ddf1fe84e7adc6e3d37f00f0c534"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.775258 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 18:44:56 crc kubenswrapper[4821]: E0309 18:44:56.775674 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc3b18ef-cfbc-4922-8591-72fd4283229a" containerName="sg-core"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.775692 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc3b18ef-cfbc-4922-8591-72fd4283229a" containerName="sg-core"
Mar 09 18:44:56 crc kubenswrapper[4821]: E0309 18:44:56.775727 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc3b18ef-cfbc-4922-8591-72fd4283229a" containerName="ceilometer-central-agent"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.775735 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc3b18ef-cfbc-4922-8591-72fd4283229a" containerName="ceilometer-central-agent"
Mar 09 18:44:56 crc kubenswrapper[4821]: E0309 18:44:56.775746 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc3b18ef-cfbc-4922-8591-72fd4283229a" containerName="proxy-httpd"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.775752 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc3b18ef-cfbc-4922-8591-72fd4283229a" containerName="proxy-httpd"
Mar 09 18:44:56 crc kubenswrapper[4821]: E0309 18:44:56.775763 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc3b18ef-cfbc-4922-8591-72fd4283229a" containerName="ceilometer-notification-agent"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.775769 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc3b18ef-cfbc-4922-8591-72fd4283229a" containerName="ceilometer-notification-agent"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.775976 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc3b18ef-cfbc-4922-8591-72fd4283229a" containerName="ceilometer-central-agent"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.775994 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc3b18ef-cfbc-4922-8591-72fd4283229a" containerName="ceilometer-notification-agent"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.776012 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc3b18ef-cfbc-4922-8591-72fd4283229a" containerName="proxy-httpd"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.776041 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc3b18ef-cfbc-4922-8591-72fd4283229a" containerName="sg-core"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.778075 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.780619 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.780987 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.781140 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.781433 4821 scope.go:117] "RemoveContainer" containerID="e2f1180ae24b38abcdcb8cd351e49e2a1c929f1e6eeceaa039fff360e1757aa5"
Mar 09 18:44:56 crc kubenswrapper[4821]: E0309 18:44:56.781843 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2f1180ae24b38abcdcb8cd351e49e2a1c929f1e6eeceaa039fff360e1757aa5\": container with ID starting with e2f1180ae24b38abcdcb8cd351e49e2a1c929f1e6eeceaa039fff360e1757aa5 not found: ID does not exist" containerID="e2f1180ae24b38abcdcb8cd351e49e2a1c929f1e6eeceaa039fff360e1757aa5"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.781894 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2f1180ae24b38abcdcb8cd351e49e2a1c929f1e6eeceaa039fff360e1757aa5"} err="failed to get container status \"e2f1180ae24b38abcdcb8cd351e49e2a1c929f1e6eeceaa039fff360e1757aa5\": rpc error: code = NotFound desc = could not find container \"e2f1180ae24b38abcdcb8cd351e49e2a1c929f1e6eeceaa039fff360e1757aa5\": container with ID starting with e2f1180ae24b38abcdcb8cd351e49e2a1c929f1e6eeceaa039fff360e1757aa5 not found: ID does not exist"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.781923 4821 scope.go:117] "RemoveContainer" containerID="73c0290d9db6931376d8ca305ff85b319bbe234e26005daaf5ce38c24af60a6c"
Mar 09 18:44:56 crc kubenswrapper[4821]: E0309 18:44:56.782173 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73c0290d9db6931376d8ca305ff85b319bbe234e26005daaf5ce38c24af60a6c\": container with ID starting with 73c0290d9db6931376d8ca305ff85b319bbe234e26005daaf5ce38c24af60a6c not found: ID does not exist" containerID="73c0290d9db6931376d8ca305ff85b319bbe234e26005daaf5ce38c24af60a6c"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.782202 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73c0290d9db6931376d8ca305ff85b319bbe234e26005daaf5ce38c24af60a6c"} err="failed to get container status \"73c0290d9db6931376d8ca305ff85b319bbe234e26005daaf5ce38c24af60a6c\": rpc error: code = NotFound desc = could not find container \"73c0290d9db6931376d8ca305ff85b319bbe234e26005daaf5ce38c24af60a6c\": container with ID starting with 73c0290d9db6931376d8ca305ff85b319bbe234e26005daaf5ce38c24af60a6c not found: ID does not exist"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.782221 4821 scope.go:117] "RemoveContainer" containerID="dadfd2468a023d6ac68bab70e0ce273688cbedec35795f61220263fcc9fd5383"
Mar 09 18:44:56 crc kubenswrapper[4821]: E0309 18:44:56.782498 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dadfd2468a023d6ac68bab70e0ce273688cbedec35795f61220263fcc9fd5383\": container with ID starting with dadfd2468a023d6ac68bab70e0ce273688cbedec35795f61220263fcc9fd5383 not found: ID does not exist" containerID="dadfd2468a023d6ac68bab70e0ce273688cbedec35795f61220263fcc9fd5383"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.782615 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dadfd2468a023d6ac68bab70e0ce273688cbedec35795f61220263fcc9fd5383"} err="failed to get container status \"dadfd2468a023d6ac68bab70e0ce273688cbedec35795f61220263fcc9fd5383\": rpc error: code = NotFound desc = could not find container \"dadfd2468a023d6ac68bab70e0ce273688cbedec35795f61220263fcc9fd5383\": container with ID starting with dadfd2468a023d6ac68bab70e0ce273688cbedec35795f61220263fcc9fd5383 not found: ID does not exist"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.782714 4821 scope.go:117] "RemoveContainer" containerID="d5c16dc7aec19feba6db6ba4370978293e12ddf1fe84e7adc6e3d37f00f0c534"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.782648 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 18:44:56 crc kubenswrapper[4821]: E0309 18:44:56.783102 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5c16dc7aec19feba6db6ba4370978293e12ddf1fe84e7adc6e3d37f00f0c534\": container with ID starting with d5c16dc7aec19feba6db6ba4370978293e12ddf1fe84e7adc6e3d37f00f0c534 not found: ID does not exist" containerID="d5c16dc7aec19feba6db6ba4370978293e12ddf1fe84e7adc6e3d37f00f0c534"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.783316 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5c16dc7aec19feba6db6ba4370978293e12ddf1fe84e7adc6e3d37f00f0c534"} err="failed to get container status \"d5c16dc7aec19feba6db6ba4370978293e12ddf1fe84e7adc6e3d37f00f0c534\": rpc error: code = NotFound desc = could not find container \"d5c16dc7aec19feba6db6ba4370978293e12ddf1fe84e7adc6e3d37f00f0c534\": container with ID starting with d5c16dc7aec19feba6db6ba4370978293e12ddf1fe84e7adc6e3d37f00f0c534 not found: ID does not exist"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.856594 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-log-httpd\") pod \"ceilometer-0\" (UID: \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.856820 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.856894 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-run-httpd\") pod \"ceilometer-0\" (UID: \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.857014 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms68n\" (UniqueName: \"kubernetes.io/projected/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-kube-api-access-ms68n\") pod \"ceilometer-0\" (UID: \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.857168 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-scripts\") pod \"ceilometer-0\" (UID: \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.857254 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-config-data\") pod \"ceilometer-0\" (UID: \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.857344 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.857424 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.957999 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.958052 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-run-httpd\") pod \"ceilometer-0\" (UID: \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.958074 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms68n\" (UniqueName: \"kubernetes.io/projected/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-kube-api-access-ms68n\") pod \"ceilometer-0\" (UID: \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.958110 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-scripts\") pod \"ceilometer-0\" (UID: \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.958135 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-config-data\") pod \"ceilometer-0\" (UID: \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.958156 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.958177 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.958228 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-log-httpd\") pod \"ceilometer-0\" (UID: \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.959117 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-run-httpd\") pod \"ceilometer-0\" (UID: \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.959298 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-log-httpd\") pod \"ceilometer-0\" (UID: \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.962774 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-scripts\") pod \"ceilometer-0\" (UID: \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.962940 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.963088 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-config-data\") pod \"ceilometer-0\" (UID: \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.963291 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.963549 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:56 crc kubenswrapper[4821]: I0309 18:44:56.979651 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms68n\" (UniqueName: \"kubernetes.io/projected/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-kube-api-access-ms68n\") pod \"ceilometer-0\" (UID: \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:44:57 crc kubenswrapper[4821]: I0309 18:44:57.097680 4821 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:44:57 crc kubenswrapper[4821]: I0309 18:44:57.563478 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc3b18ef-cfbc-4922-8591-72fd4283229a" path="/var/lib/kubelet/pods/dc3b18ef-cfbc-4922-8591-72fd4283229a/volumes" Mar 09 18:44:57 crc kubenswrapper[4821]: W0309 18:44:57.596275 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0e2f834_85c6_4c7f_bbfd_e9da005d7bd8.slice/crio-8f653681d44e253d3b7ffbd6d345677601a96ed95280ef2888acb397610f1612 WatchSource:0}: Error finding container 8f653681d44e253d3b7ffbd6d345677601a96ed95280ef2888acb397610f1612: Status 404 returned error can't find the container with id 8f653681d44e253d3b7ffbd6d345677601a96ed95280ef2888acb397610f1612 Mar 09 18:44:57 crc kubenswrapper[4821]: I0309 18:44:57.597630 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 18:44:57 crc kubenswrapper[4821]: I0309 18:44:57.716576 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8","Type":"ContainerStarted","Data":"8f653681d44e253d3b7ffbd6d345677601a96ed95280ef2888acb397610f1612"} Mar 09 18:44:58 crc kubenswrapper[4821]: I0309 18:44:58.723989 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8","Type":"ContainerStarted","Data":"84c5d1969a5aee73e1b1c62812383f6c07d5e3a82e2a9c7cde62ff7ae027e31b"} Mar 09 18:44:59 crc kubenswrapper[4821]: I0309 18:44:59.735627 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8","Type":"ContainerStarted","Data":"9b6df8fac2d19b0ac95718adb4df4a9516a2204484b7e27597fbb846413c454c"} Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 
18:45:00.149809 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29551365-4nf9z"] Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.151581 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29551365-4nf9z" Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.154486 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.154543 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.163792 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29551365-4nf9z"] Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.305150 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr6hd\" (UniqueName: \"kubernetes.io/projected/16e14459-01b6-4c39-96e8-9e24d5293791-kube-api-access-fr6hd\") pod \"collect-profiles-29551365-4nf9z\" (UID: \"16e14459-01b6-4c39-96e8-9e24d5293791\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551365-4nf9z" Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.305255 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16e14459-01b6-4c39-96e8-9e24d5293791-secret-volume\") pod \"collect-profiles-29551365-4nf9z\" (UID: \"16e14459-01b6-4c39-96e8-9e24d5293791\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551365-4nf9z" Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.305488 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16e14459-01b6-4c39-96e8-9e24d5293791-config-volume\") pod \"collect-profiles-29551365-4nf9z\" (UID: \"16e14459-01b6-4c39-96e8-9e24d5293791\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551365-4nf9z" Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.407202 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr6hd\" (UniqueName: \"kubernetes.io/projected/16e14459-01b6-4c39-96e8-9e24d5293791-kube-api-access-fr6hd\") pod \"collect-profiles-29551365-4nf9z\" (UID: \"16e14459-01b6-4c39-96e8-9e24d5293791\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551365-4nf9z" Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.407374 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16e14459-01b6-4c39-96e8-9e24d5293791-secret-volume\") pod \"collect-profiles-29551365-4nf9z\" (UID: \"16e14459-01b6-4c39-96e8-9e24d5293791\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551365-4nf9z" Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.407499 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16e14459-01b6-4c39-96e8-9e24d5293791-config-volume\") pod \"collect-profiles-29551365-4nf9z\" (UID: \"16e14459-01b6-4c39-96e8-9e24d5293791\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551365-4nf9z" Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.408875 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16e14459-01b6-4c39-96e8-9e24d5293791-config-volume\") pod \"collect-profiles-29551365-4nf9z\" (UID: \"16e14459-01b6-4c39-96e8-9e24d5293791\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551365-4nf9z" Mar 09 18:45:00 crc 
kubenswrapper[4821]: I0309 18:45:00.413053 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16e14459-01b6-4c39-96e8-9e24d5293791-secret-volume\") pod \"collect-profiles-29551365-4nf9z\" (UID: \"16e14459-01b6-4c39-96e8-9e24d5293791\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551365-4nf9z" Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.434371 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr6hd\" (UniqueName: \"kubernetes.io/projected/16e14459-01b6-4c39-96e8-9e24d5293791-kube-api-access-fr6hd\") pod \"collect-profiles-29551365-4nf9z\" (UID: \"16e14459-01b6-4c39-96e8-9e24d5293791\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551365-4nf9z" Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.469183 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29551365-4nf9z" Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.577235 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-hdpct"] Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.578136 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-hdpct" Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.607335 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-hdpct"] Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.613241 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/371d60af-a86d-4bc8-a4a4-e0e97b6620ad-operator-scripts\") pod \"watcher-db-create-hdpct\" (UID: \"371d60af-a86d-4bc8-a4a4-e0e97b6620ad\") " pod="watcher-kuttl-default/watcher-db-create-hdpct" Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.613433 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsnqz\" (UniqueName: \"kubernetes.io/projected/371d60af-a86d-4bc8-a4a4-e0e97b6620ad-kube-api-access-wsnqz\") pod \"watcher-db-create-hdpct\" (UID: \"371d60af-a86d-4bc8-a4a4-e0e97b6620ad\") " pod="watcher-kuttl-default/watcher-db-create-hdpct" Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.695434 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-13c5-account-create-update-8l44r"] Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.697995 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-13c5-account-create-update-8l44r" Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.702606 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.714617 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsnqz\" (UniqueName: \"kubernetes.io/projected/371d60af-a86d-4bc8-a4a4-e0e97b6620ad-kube-api-access-wsnqz\") pod \"watcher-db-create-hdpct\" (UID: \"371d60af-a86d-4bc8-a4a4-e0e97b6620ad\") " pod="watcher-kuttl-default/watcher-db-create-hdpct" Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.714688 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5693552-2476-4bcf-a972-e60391565adf-operator-scripts\") pod \"watcher-13c5-account-create-update-8l44r\" (UID: \"b5693552-2476-4bcf-a972-e60391565adf\") " pod="watcher-kuttl-default/watcher-13c5-account-create-update-8l44r" Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.714712 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjj6m\" (UniqueName: \"kubernetes.io/projected/b5693552-2476-4bcf-a972-e60391565adf-kube-api-access-qjj6m\") pod \"watcher-13c5-account-create-update-8l44r\" (UID: \"b5693552-2476-4bcf-a972-e60391565adf\") " pod="watcher-kuttl-default/watcher-13c5-account-create-update-8l44r" Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.714728 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/371d60af-a86d-4bc8-a4a4-e0e97b6620ad-operator-scripts\") pod \"watcher-db-create-hdpct\" (UID: \"371d60af-a86d-4bc8-a4a4-e0e97b6620ad\") " pod="watcher-kuttl-default/watcher-db-create-hdpct" Mar 09 18:45:00 crc 
kubenswrapper[4821]: I0309 18:45:00.715364 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/371d60af-a86d-4bc8-a4a4-e0e97b6620ad-operator-scripts\") pod \"watcher-db-create-hdpct\" (UID: \"371d60af-a86d-4bc8-a4a4-e0e97b6620ad\") " pod="watcher-kuttl-default/watcher-db-create-hdpct" Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.716348 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-13c5-account-create-update-8l44r"] Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.744119 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsnqz\" (UniqueName: \"kubernetes.io/projected/371d60af-a86d-4bc8-a4a4-e0e97b6620ad-kube-api-access-wsnqz\") pod \"watcher-db-create-hdpct\" (UID: \"371d60af-a86d-4bc8-a4a4-e0e97b6620ad\") " pod="watcher-kuttl-default/watcher-db-create-hdpct" Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.748216 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8","Type":"ContainerStarted","Data":"552b0af6ec9a6106367f73017b956b422bde5197c1691b478f811de46bb2a8b3"} Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.817247 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5693552-2476-4bcf-a972-e60391565adf-operator-scripts\") pod \"watcher-13c5-account-create-update-8l44r\" (UID: \"b5693552-2476-4bcf-a972-e60391565adf\") " pod="watcher-kuttl-default/watcher-13c5-account-create-update-8l44r" Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.817361 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjj6m\" (UniqueName: \"kubernetes.io/projected/b5693552-2476-4bcf-a972-e60391565adf-kube-api-access-qjj6m\") pod 
\"watcher-13c5-account-create-update-8l44r\" (UID: \"b5693552-2476-4bcf-a972-e60391565adf\") " pod="watcher-kuttl-default/watcher-13c5-account-create-update-8l44r" Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.818256 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5693552-2476-4bcf-a972-e60391565adf-operator-scripts\") pod \"watcher-13c5-account-create-update-8l44r\" (UID: \"b5693552-2476-4bcf-a972-e60391565adf\") " pod="watcher-kuttl-default/watcher-13c5-account-create-update-8l44r" Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.842871 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjj6m\" (UniqueName: \"kubernetes.io/projected/b5693552-2476-4bcf-a972-e60391565adf-kube-api-access-qjj6m\") pod \"watcher-13c5-account-create-update-8l44r\" (UID: \"b5693552-2476-4bcf-a972-e60391565adf\") " pod="watcher-kuttl-default/watcher-13c5-account-create-update-8l44r" Mar 09 18:45:00 crc kubenswrapper[4821]: I0309 18:45:00.913596 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-hdpct" Mar 09 18:45:01 crc kubenswrapper[4821]: I0309 18:45:01.014485 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29551365-4nf9z"] Mar 09 18:45:01 crc kubenswrapper[4821]: W0309 18:45:01.039479 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16e14459_01b6_4c39_96e8_9e24d5293791.slice/crio-d619b6e7db2d278b862f8da302ccce0c29ad98cd5b184a77f7bb81dea3517f3c WatchSource:0}: Error finding container d619b6e7db2d278b862f8da302ccce0c29ad98cd5b184a77f7bb81dea3517f3c: Status 404 returned error can't find the container with id d619b6e7db2d278b862f8da302ccce0c29ad98cd5b184a77f7bb81dea3517f3c Mar 09 18:45:01 crc kubenswrapper[4821]: I0309 18:45:01.040101 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-13c5-account-create-update-8l44r" Mar 09 18:45:01 crc kubenswrapper[4821]: I0309 18:45:01.256692 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-hdpct"] Mar 09 18:45:01 crc kubenswrapper[4821]: I0309 18:45:01.684898 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-13c5-account-create-update-8l44r"] Mar 09 18:45:01 crc kubenswrapper[4821]: W0309 18:45:01.702594 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5693552_2476_4bcf_a972_e60391565adf.slice/crio-35e5b1eadb32aa4ba993d0858041de7cf1261557b3d69652da61f323f8ec3b86 WatchSource:0}: Error finding container 35e5b1eadb32aa4ba993d0858041de7cf1261557b3d69652da61f323f8ec3b86: Status 404 returned error can't find the container with id 35e5b1eadb32aa4ba993d0858041de7cf1261557b3d69652da61f323f8ec3b86 Mar 09 18:45:01 crc kubenswrapper[4821]: I0309 18:45:01.761076 4821 
generic.go:334] "Generic (PLEG): container finished" podID="16e14459-01b6-4c39-96e8-9e24d5293791" containerID="ae2a5f1c24b932e0e6a1fb3830cff014a4cea704393efde1fc969378021463b2" exitCode=0 Mar 09 18:45:01 crc kubenswrapper[4821]: I0309 18:45:01.761175 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29551365-4nf9z" event={"ID":"16e14459-01b6-4c39-96e8-9e24d5293791","Type":"ContainerDied","Data":"ae2a5f1c24b932e0e6a1fb3830cff014a4cea704393efde1fc969378021463b2"} Mar 09 18:45:01 crc kubenswrapper[4821]: I0309 18:45:01.761227 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29551365-4nf9z" event={"ID":"16e14459-01b6-4c39-96e8-9e24d5293791","Type":"ContainerStarted","Data":"d619b6e7db2d278b862f8da302ccce0c29ad98cd5b184a77f7bb81dea3517f3c"} Mar 09 18:45:01 crc kubenswrapper[4821]: I0309 18:45:01.763312 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-hdpct" event={"ID":"371d60af-a86d-4bc8-a4a4-e0e97b6620ad","Type":"ContainerStarted","Data":"87285f5f18854effb3df35aa17e969de672d6ba8399d5e89ad78339702f555f6"} Mar 09 18:45:01 crc kubenswrapper[4821]: I0309 18:45:01.763371 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-hdpct" event={"ID":"371d60af-a86d-4bc8-a4a4-e0e97b6620ad","Type":"ContainerStarted","Data":"a212ca28a62d2f3b453fbe7a6e74a1eef4376fb3ed642c655ec3e0eba9b844e0"} Mar 09 18:45:01 crc kubenswrapper[4821]: I0309 18:45:01.768901 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-13c5-account-create-update-8l44r" event={"ID":"b5693552-2476-4bcf-a972-e60391565adf","Type":"ContainerStarted","Data":"35e5b1eadb32aa4ba993d0858041de7cf1261557b3d69652da61f323f8ec3b86"} Mar 09 18:45:02 crc kubenswrapper[4821]: I0309 18:45:02.780621 4821 generic.go:334] "Generic (PLEG): container finished" 
podID="b5693552-2476-4bcf-a972-e60391565adf" containerID="b1a2388f585301116925028424683876bec66ab44315d2e0630e6de88271437b" exitCode=0 Mar 09 18:45:02 crc kubenswrapper[4821]: I0309 18:45:02.780745 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-13c5-account-create-update-8l44r" event={"ID":"b5693552-2476-4bcf-a972-e60391565adf","Type":"ContainerDied","Data":"b1a2388f585301116925028424683876bec66ab44315d2e0630e6de88271437b"} Mar 09 18:45:02 crc kubenswrapper[4821]: I0309 18:45:02.787847 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-hdpct" event={"ID":"371d60af-a86d-4bc8-a4a4-e0e97b6620ad","Type":"ContainerDied","Data":"87285f5f18854effb3df35aa17e969de672d6ba8399d5e89ad78339702f555f6"} Mar 09 18:45:02 crc kubenswrapper[4821]: I0309 18:45:02.787646 4821 generic.go:334] "Generic (PLEG): container finished" podID="371d60af-a86d-4bc8-a4a4-e0e97b6620ad" containerID="87285f5f18854effb3df35aa17e969de672d6ba8399d5e89ad78339702f555f6" exitCode=0 Mar 09 18:45:02 crc kubenswrapper[4821]: I0309 18:45:02.792315 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8","Type":"ContainerStarted","Data":"6f0de1112a3d787bd3902c8e7814f07ba36819754e114173df93280c7dc08bba"} Mar 09 18:45:02 crc kubenswrapper[4821]: I0309 18:45:02.792674 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:45:02 crc kubenswrapper[4821]: I0309 18:45:02.847490 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.877773144 podStartE2EDuration="6.847471752s" podCreationTimestamp="2026-03-09 18:44:56 +0000 UTC" firstStartedPulling="2026-03-09 18:44:57.598799497 +0000 UTC m=+1234.760175353" lastFinishedPulling="2026-03-09 18:45:01.568498105 +0000 UTC m=+1238.729873961" 
observedRunningTime="2026-03-09 18:45:02.841708615 +0000 UTC m=+1240.003084471" watchObservedRunningTime="2026-03-09 18:45:02.847471752 +0000 UTC m=+1240.008847608" Mar 09 18:45:03 crc kubenswrapper[4821]: I0309 18:45:03.286431 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-hdpct" Mar 09 18:45:03 crc kubenswrapper[4821]: I0309 18:45:03.292172 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29551365-4nf9z" Mar 09 18:45:03 crc kubenswrapper[4821]: I0309 18:45:03.465098 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/371d60af-a86d-4bc8-a4a4-e0e97b6620ad-operator-scripts\") pod \"371d60af-a86d-4bc8-a4a4-e0e97b6620ad\" (UID: \"371d60af-a86d-4bc8-a4a4-e0e97b6620ad\") " Mar 09 18:45:03 crc kubenswrapper[4821]: I0309 18:45:03.465219 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16e14459-01b6-4c39-96e8-9e24d5293791-config-volume\") pod \"16e14459-01b6-4c39-96e8-9e24d5293791\" (UID: \"16e14459-01b6-4c39-96e8-9e24d5293791\") " Mar 09 18:45:03 crc kubenswrapper[4821]: I0309 18:45:03.465299 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr6hd\" (UniqueName: \"kubernetes.io/projected/16e14459-01b6-4c39-96e8-9e24d5293791-kube-api-access-fr6hd\") pod \"16e14459-01b6-4c39-96e8-9e24d5293791\" (UID: \"16e14459-01b6-4c39-96e8-9e24d5293791\") " Mar 09 18:45:03 crc kubenswrapper[4821]: I0309 18:45:03.465508 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsnqz\" (UniqueName: \"kubernetes.io/projected/371d60af-a86d-4bc8-a4a4-e0e97b6620ad-kube-api-access-wsnqz\") pod \"371d60af-a86d-4bc8-a4a4-e0e97b6620ad\" (UID: 
\"371d60af-a86d-4bc8-a4a4-e0e97b6620ad\") " Mar 09 18:45:03 crc kubenswrapper[4821]: I0309 18:45:03.465552 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16e14459-01b6-4c39-96e8-9e24d5293791-secret-volume\") pod \"16e14459-01b6-4c39-96e8-9e24d5293791\" (UID: \"16e14459-01b6-4c39-96e8-9e24d5293791\") " Mar 09 18:45:03 crc kubenswrapper[4821]: I0309 18:45:03.466219 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16e14459-01b6-4c39-96e8-9e24d5293791-config-volume" (OuterVolumeSpecName: "config-volume") pod "16e14459-01b6-4c39-96e8-9e24d5293791" (UID: "16e14459-01b6-4c39-96e8-9e24d5293791"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:45:03 crc kubenswrapper[4821]: I0309 18:45:03.466888 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/371d60af-a86d-4bc8-a4a4-e0e97b6620ad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "371d60af-a86d-4bc8-a4a4-e0e97b6620ad" (UID: "371d60af-a86d-4bc8-a4a4-e0e97b6620ad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:45:03 crc kubenswrapper[4821]: I0309 18:45:03.472470 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/371d60af-a86d-4bc8-a4a4-e0e97b6620ad-kube-api-access-wsnqz" (OuterVolumeSpecName: "kube-api-access-wsnqz") pod "371d60af-a86d-4bc8-a4a4-e0e97b6620ad" (UID: "371d60af-a86d-4bc8-a4a4-e0e97b6620ad"). InnerVolumeSpecName "kube-api-access-wsnqz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:45:03 crc kubenswrapper[4821]: I0309 18:45:03.472593 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16e14459-01b6-4c39-96e8-9e24d5293791-kube-api-access-fr6hd" (OuterVolumeSpecName: "kube-api-access-fr6hd") pod "16e14459-01b6-4c39-96e8-9e24d5293791" (UID: "16e14459-01b6-4c39-96e8-9e24d5293791"). InnerVolumeSpecName "kube-api-access-fr6hd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:45:03 crc kubenswrapper[4821]: I0309 18:45:03.473434 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16e14459-01b6-4c39-96e8-9e24d5293791-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "16e14459-01b6-4c39-96e8-9e24d5293791" (UID: "16e14459-01b6-4c39-96e8-9e24d5293791"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:45:03 crc kubenswrapper[4821]: I0309 18:45:03.568041 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr6hd\" (UniqueName: \"kubernetes.io/projected/16e14459-01b6-4c39-96e8-9e24d5293791-kube-api-access-fr6hd\") on node \"crc\" DevicePath \"\"" Mar 09 18:45:03 crc kubenswrapper[4821]: I0309 18:45:03.568074 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsnqz\" (UniqueName: \"kubernetes.io/projected/371d60af-a86d-4bc8-a4a4-e0e97b6620ad-kube-api-access-wsnqz\") on node \"crc\" DevicePath \"\"" Mar 09 18:45:03 crc kubenswrapper[4821]: I0309 18:45:03.568090 4821 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16e14459-01b6-4c39-96e8-9e24d5293791-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 09 18:45:03 crc kubenswrapper[4821]: I0309 18:45:03.568103 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/371d60af-a86d-4bc8-a4a4-e0e97b6620ad-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 18:45:03 crc kubenswrapper[4821]: I0309 18:45:03.568118 4821 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16e14459-01b6-4c39-96e8-9e24d5293791-config-volume\") on node \"crc\" DevicePath \"\"" Mar 09 18:45:03 crc kubenswrapper[4821]: I0309 18:45:03.800872 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29551365-4nf9z" Mar 09 18:45:03 crc kubenswrapper[4821]: I0309 18:45:03.801668 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29551365-4nf9z" event={"ID":"16e14459-01b6-4c39-96e8-9e24d5293791","Type":"ContainerDied","Data":"d619b6e7db2d278b862f8da302ccce0c29ad98cd5b184a77f7bb81dea3517f3c"} Mar 09 18:45:03 crc kubenswrapper[4821]: I0309 18:45:03.801694 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d619b6e7db2d278b862f8da302ccce0c29ad98cd5b184a77f7bb81dea3517f3c" Mar 09 18:45:03 crc kubenswrapper[4821]: I0309 18:45:03.803500 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-hdpct" Mar 09 18:45:03 crc kubenswrapper[4821]: I0309 18:45:03.804054 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-hdpct" event={"ID":"371d60af-a86d-4bc8-a4a4-e0e97b6620ad","Type":"ContainerDied","Data":"a212ca28a62d2f3b453fbe7a6e74a1eef4376fb3ed642c655ec3e0eba9b844e0"} Mar 09 18:45:03 crc kubenswrapper[4821]: I0309 18:45:03.804080 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a212ca28a62d2f3b453fbe7a6e74a1eef4376fb3ed642c655ec3e0eba9b844e0" Mar 09 18:45:04 crc kubenswrapper[4821]: I0309 18:45:04.044661 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-13c5-account-create-update-8l44r" Mar 09 18:45:04 crc kubenswrapper[4821]: I0309 18:45:04.177852 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5693552-2476-4bcf-a972-e60391565adf-operator-scripts\") pod \"b5693552-2476-4bcf-a972-e60391565adf\" (UID: \"b5693552-2476-4bcf-a972-e60391565adf\") " Mar 09 18:45:04 crc kubenswrapper[4821]: I0309 18:45:04.177970 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjj6m\" (UniqueName: \"kubernetes.io/projected/b5693552-2476-4bcf-a972-e60391565adf-kube-api-access-qjj6m\") pod \"b5693552-2476-4bcf-a972-e60391565adf\" (UID: \"b5693552-2476-4bcf-a972-e60391565adf\") " Mar 09 18:45:04 crc kubenswrapper[4821]: I0309 18:45:04.179281 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5693552-2476-4bcf-a972-e60391565adf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b5693552-2476-4bcf-a972-e60391565adf" (UID: "b5693552-2476-4bcf-a972-e60391565adf"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 18:45:04 crc kubenswrapper[4821]: I0309 18:45:04.189626 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5693552-2476-4bcf-a972-e60391565adf-kube-api-access-qjj6m" (OuterVolumeSpecName: "kube-api-access-qjj6m") pod "b5693552-2476-4bcf-a972-e60391565adf" (UID: "b5693552-2476-4bcf-a972-e60391565adf"). InnerVolumeSpecName "kube-api-access-qjj6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:45:04 crc kubenswrapper[4821]: I0309 18:45:04.279727 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5693552-2476-4bcf-a972-e60391565adf-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 18:45:04 crc kubenswrapper[4821]: I0309 18:45:04.279761 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjj6m\" (UniqueName: \"kubernetes.io/projected/b5693552-2476-4bcf-a972-e60391565adf-kube-api-access-qjj6m\") on node \"crc\" DevicePath \"\"" Mar 09 18:45:04 crc kubenswrapper[4821]: I0309 18:45:04.815174 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-13c5-account-create-update-8l44r" event={"ID":"b5693552-2476-4bcf-a972-e60391565adf","Type":"ContainerDied","Data":"35e5b1eadb32aa4ba993d0858041de7cf1261557b3d69652da61f323f8ec3b86"} Mar 09 18:45:04 crc kubenswrapper[4821]: I0309 18:45:04.815220 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35e5b1eadb32aa4ba993d0858041de7cf1261557b3d69652da61f323f8ec3b86" Mar 09 18:45:04 crc kubenswrapper[4821]: I0309 18:45:04.815230 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-13c5-account-create-update-8l44r" Mar 09 18:45:05 crc kubenswrapper[4821]: I0309 18:45:05.072927 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/kube-state-metrics-0" Mar 09 18:45:06 crc kubenswrapper[4821]: I0309 18:45:06.040309 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-whc2t"] Mar 09 18:45:06 crc kubenswrapper[4821]: E0309 18:45:06.040709 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16e14459-01b6-4c39-96e8-9e24d5293791" containerName="collect-profiles" Mar 09 18:45:06 crc kubenswrapper[4821]: I0309 18:45:06.040723 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="16e14459-01b6-4c39-96e8-9e24d5293791" containerName="collect-profiles" Mar 09 18:45:06 crc kubenswrapper[4821]: E0309 18:45:06.040750 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5693552-2476-4bcf-a972-e60391565adf" containerName="mariadb-account-create-update" Mar 09 18:45:06 crc kubenswrapper[4821]: I0309 18:45:06.040758 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5693552-2476-4bcf-a972-e60391565adf" containerName="mariadb-account-create-update" Mar 09 18:45:06 crc kubenswrapper[4821]: E0309 18:45:06.040776 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="371d60af-a86d-4bc8-a4a4-e0e97b6620ad" containerName="mariadb-database-create" Mar 09 18:45:06 crc kubenswrapper[4821]: I0309 18:45:06.040784 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="371d60af-a86d-4bc8-a4a4-e0e97b6620ad" containerName="mariadb-database-create" Mar 09 18:45:06 crc kubenswrapper[4821]: I0309 18:45:06.040979 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5693552-2476-4bcf-a972-e60391565adf" containerName="mariadb-account-create-update" Mar 09 18:45:06 crc kubenswrapper[4821]: I0309 18:45:06.040996 4821 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="16e14459-01b6-4c39-96e8-9e24d5293791" containerName="collect-profiles" Mar 09 18:45:06 crc kubenswrapper[4821]: I0309 18:45:06.041007 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="371d60af-a86d-4bc8-a4a4-e0e97b6620ad" containerName="mariadb-database-create" Mar 09 18:45:06 crc kubenswrapper[4821]: I0309 18:45:06.041645 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-whc2t" Mar 09 18:45:06 crc kubenswrapper[4821]: I0309 18:45:06.046103 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-d9nsp" Mar 09 18:45:06 crc kubenswrapper[4821]: I0309 18:45:06.046170 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Mar 09 18:45:06 crc kubenswrapper[4821]: I0309 18:45:06.053007 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-whc2t"] Mar 09 18:45:06 crc kubenswrapper[4821]: I0309 18:45:06.118644 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9thpt\" (UniqueName: \"kubernetes.io/projected/785fc44b-c186-4374-8023-229ca8f897d1-kube-api-access-9thpt\") pod \"watcher-kuttl-db-sync-whc2t\" (UID: \"785fc44b-c186-4374-8023-229ca8f897d1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-whc2t" Mar 09 18:45:06 crc kubenswrapper[4821]: I0309 18:45:06.118715 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/785fc44b-c186-4374-8023-229ca8f897d1-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-whc2t\" (UID: \"785fc44b-c186-4374-8023-229ca8f897d1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-whc2t" Mar 09 18:45:06 crc kubenswrapper[4821]: I0309 18:45:06.118741 4821 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/785fc44b-c186-4374-8023-229ca8f897d1-config-data\") pod \"watcher-kuttl-db-sync-whc2t\" (UID: \"785fc44b-c186-4374-8023-229ca8f897d1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-whc2t" Mar 09 18:45:06 crc kubenswrapper[4821]: I0309 18:45:06.118771 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/785fc44b-c186-4374-8023-229ca8f897d1-db-sync-config-data\") pod \"watcher-kuttl-db-sync-whc2t\" (UID: \"785fc44b-c186-4374-8023-229ca8f897d1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-whc2t" Mar 09 18:45:06 crc kubenswrapper[4821]: I0309 18:45:06.220398 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/785fc44b-c186-4374-8023-229ca8f897d1-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-whc2t\" (UID: \"785fc44b-c186-4374-8023-229ca8f897d1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-whc2t" Mar 09 18:45:06 crc kubenswrapper[4821]: I0309 18:45:06.220480 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/785fc44b-c186-4374-8023-229ca8f897d1-config-data\") pod \"watcher-kuttl-db-sync-whc2t\" (UID: \"785fc44b-c186-4374-8023-229ca8f897d1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-whc2t" Mar 09 18:45:06 crc kubenswrapper[4821]: I0309 18:45:06.220562 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/785fc44b-c186-4374-8023-229ca8f897d1-db-sync-config-data\") pod \"watcher-kuttl-db-sync-whc2t\" (UID: \"785fc44b-c186-4374-8023-229ca8f897d1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-whc2t" Mar 09 18:45:06 crc 
kubenswrapper[4821]: I0309 18:45:06.220687 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9thpt\" (UniqueName: \"kubernetes.io/projected/785fc44b-c186-4374-8023-229ca8f897d1-kube-api-access-9thpt\") pod \"watcher-kuttl-db-sync-whc2t\" (UID: \"785fc44b-c186-4374-8023-229ca8f897d1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-whc2t" Mar 09 18:45:06 crc kubenswrapper[4821]: I0309 18:45:06.225907 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/785fc44b-c186-4374-8023-229ca8f897d1-config-data\") pod \"watcher-kuttl-db-sync-whc2t\" (UID: \"785fc44b-c186-4374-8023-229ca8f897d1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-whc2t" Mar 09 18:45:06 crc kubenswrapper[4821]: I0309 18:45:06.233856 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/785fc44b-c186-4374-8023-229ca8f897d1-db-sync-config-data\") pod \"watcher-kuttl-db-sync-whc2t\" (UID: \"785fc44b-c186-4374-8023-229ca8f897d1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-whc2t" Mar 09 18:45:06 crc kubenswrapper[4821]: I0309 18:45:06.234204 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/785fc44b-c186-4374-8023-229ca8f897d1-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-whc2t\" (UID: \"785fc44b-c186-4374-8023-229ca8f897d1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-whc2t" Mar 09 18:45:06 crc kubenswrapper[4821]: I0309 18:45:06.245724 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9thpt\" (UniqueName: \"kubernetes.io/projected/785fc44b-c186-4374-8023-229ca8f897d1-kube-api-access-9thpt\") pod \"watcher-kuttl-db-sync-whc2t\" (UID: \"785fc44b-c186-4374-8023-229ca8f897d1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-whc2t" Mar 09 18:45:06 crc 
kubenswrapper[4821]: I0309 18:45:06.362977 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-whc2t" Mar 09 18:45:06 crc kubenswrapper[4821]: I0309 18:45:06.807653 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-whc2t"] Mar 09 18:45:06 crc kubenswrapper[4821]: I0309 18:45:06.857336 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-whc2t" event={"ID":"785fc44b-c186-4374-8023-229ca8f897d1","Type":"ContainerStarted","Data":"f02c03ed1a5317be08fbf8ee9610193f9dc2b125312435ca8cf2cf60be8c3ea3"} Mar 09 18:45:23 crc kubenswrapper[4821]: E0309 18:45:23.484193 4821 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.110:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Mar 09 18:45:23 crc kubenswrapper[4821]: E0309 18:45:23.484738 4821 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.110:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Mar 09 18:45:23 crc kubenswrapper[4821]: E0309 18:45:23.484873 4821 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-kuttl-db-sync,Image:38.102.83.110:5001/podified-master-centos10/openstack-watcher-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9thpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
watcher-kuttl-db-sync-whc2t_watcher-kuttl-default(785fc44b-c186-4374-8023-229ca8f897d1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 09 18:45:23 crc kubenswrapper[4821]: E0309 18:45:23.486263 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-kuttl-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/watcher-kuttl-db-sync-whc2t" podUID="785fc44b-c186-4374-8023-229ca8f897d1" Mar 09 18:45:24 crc kubenswrapper[4821]: E0309 18:45:24.000417 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-kuttl-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.110:5001/podified-master-centos10/openstack-watcher-api:watcher_latest\\\"\"" pod="watcher-kuttl-default/watcher-kuttl-db-sync-whc2t" podUID="785fc44b-c186-4374-8023-229ca8f897d1" Mar 09 18:45:27 crc kubenswrapper[4821]: I0309 18:45:27.110717 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:45:40 crc kubenswrapper[4821]: I0309 18:45:40.122278 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-whc2t" event={"ID":"785fc44b-c186-4374-8023-229ca8f897d1","Type":"ContainerStarted","Data":"fb7fe433ccab648dc88048674a31b26f7330cc848c0c63044a54bece83339fa6"} Mar 09 18:45:40 crc kubenswrapper[4821]: I0309 18:45:40.146178 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-whc2t" podStartSLOduration=1.351799 podStartE2EDuration="34.146157389s" podCreationTimestamp="2026-03-09 18:45:06 +0000 UTC" firstStartedPulling="2026-03-09 18:45:06.840299099 +0000 UTC m=+1244.001674955" lastFinishedPulling="2026-03-09 18:45:39.634657488 +0000 UTC m=+1276.796033344" 
observedRunningTime="2026-03-09 18:45:40.141554743 +0000 UTC m=+1277.302930629" watchObservedRunningTime="2026-03-09 18:45:40.146157389 +0000 UTC m=+1277.307533245" Mar 09 18:45:43 crc kubenswrapper[4821]: I0309 18:45:43.147420 4821 generic.go:334] "Generic (PLEG): container finished" podID="785fc44b-c186-4374-8023-229ca8f897d1" containerID="fb7fe433ccab648dc88048674a31b26f7330cc848c0c63044a54bece83339fa6" exitCode=0 Mar 09 18:45:43 crc kubenswrapper[4821]: I0309 18:45:43.147513 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-whc2t" event={"ID":"785fc44b-c186-4374-8023-229ca8f897d1","Type":"ContainerDied","Data":"fb7fe433ccab648dc88048674a31b26f7330cc848c0c63044a54bece83339fa6"} Mar 09 18:45:44 crc kubenswrapper[4821]: I0309 18:45:44.412597 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-whc2t" Mar 09 18:45:44 crc kubenswrapper[4821]: I0309 18:45:44.460097 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/785fc44b-c186-4374-8023-229ca8f897d1-config-data\") pod \"785fc44b-c186-4374-8023-229ca8f897d1\" (UID: \"785fc44b-c186-4374-8023-229ca8f897d1\") " Mar 09 18:45:44 crc kubenswrapper[4821]: I0309 18:45:44.460308 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9thpt\" (UniqueName: \"kubernetes.io/projected/785fc44b-c186-4374-8023-229ca8f897d1-kube-api-access-9thpt\") pod \"785fc44b-c186-4374-8023-229ca8f897d1\" (UID: \"785fc44b-c186-4374-8023-229ca8f897d1\") " Mar 09 18:45:44 crc kubenswrapper[4821]: I0309 18:45:44.460599 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/785fc44b-c186-4374-8023-229ca8f897d1-db-sync-config-data\") pod \"785fc44b-c186-4374-8023-229ca8f897d1\" (UID: 
\"785fc44b-c186-4374-8023-229ca8f897d1\") " Mar 09 18:45:44 crc kubenswrapper[4821]: I0309 18:45:44.460746 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/785fc44b-c186-4374-8023-229ca8f897d1-combined-ca-bundle\") pod \"785fc44b-c186-4374-8023-229ca8f897d1\" (UID: \"785fc44b-c186-4374-8023-229ca8f897d1\") " Mar 09 18:45:44 crc kubenswrapper[4821]: I0309 18:45:44.465632 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/785fc44b-c186-4374-8023-229ca8f897d1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "785fc44b-c186-4374-8023-229ca8f897d1" (UID: "785fc44b-c186-4374-8023-229ca8f897d1"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:45:44 crc kubenswrapper[4821]: I0309 18:45:44.473943 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/785fc44b-c186-4374-8023-229ca8f897d1-kube-api-access-9thpt" (OuterVolumeSpecName: "kube-api-access-9thpt") pod "785fc44b-c186-4374-8023-229ca8f897d1" (UID: "785fc44b-c186-4374-8023-229ca8f897d1"). InnerVolumeSpecName "kube-api-access-9thpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:45:44 crc kubenswrapper[4821]: I0309 18:45:44.492876 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/785fc44b-c186-4374-8023-229ca8f897d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "785fc44b-c186-4374-8023-229ca8f897d1" (UID: "785fc44b-c186-4374-8023-229ca8f897d1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:45:44 crc kubenswrapper[4821]: I0309 18:45:44.533558 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/785fc44b-c186-4374-8023-229ca8f897d1-config-data" (OuterVolumeSpecName: "config-data") pod "785fc44b-c186-4374-8023-229ca8f897d1" (UID: "785fc44b-c186-4374-8023-229ca8f897d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:45:44 crc kubenswrapper[4821]: I0309 18:45:44.562393 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9thpt\" (UniqueName: \"kubernetes.io/projected/785fc44b-c186-4374-8023-229ca8f897d1-kube-api-access-9thpt\") on node \"crc\" DevicePath \"\"" Mar 09 18:45:44 crc kubenswrapper[4821]: I0309 18:45:44.562450 4821 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/785fc44b-c186-4374-8023-229ca8f897d1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 18:45:44 crc kubenswrapper[4821]: I0309 18:45:44.562462 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/785fc44b-c186-4374-8023-229ca8f897d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 18:45:44 crc kubenswrapper[4821]: I0309 18:45:44.562477 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/785fc44b-c186-4374-8023-229ca8f897d1-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.167011 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-whc2t" event={"ID":"785fc44b-c186-4374-8023-229ca8f897d1","Type":"ContainerDied","Data":"f02c03ed1a5317be08fbf8ee9610193f9dc2b125312435ca8cf2cf60be8c3ea3"} Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.167428 4821 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="f02c03ed1a5317be08fbf8ee9610193f9dc2b125312435ca8cf2cf60be8c3ea3" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.167128 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-whc2t" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.524721 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 18:45:45 crc kubenswrapper[4821]: E0309 18:45:45.525808 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="785fc44b-c186-4374-8023-229ca8f897d1" containerName="watcher-kuttl-db-sync" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.525917 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="785fc44b-c186-4374-8023-229ca8f897d1" containerName="watcher-kuttl-db-sync" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.526201 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="785fc44b-c186-4374-8023-229ca8f897d1" containerName="watcher-kuttl-db-sync" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.527440 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.530251 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-d9nsp" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.530317 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.532778 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.534353 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.538036 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.540642 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.548284 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.578457 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65318c1d-df52-4dcf-873f-a76c7edcdeae-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"65318c1d-df52-4dcf-873f-a76c7edcdeae\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.578506 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/608af3f1-6a88-434c-add7-2fe7aa96974b-logs\") pod \"watcher-kuttl-api-0\" (UID: \"608af3f1-6a88-434c-add7-2fe7aa96974b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.578549 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/65318c1d-df52-4dcf-873f-a76c7edcdeae-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"65318c1d-df52-4dcf-873f-a76c7edcdeae\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.578578 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65318c1d-df52-4dcf-873f-a76c7edcdeae-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"65318c1d-df52-4dcf-873f-a76c7edcdeae\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.578597 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js8jw\" (UniqueName: \"kubernetes.io/projected/608af3f1-6a88-434c-add7-2fe7aa96974b-kube-api-access-js8jw\") pod \"watcher-kuttl-api-0\" (UID: \"608af3f1-6a88-434c-add7-2fe7aa96974b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.578614 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65318c1d-df52-4dcf-873f-a76c7edcdeae-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"65318c1d-df52-4dcf-873f-a76c7edcdeae\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.578659 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/608af3f1-6a88-434c-add7-2fe7aa96974b-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"608af3f1-6a88-434c-add7-2fe7aa96974b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.578684 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/608af3f1-6a88-434c-add7-2fe7aa96974b-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"608af3f1-6a88-434c-add7-2fe7aa96974b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 18:45:45 crc kubenswrapper[4821]: 
I0309 18:45:45.578703 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qbx7\" (UniqueName: \"kubernetes.io/projected/65318c1d-df52-4dcf-873f-a76c7edcdeae-kube-api-access-8qbx7\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"65318c1d-df52-4dcf-873f-a76c7edcdeae\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.578737 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/608af3f1-6a88-434c-add7-2fe7aa96974b-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"608af3f1-6a88-434c-add7-2fe7aa96974b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.657866 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.669920 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.686916 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.696174 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65318c1d-df52-4dcf-873f-a76c7edcdeae-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"65318c1d-df52-4dcf-873f-a76c7edcdeae\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.696253 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9439e51-042e-4604-9368-b6e229dd141e-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"e9439e51-042e-4604-9368-b6e229dd141e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.696304 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/608af3f1-6a88-434c-add7-2fe7aa96974b-logs\") pod \"watcher-kuttl-api-0\" (UID: \"608af3f1-6a88-434c-add7-2fe7aa96974b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.696367 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/65318c1d-df52-4dcf-873f-a76c7edcdeae-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"65318c1d-df52-4dcf-873f-a76c7edcdeae\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.696430 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wfv4\" (UniqueName: \"kubernetes.io/projected/e9439e51-042e-4604-9368-b6e229dd141e-kube-api-access-9wfv4\") pod \"watcher-kuttl-applier-0\" (UID: \"e9439e51-042e-4604-9368-b6e229dd141e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.696477 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65318c1d-df52-4dcf-873f-a76c7edcdeae-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"65318c1d-df52-4dcf-873f-a76c7edcdeae\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.696508 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js8jw\" (UniqueName: \"kubernetes.io/projected/608af3f1-6a88-434c-add7-2fe7aa96974b-kube-api-access-js8jw\") pod \"watcher-kuttl-api-0\" (UID: \"608af3f1-6a88-434c-add7-2fe7aa96974b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.696535 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65318c1d-df52-4dcf-873f-a76c7edcdeae-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"65318c1d-df52-4dcf-873f-a76c7edcdeae\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.696643 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/608af3f1-6a88-434c-add7-2fe7aa96974b-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"608af3f1-6a88-434c-add7-2fe7aa96974b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.696690 4821 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/608af3f1-6a88-434c-add7-2fe7aa96974b-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"608af3f1-6a88-434c-add7-2fe7aa96974b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.696732 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qbx7\" (UniqueName: \"kubernetes.io/projected/65318c1d-df52-4dcf-873f-a76c7edcdeae-kube-api-access-8qbx7\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"65318c1d-df52-4dcf-873f-a76c7edcdeae\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.696793 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9439e51-042e-4604-9368-b6e229dd141e-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"e9439e51-042e-4604-9368-b6e229dd141e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.696831 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9439e51-042e-4604-9368-b6e229dd141e-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"e9439e51-042e-4604-9368-b6e229dd141e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.696874 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/608af3f1-6a88-434c-add7-2fe7aa96974b-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"608af3f1-6a88-434c-add7-2fe7aa96974b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.700438 4821 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/608af3f1-6a88-434c-add7-2fe7aa96974b-logs\") pod \"watcher-kuttl-api-0\" (UID: \"608af3f1-6a88-434c-add7-2fe7aa96974b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.704377 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65318c1d-df52-4dcf-873f-a76c7edcdeae-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"65318c1d-df52-4dcf-873f-a76c7edcdeae\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.707178 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/65318c1d-df52-4dcf-873f-a76c7edcdeae-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"65318c1d-df52-4dcf-873f-a76c7edcdeae\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.720016 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/608af3f1-6a88-434c-add7-2fe7aa96974b-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"608af3f1-6a88-434c-add7-2fe7aa96974b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.720186 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65318c1d-df52-4dcf-873f-a76c7edcdeae-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"65318c1d-df52-4dcf-873f-a76c7edcdeae\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.720014 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/608af3f1-6a88-434c-add7-2fe7aa96974b-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"608af3f1-6a88-434c-add7-2fe7aa96974b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.723279 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/608af3f1-6a88-434c-add7-2fe7aa96974b-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"608af3f1-6a88-434c-add7-2fe7aa96974b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.727178 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65318c1d-df52-4dcf-873f-a76c7edcdeae-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"65318c1d-df52-4dcf-873f-a76c7edcdeae\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.729407 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js8jw\" (UniqueName: \"kubernetes.io/projected/608af3f1-6a88-434c-add7-2fe7aa96974b-kube-api-access-js8jw\") pod \"watcher-kuttl-api-0\" (UID: \"608af3f1-6a88-434c-add7-2fe7aa96974b\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.729567 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.734921 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qbx7\" (UniqueName: \"kubernetes.io/projected/65318c1d-df52-4dcf-873f-a76c7edcdeae-kube-api-access-8qbx7\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"65318c1d-df52-4dcf-873f-a76c7edcdeae\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 
18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.798152 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9439e51-042e-4604-9368-b6e229dd141e-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"e9439e51-042e-4604-9368-b6e229dd141e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.798210 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wfv4\" (UniqueName: \"kubernetes.io/projected/e9439e51-042e-4604-9368-b6e229dd141e-kube-api-access-9wfv4\") pod \"watcher-kuttl-applier-0\" (UID: \"e9439e51-042e-4604-9368-b6e229dd141e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.798290 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9439e51-042e-4604-9368-b6e229dd141e-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"e9439e51-042e-4604-9368-b6e229dd141e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.798313 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9439e51-042e-4604-9368-b6e229dd141e-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"e9439e51-042e-4604-9368-b6e229dd141e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.798955 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9439e51-042e-4604-9368-b6e229dd141e-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"e9439e51-042e-4604-9368-b6e229dd141e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.802253 4821 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9439e51-042e-4604-9368-b6e229dd141e-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"e9439e51-042e-4604-9368-b6e229dd141e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.803867 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9439e51-042e-4604-9368-b6e229dd141e-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"e9439e51-042e-4604-9368-b6e229dd141e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.820688 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wfv4\" (UniqueName: \"kubernetes.io/projected/e9439e51-042e-4604-9368-b6e229dd141e-kube-api-access-9wfv4\") pod \"watcher-kuttl-applier-0\" (UID: \"e9439e51-042e-4604-9368-b6e229dd141e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.860880 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.899633 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 18:45:45 crc kubenswrapper[4821]: I0309 18:45:45.915586 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:45:46 crc kubenswrapper[4821]: I0309 18:45:46.516516 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 18:45:46 crc kubenswrapper[4821]: I0309 18:45:46.611664 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 18:45:46 crc kubenswrapper[4821]: W0309 18:45:46.616254 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65318c1d_df52_4dcf_873f_a76c7edcdeae.slice/crio-0b9ec0f60bacb35eb3b8e0e9a96d8425a27f5f5d4b945b0cafc63ae1d15afe1b WatchSource:0}: Error finding container 0b9ec0f60bacb35eb3b8e0e9a96d8425a27f5f5d4b945b0cafc63ae1d15afe1b: Status 404 returned error can't find the container with id 0b9ec0f60bacb35eb3b8e0e9a96d8425a27f5f5d4b945b0cafc63ae1d15afe1b Mar 09 18:45:46 crc kubenswrapper[4821]: I0309 18:45:46.664560 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 18:45:46 crc kubenswrapper[4821]: W0309 18:45:46.676634 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod608af3f1_6a88_434c_add7_2fe7aa96974b.slice/crio-dfea752d18658e755e02cc9f3af4758be64b1e061161b4be03634ae31fd1ea4f WatchSource:0}: Error finding container dfea752d18658e755e02cc9f3af4758be64b1e061161b4be03634ae31fd1ea4f: Status 404 returned error can't find the container with id dfea752d18658e755e02cc9f3af4758be64b1e061161b4be03634ae31fd1ea4f Mar 09 18:45:47 crc kubenswrapper[4821]: I0309 18:45:47.208672 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
event={"ID":"e9439e51-042e-4604-9368-b6e229dd141e","Type":"ContainerStarted","Data":"6ae38b7d13a2cf1b0ab36722bad81c19701f3991fc815cd5639feaba8d8dcaa5"} Mar 09 18:45:47 crc kubenswrapper[4821]: I0309 18:45:47.210177 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"608af3f1-6a88-434c-add7-2fe7aa96974b","Type":"ContainerStarted","Data":"3dcce34d4ea2766ad4231aa982341a6a56f2e42c98bb79193c5a62e47255f373"} Mar 09 18:45:47 crc kubenswrapper[4821]: I0309 18:45:47.210223 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"608af3f1-6a88-434c-add7-2fe7aa96974b","Type":"ContainerStarted","Data":"dfea752d18658e755e02cc9f3af4758be64b1e061161b4be03634ae31fd1ea4f"} Mar 09 18:45:47 crc kubenswrapper[4821]: I0309 18:45:47.210950 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"65318c1d-df52-4dcf-873f-a76c7edcdeae","Type":"ContainerStarted","Data":"0b9ec0f60bacb35eb3b8e0e9a96d8425a27f5f5d4b945b0cafc63ae1d15afe1b"} Mar 09 18:45:48 crc kubenswrapper[4821]: I0309 18:45:48.223025 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"608af3f1-6a88-434c-add7-2fe7aa96974b","Type":"ContainerStarted","Data":"be2c451f83829a6d3178de1ef4a9cdffb52fea1df2b49c942114471c150de633"} Mar 09 18:45:48 crc kubenswrapper[4821]: I0309 18:45:48.223428 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 18:45:48 crc kubenswrapper[4821]: I0309 18:45:48.226597 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"65318c1d-df52-4dcf-873f-a76c7edcdeae","Type":"ContainerStarted","Data":"5ae464351ab8844e65911aec0384e8a54739c93960f711a7dcaea5df8601dad6"} Mar 09 18:45:48 crc kubenswrapper[4821]: I0309 
18:45:48.230962 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"e9439e51-042e-4604-9368-b6e229dd141e","Type":"ContainerStarted","Data":"4ea0ecbdf32747ad2c810a775f1247953a9f231e393641d12b4ee835df2641f3"} Mar 09 18:45:48 crc kubenswrapper[4821]: I0309 18:45:48.248261 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=3.248239402 podStartE2EDuration="3.248239402s" podCreationTimestamp="2026-03-09 18:45:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:45:48.245709594 +0000 UTC m=+1285.407085450" watchObservedRunningTime="2026-03-09 18:45:48.248239402 +0000 UTC m=+1285.409615258" Mar 09 18:45:48 crc kubenswrapper[4821]: I0309 18:45:48.271410 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.327578517 podStartE2EDuration="3.271391172s" podCreationTimestamp="2026-03-09 18:45:45 +0000 UTC" firstStartedPulling="2026-03-09 18:45:46.524751667 +0000 UTC m=+1283.686127523" lastFinishedPulling="2026-03-09 18:45:47.468564322 +0000 UTC m=+1284.629940178" observedRunningTime="2026-03-09 18:45:48.265678707 +0000 UTC m=+1285.427054573" watchObservedRunningTime="2026-03-09 18:45:48.271391172 +0000 UTC m=+1285.432767028" Mar 09 18:45:48 crc kubenswrapper[4821]: I0309 18:45:48.293386 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.42255894 podStartE2EDuration="3.29335932s" podCreationTimestamp="2026-03-09 18:45:45 +0000 UTC" firstStartedPulling="2026-03-09 18:45:46.619212877 +0000 UTC m=+1283.780588733" lastFinishedPulling="2026-03-09 18:45:47.490013227 +0000 UTC m=+1284.651389113" observedRunningTime="2026-03-09 
18:45:48.287700696 +0000 UTC m=+1285.449076562" watchObservedRunningTime="2026-03-09 18:45:48.29335932 +0000 UTC m=+1285.454735186" Mar 09 18:45:50 crc kubenswrapper[4821]: I0309 18:45:50.771890 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 18:45:50 crc kubenswrapper[4821]: I0309 18:45:50.861493 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 18:45:50 crc kubenswrapper[4821]: I0309 18:45:50.900127 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 18:45:55 crc kubenswrapper[4821]: I0309 18:45:55.861620 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 18:45:55 crc kubenswrapper[4821]: I0309 18:45:55.887388 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 18:45:55 crc kubenswrapper[4821]: I0309 18:45:55.900596 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 18:45:55 crc kubenswrapper[4821]: I0309 18:45:55.911746 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 18:45:55 crc kubenswrapper[4821]: I0309 18:45:55.917214 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:45:55 crc kubenswrapper[4821]: I0309 18:45:55.949770 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:45:56 crc kubenswrapper[4821]: I0309 18:45:56.307763 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:45:56 crc kubenswrapper[4821]: I0309 18:45:56.312975 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 18:45:56 crc kubenswrapper[4821]: I0309 18:45:56.339614 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:45:56 crc kubenswrapper[4821]: I0309 18:45:56.343182 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 18:45:58 crc kubenswrapper[4821]: I0309 18:45:58.502999 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 18:45:58 crc kubenswrapper[4821]: I0309 18:45:58.503554 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8" containerName="ceilometer-central-agent" containerID="cri-o://84c5d1969a5aee73e1b1c62812383f6c07d5e3a82e2a9c7cde62ff7ae027e31b" gracePeriod=30 Mar 09 18:45:58 crc kubenswrapper[4821]: I0309 18:45:58.503648 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8" containerName="ceilometer-notification-agent" containerID="cri-o://9b6df8fac2d19b0ac95718adb4df4a9516a2204484b7e27597fbb846413c454c" gracePeriod=30 Mar 09 18:45:58 crc kubenswrapper[4821]: I0309 18:45:58.503644 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8" containerName="sg-core" containerID="cri-o://552b0af6ec9a6106367f73017b956b422bde5197c1691b478f811de46bb2a8b3" gracePeriod=30 Mar 09 18:45:58 crc kubenswrapper[4821]: I0309 18:45:58.503771 4821 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8" containerName="proxy-httpd" containerID="cri-o://6f0de1112a3d787bd3902c8e7814f07ba36819754e114173df93280c7dc08bba" gracePeriod=30 Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.324673 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.332666 4821 generic.go:334] "Generic (PLEG): container finished" podID="d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8" containerID="6f0de1112a3d787bd3902c8e7814f07ba36819754e114173df93280c7dc08bba" exitCode=0 Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.332692 4821 generic.go:334] "Generic (PLEG): container finished" podID="d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8" containerID="552b0af6ec9a6106367f73017b956b422bde5197c1691b478f811de46bb2a8b3" exitCode=2 Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.332702 4821 generic.go:334] "Generic (PLEG): container finished" podID="d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8" containerID="9b6df8fac2d19b0ac95718adb4df4a9516a2204484b7e27597fbb846413c454c" exitCode=0 Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.332709 4821 generic.go:334] "Generic (PLEG): container finished" podID="d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8" containerID="84c5d1969a5aee73e1b1c62812383f6c07d5e3a82e2a9c7cde62ff7ae027e31b" exitCode=0 Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.332729 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8","Type":"ContainerDied","Data":"6f0de1112a3d787bd3902c8e7814f07ba36819754e114173df93280c7dc08bba"} Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.332752 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8","Type":"ContainerDied","Data":"552b0af6ec9a6106367f73017b956b422bde5197c1691b478f811de46bb2a8b3"} Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.332761 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8","Type":"ContainerDied","Data":"9b6df8fac2d19b0ac95718adb4df4a9516a2204484b7e27597fbb846413c454c"} Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.332770 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8","Type":"ContainerDied","Data":"84c5d1969a5aee73e1b1c62812383f6c07d5e3a82e2a9c7cde62ff7ae027e31b"} Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.332779 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8","Type":"ContainerDied","Data":"8f653681d44e253d3b7ffbd6d345677601a96ed95280ef2888acb397610f1612"} Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.332793 4821 scope.go:117] "RemoveContainer" containerID="6f0de1112a3d787bd3902c8e7814f07ba36819754e114173df93280c7dc08bba" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.332891 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.335803 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms68n\" (UniqueName: \"kubernetes.io/projected/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-kube-api-access-ms68n\") pod \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\" (UID: \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\") " Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.335831 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-ceilometer-tls-certs\") pod \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\" (UID: \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\") " Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.335932 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-run-httpd\") pod \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\" (UID: \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\") " Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.335981 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-scripts\") pod \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\" (UID: \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\") " Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.336012 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-config-data\") pod \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\" (UID: \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\") " Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.336082 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-combined-ca-bundle\") pod \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\" (UID: \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\") " Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.336104 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-log-httpd\") pod \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\" (UID: \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\") " Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.336136 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-sg-core-conf-yaml\") pod \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\" (UID: \"d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8\") " Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.337735 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8" (UID: "d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.338023 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8" (UID: "d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.355501 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-scripts" (OuterVolumeSpecName: "scripts") pod "d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8" (UID: "d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.372883 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-kube-api-access-ms68n" (OuterVolumeSpecName: "kube-api-access-ms68n") pod "d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8" (UID: "d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8"). InnerVolumeSpecName "kube-api-access-ms68n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.378480 4821 scope.go:117] "RemoveContainer" containerID="552b0af6ec9a6106367f73017b956b422bde5197c1691b478f811de46bb2a8b3" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.398422 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8" (UID: "d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.408063 4821 scope.go:117] "RemoveContainer" containerID="9b6df8fac2d19b0ac95718adb4df4a9516a2204484b7e27597fbb846413c454c" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.429665 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8" (UID: "d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.442369 4821 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.442627 4821 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.442701 4821 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.442771 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms68n\" (UniqueName: \"kubernetes.io/projected/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-kube-api-access-ms68n\") on node \"crc\" DevicePath \"\"" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.442839 4821 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-run-httpd\") 
on node \"crc\" DevicePath \"\"" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.442909 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.446648 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8" (UID: "d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.451888 4821 scope.go:117] "RemoveContainer" containerID="84c5d1969a5aee73e1b1c62812383f6c07d5e3a82e2a9c7cde62ff7ae027e31b" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.463943 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-config-data" (OuterVolumeSpecName: "config-data") pod "d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8" (UID: "d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.479811 4821 scope.go:117] "RemoveContainer" containerID="6f0de1112a3d787bd3902c8e7814f07ba36819754e114173df93280c7dc08bba" Mar 09 18:45:59 crc kubenswrapper[4821]: E0309 18:45:59.480473 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f0de1112a3d787bd3902c8e7814f07ba36819754e114173df93280c7dc08bba\": container with ID starting with 6f0de1112a3d787bd3902c8e7814f07ba36819754e114173df93280c7dc08bba not found: ID does not exist" containerID="6f0de1112a3d787bd3902c8e7814f07ba36819754e114173df93280c7dc08bba" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.480518 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f0de1112a3d787bd3902c8e7814f07ba36819754e114173df93280c7dc08bba"} err="failed to get container status \"6f0de1112a3d787bd3902c8e7814f07ba36819754e114173df93280c7dc08bba\": rpc error: code = NotFound desc = could not find container \"6f0de1112a3d787bd3902c8e7814f07ba36819754e114173df93280c7dc08bba\": container with ID starting with 6f0de1112a3d787bd3902c8e7814f07ba36819754e114173df93280c7dc08bba not found: ID does not exist" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.480543 4821 scope.go:117] "RemoveContainer" containerID="552b0af6ec9a6106367f73017b956b422bde5197c1691b478f811de46bb2a8b3" Mar 09 18:45:59 crc kubenswrapper[4821]: E0309 18:45:59.482458 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"552b0af6ec9a6106367f73017b956b422bde5197c1691b478f811de46bb2a8b3\": container with ID starting with 552b0af6ec9a6106367f73017b956b422bde5197c1691b478f811de46bb2a8b3 not found: ID does not exist" containerID="552b0af6ec9a6106367f73017b956b422bde5197c1691b478f811de46bb2a8b3" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.482501 
4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"552b0af6ec9a6106367f73017b956b422bde5197c1691b478f811de46bb2a8b3"} err="failed to get container status \"552b0af6ec9a6106367f73017b956b422bde5197c1691b478f811de46bb2a8b3\": rpc error: code = NotFound desc = could not find container \"552b0af6ec9a6106367f73017b956b422bde5197c1691b478f811de46bb2a8b3\": container with ID starting with 552b0af6ec9a6106367f73017b956b422bde5197c1691b478f811de46bb2a8b3 not found: ID does not exist" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.482515 4821 scope.go:117] "RemoveContainer" containerID="9b6df8fac2d19b0ac95718adb4df4a9516a2204484b7e27597fbb846413c454c" Mar 09 18:45:59 crc kubenswrapper[4821]: E0309 18:45:59.484005 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b6df8fac2d19b0ac95718adb4df4a9516a2204484b7e27597fbb846413c454c\": container with ID starting with 9b6df8fac2d19b0ac95718adb4df4a9516a2204484b7e27597fbb846413c454c not found: ID does not exist" containerID="9b6df8fac2d19b0ac95718adb4df4a9516a2204484b7e27597fbb846413c454c" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.484028 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b6df8fac2d19b0ac95718adb4df4a9516a2204484b7e27597fbb846413c454c"} err="failed to get container status \"9b6df8fac2d19b0ac95718adb4df4a9516a2204484b7e27597fbb846413c454c\": rpc error: code = NotFound desc = could not find container \"9b6df8fac2d19b0ac95718adb4df4a9516a2204484b7e27597fbb846413c454c\": container with ID starting with 9b6df8fac2d19b0ac95718adb4df4a9516a2204484b7e27597fbb846413c454c not found: ID does not exist" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.484042 4821 scope.go:117] "RemoveContainer" containerID="84c5d1969a5aee73e1b1c62812383f6c07d5e3a82e2a9c7cde62ff7ae027e31b" Mar 09 18:45:59 crc kubenswrapper[4821]: E0309 
18:45:59.489287 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84c5d1969a5aee73e1b1c62812383f6c07d5e3a82e2a9c7cde62ff7ae027e31b\": container with ID starting with 84c5d1969a5aee73e1b1c62812383f6c07d5e3a82e2a9c7cde62ff7ae027e31b not found: ID does not exist" containerID="84c5d1969a5aee73e1b1c62812383f6c07d5e3a82e2a9c7cde62ff7ae027e31b" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.489311 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84c5d1969a5aee73e1b1c62812383f6c07d5e3a82e2a9c7cde62ff7ae027e31b"} err="failed to get container status \"84c5d1969a5aee73e1b1c62812383f6c07d5e3a82e2a9c7cde62ff7ae027e31b\": rpc error: code = NotFound desc = could not find container \"84c5d1969a5aee73e1b1c62812383f6c07d5e3a82e2a9c7cde62ff7ae027e31b\": container with ID starting with 84c5d1969a5aee73e1b1c62812383f6c07d5e3a82e2a9c7cde62ff7ae027e31b not found: ID does not exist" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.489337 4821 scope.go:117] "RemoveContainer" containerID="6f0de1112a3d787bd3902c8e7814f07ba36819754e114173df93280c7dc08bba" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.489556 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f0de1112a3d787bd3902c8e7814f07ba36819754e114173df93280c7dc08bba"} err="failed to get container status \"6f0de1112a3d787bd3902c8e7814f07ba36819754e114173df93280c7dc08bba\": rpc error: code = NotFound desc = could not find container \"6f0de1112a3d787bd3902c8e7814f07ba36819754e114173df93280c7dc08bba\": container with ID starting with 6f0de1112a3d787bd3902c8e7814f07ba36819754e114173df93280c7dc08bba not found: ID does not exist" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.489575 4821 scope.go:117] "RemoveContainer" containerID="552b0af6ec9a6106367f73017b956b422bde5197c1691b478f811de46bb2a8b3" Mar 09 18:45:59 crc 
kubenswrapper[4821]: I0309 18:45:59.489746 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"552b0af6ec9a6106367f73017b956b422bde5197c1691b478f811de46bb2a8b3"} err="failed to get container status \"552b0af6ec9a6106367f73017b956b422bde5197c1691b478f811de46bb2a8b3\": rpc error: code = NotFound desc = could not find container \"552b0af6ec9a6106367f73017b956b422bde5197c1691b478f811de46bb2a8b3\": container with ID starting with 552b0af6ec9a6106367f73017b956b422bde5197c1691b478f811de46bb2a8b3 not found: ID does not exist" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.489763 4821 scope.go:117] "RemoveContainer" containerID="9b6df8fac2d19b0ac95718adb4df4a9516a2204484b7e27597fbb846413c454c" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.489965 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b6df8fac2d19b0ac95718adb4df4a9516a2204484b7e27597fbb846413c454c"} err="failed to get container status \"9b6df8fac2d19b0ac95718adb4df4a9516a2204484b7e27597fbb846413c454c\": rpc error: code = NotFound desc = could not find container \"9b6df8fac2d19b0ac95718adb4df4a9516a2204484b7e27597fbb846413c454c\": container with ID starting with 9b6df8fac2d19b0ac95718adb4df4a9516a2204484b7e27597fbb846413c454c not found: ID does not exist" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.489983 4821 scope.go:117] "RemoveContainer" containerID="84c5d1969a5aee73e1b1c62812383f6c07d5e3a82e2a9c7cde62ff7ae027e31b" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.490163 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84c5d1969a5aee73e1b1c62812383f6c07d5e3a82e2a9c7cde62ff7ae027e31b"} err="failed to get container status \"84c5d1969a5aee73e1b1c62812383f6c07d5e3a82e2a9c7cde62ff7ae027e31b\": rpc error: code = NotFound desc = could not find container \"84c5d1969a5aee73e1b1c62812383f6c07d5e3a82e2a9c7cde62ff7ae027e31b\": container 
with ID starting with 84c5d1969a5aee73e1b1c62812383f6c07d5e3a82e2a9c7cde62ff7ae027e31b not found: ID does not exist" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.490181 4821 scope.go:117] "RemoveContainer" containerID="6f0de1112a3d787bd3902c8e7814f07ba36819754e114173df93280c7dc08bba" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.490372 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f0de1112a3d787bd3902c8e7814f07ba36819754e114173df93280c7dc08bba"} err="failed to get container status \"6f0de1112a3d787bd3902c8e7814f07ba36819754e114173df93280c7dc08bba\": rpc error: code = NotFound desc = could not find container \"6f0de1112a3d787bd3902c8e7814f07ba36819754e114173df93280c7dc08bba\": container with ID starting with 6f0de1112a3d787bd3902c8e7814f07ba36819754e114173df93280c7dc08bba not found: ID does not exist" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.490390 4821 scope.go:117] "RemoveContainer" containerID="552b0af6ec9a6106367f73017b956b422bde5197c1691b478f811de46bb2a8b3" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.490562 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"552b0af6ec9a6106367f73017b956b422bde5197c1691b478f811de46bb2a8b3"} err="failed to get container status \"552b0af6ec9a6106367f73017b956b422bde5197c1691b478f811de46bb2a8b3\": rpc error: code = NotFound desc = could not find container \"552b0af6ec9a6106367f73017b956b422bde5197c1691b478f811de46bb2a8b3\": container with ID starting with 552b0af6ec9a6106367f73017b956b422bde5197c1691b478f811de46bb2a8b3 not found: ID does not exist" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.490579 4821 scope.go:117] "RemoveContainer" containerID="9b6df8fac2d19b0ac95718adb4df4a9516a2204484b7e27597fbb846413c454c" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.490746 4821 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9b6df8fac2d19b0ac95718adb4df4a9516a2204484b7e27597fbb846413c454c"} err="failed to get container status \"9b6df8fac2d19b0ac95718adb4df4a9516a2204484b7e27597fbb846413c454c\": rpc error: code = NotFound desc = could not find container \"9b6df8fac2d19b0ac95718adb4df4a9516a2204484b7e27597fbb846413c454c\": container with ID starting with 9b6df8fac2d19b0ac95718adb4df4a9516a2204484b7e27597fbb846413c454c not found: ID does not exist" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.490762 4821 scope.go:117] "RemoveContainer" containerID="84c5d1969a5aee73e1b1c62812383f6c07d5e3a82e2a9c7cde62ff7ae027e31b" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.490946 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84c5d1969a5aee73e1b1c62812383f6c07d5e3a82e2a9c7cde62ff7ae027e31b"} err="failed to get container status \"84c5d1969a5aee73e1b1c62812383f6c07d5e3a82e2a9c7cde62ff7ae027e31b\": rpc error: code = NotFound desc = could not find container \"84c5d1969a5aee73e1b1c62812383f6c07d5e3a82e2a9c7cde62ff7ae027e31b\": container with ID starting with 84c5d1969a5aee73e1b1c62812383f6c07d5e3a82e2a9c7cde62ff7ae027e31b not found: ID does not exist" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.490968 4821 scope.go:117] "RemoveContainer" containerID="6f0de1112a3d787bd3902c8e7814f07ba36819754e114173df93280c7dc08bba" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.491141 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f0de1112a3d787bd3902c8e7814f07ba36819754e114173df93280c7dc08bba"} err="failed to get container status \"6f0de1112a3d787bd3902c8e7814f07ba36819754e114173df93280c7dc08bba\": rpc error: code = NotFound desc = could not find container \"6f0de1112a3d787bd3902c8e7814f07ba36819754e114173df93280c7dc08bba\": container with ID starting with 6f0de1112a3d787bd3902c8e7814f07ba36819754e114173df93280c7dc08bba not found: ID does not 
exist" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.491156 4821 scope.go:117] "RemoveContainer" containerID="552b0af6ec9a6106367f73017b956b422bde5197c1691b478f811de46bb2a8b3" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.491329 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"552b0af6ec9a6106367f73017b956b422bde5197c1691b478f811de46bb2a8b3"} err="failed to get container status \"552b0af6ec9a6106367f73017b956b422bde5197c1691b478f811de46bb2a8b3\": rpc error: code = NotFound desc = could not find container \"552b0af6ec9a6106367f73017b956b422bde5197c1691b478f811de46bb2a8b3\": container with ID starting with 552b0af6ec9a6106367f73017b956b422bde5197c1691b478f811de46bb2a8b3 not found: ID does not exist" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.491345 4821 scope.go:117] "RemoveContainer" containerID="9b6df8fac2d19b0ac95718adb4df4a9516a2204484b7e27597fbb846413c454c" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.491562 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b6df8fac2d19b0ac95718adb4df4a9516a2204484b7e27597fbb846413c454c"} err="failed to get container status \"9b6df8fac2d19b0ac95718adb4df4a9516a2204484b7e27597fbb846413c454c\": rpc error: code = NotFound desc = could not find container \"9b6df8fac2d19b0ac95718adb4df4a9516a2204484b7e27597fbb846413c454c\": container with ID starting with 9b6df8fac2d19b0ac95718adb4df4a9516a2204484b7e27597fbb846413c454c not found: ID does not exist" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.491580 4821 scope.go:117] "RemoveContainer" containerID="84c5d1969a5aee73e1b1c62812383f6c07d5e3a82e2a9c7cde62ff7ae027e31b" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.491774 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84c5d1969a5aee73e1b1c62812383f6c07d5e3a82e2a9c7cde62ff7ae027e31b"} err="failed to get container status 
\"84c5d1969a5aee73e1b1c62812383f6c07d5e3a82e2a9c7cde62ff7ae027e31b\": rpc error: code = NotFound desc = could not find container \"84c5d1969a5aee73e1b1c62812383f6c07d5e3a82e2a9c7cde62ff7ae027e31b\": container with ID starting with 84c5d1969a5aee73e1b1c62812383f6c07d5e3a82e2a9c7cde62ff7ae027e31b not found: ID does not exist" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.544528 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.544561 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.706825 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.720144 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.732044 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.732473 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="e9439e51-042e-4604-9368-b6e229dd141e" containerName="watcher-applier" containerID="cri-o://4ea0ecbdf32747ad2c810a775f1247953a9f231e393641d12b4ee835df2641f3" gracePeriod=30 Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.750682 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 18:45:59 crc kubenswrapper[4821]: E0309 18:45:59.751009 4821 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8" containerName="ceilometer-central-agent" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.751025 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8" containerName="ceilometer-central-agent" Mar 09 18:45:59 crc kubenswrapper[4821]: E0309 18:45:59.751048 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8" containerName="sg-core" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.751055 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8" containerName="sg-core" Mar 09 18:45:59 crc kubenswrapper[4821]: E0309 18:45:59.751062 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8" containerName="ceilometer-notification-agent" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.751069 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8" containerName="ceilometer-notification-agent" Mar 09 18:45:59 crc kubenswrapper[4821]: E0309 18:45:59.751085 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8" containerName="proxy-httpd" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.751091 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8" containerName="proxy-httpd" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.751223 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8" containerName="ceilometer-notification-agent" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.751239 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8" containerName="ceilometer-central-agent" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.751246 4821 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8" containerName="proxy-httpd" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.751254 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8" containerName="sg-core" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.752717 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.754568 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.754950 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.755370 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.760240 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.760440 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="65318c1d-df52-4dcf-873f-a76c7edcdeae" containerName="watcher-decision-engine" containerID="cri-o://5ae464351ab8844e65911aec0384e8a54739c93960f711a7dcaea5df8601dad6" gracePeriod=30 Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.768586 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.768823 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
podUID="608af3f1-6a88-434c-add7-2fe7aa96974b" containerName="watcher-kuttl-api-log" containerID="cri-o://3dcce34d4ea2766ad4231aa982341a6a56f2e42c98bb79193c5a62e47255f373" gracePeriod=30 Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.768956 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="608af3f1-6a88-434c-add7-2fe7aa96974b" containerName="watcher-api" containerID="cri-o://be2c451f83829a6d3178de1ef4a9cdffb52fea1df2b49c942114471c150de633" gracePeriod=30 Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.774603 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.848861 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95399cf0-2abf-4b19-9106-7f1489de365d-config-data\") pod \"ceilometer-0\" (UID: \"95399cf0-2abf-4b19-9106-7f1489de365d\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.849097 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95399cf0-2abf-4b19-9106-7f1489de365d-run-httpd\") pod \"ceilometer-0\" (UID: \"95399cf0-2abf-4b19-9106-7f1489de365d\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.849147 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95399cf0-2abf-4b19-9106-7f1489de365d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"95399cf0-2abf-4b19-9106-7f1489de365d\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.849261 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95399cf0-2abf-4b19-9106-7f1489de365d-log-httpd\") pod \"ceilometer-0\" (UID: \"95399cf0-2abf-4b19-9106-7f1489de365d\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.849307 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95399cf0-2abf-4b19-9106-7f1489de365d-scripts\") pod \"ceilometer-0\" (UID: \"95399cf0-2abf-4b19-9106-7f1489de365d\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.849353 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wptd5\" (UniqueName: \"kubernetes.io/projected/95399cf0-2abf-4b19-9106-7f1489de365d-kube-api-access-wptd5\") pod \"ceilometer-0\" (UID: \"95399cf0-2abf-4b19-9106-7f1489de365d\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.849419 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95399cf0-2abf-4b19-9106-7f1489de365d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"95399cf0-2abf-4b19-9106-7f1489de365d\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.849460 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/95399cf0-2abf-4b19-9106-7f1489de365d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"95399cf0-2abf-4b19-9106-7f1489de365d\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.950305 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95399cf0-2abf-4b19-9106-7f1489de365d-log-httpd\") pod \"ceilometer-0\" (UID: \"95399cf0-2abf-4b19-9106-7f1489de365d\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.950581 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95399cf0-2abf-4b19-9106-7f1489de365d-scripts\") pod \"ceilometer-0\" (UID: \"95399cf0-2abf-4b19-9106-7f1489de365d\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.950613 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wptd5\" (UniqueName: \"kubernetes.io/projected/95399cf0-2abf-4b19-9106-7f1489de365d-kube-api-access-wptd5\") pod \"ceilometer-0\" (UID: \"95399cf0-2abf-4b19-9106-7f1489de365d\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.950634 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95399cf0-2abf-4b19-9106-7f1489de365d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"95399cf0-2abf-4b19-9106-7f1489de365d\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.950674 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/95399cf0-2abf-4b19-9106-7f1489de365d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"95399cf0-2abf-4b19-9106-7f1489de365d\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.950757 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95399cf0-2abf-4b19-9106-7f1489de365d-config-data\") pod \"ceilometer-0\" (UID: 
\"95399cf0-2abf-4b19-9106-7f1489de365d\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.950822 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95399cf0-2abf-4b19-9106-7f1489de365d-run-httpd\") pod \"ceilometer-0\" (UID: \"95399cf0-2abf-4b19-9106-7f1489de365d\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.950844 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95399cf0-2abf-4b19-9106-7f1489de365d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"95399cf0-2abf-4b19-9106-7f1489de365d\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.951064 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95399cf0-2abf-4b19-9106-7f1489de365d-log-httpd\") pod \"ceilometer-0\" (UID: \"95399cf0-2abf-4b19-9106-7f1489de365d\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.951364 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95399cf0-2abf-4b19-9106-7f1489de365d-run-httpd\") pod \"ceilometer-0\" (UID: \"95399cf0-2abf-4b19-9106-7f1489de365d\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.955528 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95399cf0-2abf-4b19-9106-7f1489de365d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"95399cf0-2abf-4b19-9106-7f1489de365d\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.955710 4821 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95399cf0-2abf-4b19-9106-7f1489de365d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"95399cf0-2abf-4b19-9106-7f1489de365d\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.956368 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95399cf0-2abf-4b19-9106-7f1489de365d-config-data\") pod \"ceilometer-0\" (UID: \"95399cf0-2abf-4b19-9106-7f1489de365d\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.959036 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95399cf0-2abf-4b19-9106-7f1489de365d-scripts\") pod \"ceilometer-0\" (UID: \"95399cf0-2abf-4b19-9106-7f1489de365d\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.959780 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/95399cf0-2abf-4b19-9106-7f1489de365d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"95399cf0-2abf-4b19-9106-7f1489de365d\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:45:59 crc kubenswrapper[4821]: I0309 18:45:59.971800 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wptd5\" (UniqueName: \"kubernetes.io/projected/95399cf0-2abf-4b19-9106-7f1489de365d-kube-api-access-wptd5\") pod \"ceilometer-0\" (UID: \"95399cf0-2abf-4b19-9106-7f1489de365d\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:46:00 crc kubenswrapper[4821]: I0309 18:46:00.082253 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 18:46:00 crc kubenswrapper[4821]: I0309 18:46:00.160880 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29551366-x9zpn"] Mar 09 18:46:00 crc kubenswrapper[4821]: I0309 18:46:00.162069 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551366-x9zpn" Mar 09 18:46:00 crc kubenswrapper[4821]: I0309 18:46:00.171533 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-mxq7c" Mar 09 18:46:00 crc kubenswrapper[4821]: I0309 18:46:00.171561 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 09 18:46:00 crc kubenswrapper[4821]: I0309 18:46:00.172618 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 09 18:46:00 crc kubenswrapper[4821]: I0309 18:46:00.182352 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551366-x9zpn"] Mar 09 18:46:00 crc kubenswrapper[4821]: I0309 18:46:00.257273 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljlnf\" (UniqueName: \"kubernetes.io/projected/3153d57a-d24a-493f-bd16-6b9761c2b41f-kube-api-access-ljlnf\") pod \"auto-csr-approver-29551366-x9zpn\" (UID: \"3153d57a-d24a-493f-bd16-6b9761c2b41f\") " pod="openshift-infra/auto-csr-approver-29551366-x9zpn" Mar 09 18:46:00 crc kubenswrapper[4821]: I0309 18:46:00.361430 4821 generic.go:334] "Generic (PLEG): container finished" podID="608af3f1-6a88-434c-add7-2fe7aa96974b" containerID="3dcce34d4ea2766ad4231aa982341a6a56f2e42c98bb79193c5a62e47255f373" exitCode=143 Mar 09 18:46:00 crc kubenswrapper[4821]: I0309 18:46:00.361479 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
event={"ID":"608af3f1-6a88-434c-add7-2fe7aa96974b","Type":"ContainerDied","Data":"3dcce34d4ea2766ad4231aa982341a6a56f2e42c98bb79193c5a62e47255f373"} Mar 09 18:46:00 crc kubenswrapper[4821]: I0309 18:46:00.362241 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljlnf\" (UniqueName: \"kubernetes.io/projected/3153d57a-d24a-493f-bd16-6b9761c2b41f-kube-api-access-ljlnf\") pod \"auto-csr-approver-29551366-x9zpn\" (UID: \"3153d57a-d24a-493f-bd16-6b9761c2b41f\") " pod="openshift-infra/auto-csr-approver-29551366-x9zpn" Mar 09 18:46:00 crc kubenswrapper[4821]: I0309 18:46:00.387110 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljlnf\" (UniqueName: \"kubernetes.io/projected/3153d57a-d24a-493f-bd16-6b9761c2b41f-kube-api-access-ljlnf\") pod \"auto-csr-approver-29551366-x9zpn\" (UID: \"3153d57a-d24a-493f-bd16-6b9761c2b41f\") " pod="openshift-infra/auto-csr-approver-29551366-x9zpn" Mar 09 18:46:00 crc kubenswrapper[4821]: I0309 18:46:00.488706 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29551366-x9zpn" Mar 09 18:46:00 crc kubenswrapper[4821]: I0309 18:46:00.694959 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 18:46:00 crc kubenswrapper[4821]: W0309 18:46:00.727674 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95399cf0_2abf_4b19_9106_7f1489de365d.slice/crio-b3e41ddb375165a342391c989671ac182249a294d8386871e58317d2b37c8260 WatchSource:0}: Error finding container b3e41ddb375165a342391c989671ac182249a294d8386871e58317d2b37c8260: Status 404 returned error can't find the container with id b3e41ddb375165a342391c989671ac182249a294d8386871e58317d2b37c8260 Mar 09 18:46:00 crc kubenswrapper[4821]: E0309 18:46:00.868847 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4ea0ecbdf32747ad2c810a775f1247953a9f231e393641d12b4ee835df2641f3" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 09 18:46:00 crc kubenswrapper[4821]: E0309 18:46:00.870363 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4ea0ecbdf32747ad2c810a775f1247953a9f231e393641d12b4ee835df2641f3" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 09 18:46:00 crc kubenswrapper[4821]: E0309 18:46:00.874511 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4ea0ecbdf32747ad2c810a775f1247953a9f231e393641d12b4ee835df2641f3" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 09 18:46:00 crc 
kubenswrapper[4821]: E0309 18:46:00.874580 4821 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="e9439e51-042e-4604-9368-b6e229dd141e" containerName="watcher-applier"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.000091 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551366-x9zpn"]
Mar 09 18:46:01 crc kubenswrapper[4821]: W0309 18:46:01.005045 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3153d57a_d24a_493f_bd16_6b9761c2b41f.slice/crio-b53718a5587eab8ff7969f6bff4877bd66c87c8139ce3ffe94e6efc715c44875 WatchSource:0}: Error finding container b53718a5587eab8ff7969f6bff4877bd66c87c8139ce3ffe94e6efc715c44875: Status 404 returned error can't find the container with id b53718a5587eab8ff7969f6bff4877bd66c87c8139ce3ffe94e6efc715c44875
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.065169 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.082732 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/608af3f1-6a88-434c-add7-2fe7aa96974b-logs\") pod \"608af3f1-6a88-434c-add7-2fe7aa96974b\" (UID: \"608af3f1-6a88-434c-add7-2fe7aa96974b\") "
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.082831 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/608af3f1-6a88-434c-add7-2fe7aa96974b-custom-prometheus-ca\") pod \"608af3f1-6a88-434c-add7-2fe7aa96974b\" (UID: \"608af3f1-6a88-434c-add7-2fe7aa96974b\") "
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.082885 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/608af3f1-6a88-434c-add7-2fe7aa96974b-config-data\") pod \"608af3f1-6a88-434c-add7-2fe7aa96974b\" (UID: \"608af3f1-6a88-434c-add7-2fe7aa96974b\") "
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.082974 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js8jw\" (UniqueName: \"kubernetes.io/projected/608af3f1-6a88-434c-add7-2fe7aa96974b-kube-api-access-js8jw\") pod \"608af3f1-6a88-434c-add7-2fe7aa96974b\" (UID: \"608af3f1-6a88-434c-add7-2fe7aa96974b\") "
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.083004 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/608af3f1-6a88-434c-add7-2fe7aa96974b-combined-ca-bundle\") pod \"608af3f1-6a88-434c-add7-2fe7aa96974b\" (UID: \"608af3f1-6a88-434c-add7-2fe7aa96974b\") "
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.083433 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/608af3f1-6a88-434c-add7-2fe7aa96974b-logs" (OuterVolumeSpecName: "logs") pod "608af3f1-6a88-434c-add7-2fe7aa96974b" (UID: "608af3f1-6a88-434c-add7-2fe7aa96974b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.088790 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/608af3f1-6a88-434c-add7-2fe7aa96974b-kube-api-access-js8jw" (OuterVolumeSpecName: "kube-api-access-js8jw") pod "608af3f1-6a88-434c-add7-2fe7aa96974b" (UID: "608af3f1-6a88-434c-add7-2fe7aa96974b"). InnerVolumeSpecName "kube-api-access-js8jw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.138028 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/608af3f1-6a88-434c-add7-2fe7aa96974b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "608af3f1-6a88-434c-add7-2fe7aa96974b" (UID: "608af3f1-6a88-434c-add7-2fe7aa96974b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.138408 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/608af3f1-6a88-434c-add7-2fe7aa96974b-config-data" (OuterVolumeSpecName: "config-data") pod "608af3f1-6a88-434c-add7-2fe7aa96974b" (UID: "608af3f1-6a88-434c-add7-2fe7aa96974b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.156462 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/608af3f1-6a88-434c-add7-2fe7aa96974b-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "608af3f1-6a88-434c-add7-2fe7aa96974b" (UID: "608af3f1-6a88-434c-add7-2fe7aa96974b"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.184947 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js8jw\" (UniqueName: \"kubernetes.io/projected/608af3f1-6a88-434c-add7-2fe7aa96974b-kube-api-access-js8jw\") on node \"crc\" DevicePath \"\""
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.184990 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/608af3f1-6a88-434c-add7-2fe7aa96974b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.185008 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/608af3f1-6a88-434c-add7-2fe7aa96974b-logs\") on node \"crc\" DevicePath \"\""
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.185020 4821 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/608af3f1-6a88-434c-add7-2fe7aa96974b-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.185033 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/608af3f1-6a88-434c-add7-2fe7aa96974b-config-data\") on node \"crc\" DevicePath \"\""
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.370718 4821 generic.go:334] "Generic (PLEG): container finished" podID="608af3f1-6a88-434c-add7-2fe7aa96974b" containerID="be2c451f83829a6d3178de1ef4a9cdffb52fea1df2b49c942114471c150de633" exitCode=0
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.370793 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"608af3f1-6a88-434c-add7-2fe7aa96974b","Type":"ContainerDied","Data":"be2c451f83829a6d3178de1ef4a9cdffb52fea1df2b49c942114471c150de633"}
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.370829 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"608af3f1-6a88-434c-add7-2fe7aa96974b","Type":"ContainerDied","Data":"dfea752d18658e755e02cc9f3af4758be64b1e061161b4be03634ae31fd1ea4f"}
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.370848 4821 scope.go:117] "RemoveContainer" containerID="be2c451f83829a6d3178de1ef4a9cdffb52fea1df2b49c942114471c150de633"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.370971 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.374878 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"95399cf0-2abf-4b19-9106-7f1489de365d","Type":"ContainerStarted","Data":"b3e41ddb375165a342391c989671ac182249a294d8386871e58317d2b37c8260"}
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.376414 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551366-x9zpn" event={"ID":"3153d57a-d24a-493f-bd16-6b9761c2b41f","Type":"ContainerStarted","Data":"b53718a5587eab8ff7969f6bff4877bd66c87c8139ce3ffe94e6efc715c44875"}
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.401860 4821 scope.go:117] "RemoveContainer" containerID="3dcce34d4ea2766ad4231aa982341a6a56f2e42c98bb79193c5a62e47255f373"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.410527 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.417895 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.447419 4821 scope.go:117] "RemoveContainer" containerID="be2c451f83829a6d3178de1ef4a9cdffb52fea1df2b49c942114471c150de633"
Mar 09 18:46:01 crc kubenswrapper[4821]: E0309 18:46:01.448051 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be2c451f83829a6d3178de1ef4a9cdffb52fea1df2b49c942114471c150de633\": container with ID starting with be2c451f83829a6d3178de1ef4a9cdffb52fea1df2b49c942114471c150de633 not found: ID does not exist" containerID="be2c451f83829a6d3178de1ef4a9cdffb52fea1df2b49c942114471c150de633"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.448083 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be2c451f83829a6d3178de1ef4a9cdffb52fea1df2b49c942114471c150de633"} err="failed to get container status \"be2c451f83829a6d3178de1ef4a9cdffb52fea1df2b49c942114471c150de633\": rpc error: code = NotFound desc = could not find container \"be2c451f83829a6d3178de1ef4a9cdffb52fea1df2b49c942114471c150de633\": container with ID starting with be2c451f83829a6d3178de1ef4a9cdffb52fea1df2b49c942114471c150de633 not found: ID does not exist"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.448125 4821 scope.go:117] "RemoveContainer" containerID="3dcce34d4ea2766ad4231aa982341a6a56f2e42c98bb79193c5a62e47255f373"
Mar 09 18:46:01 crc kubenswrapper[4821]: E0309 18:46:01.448377 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dcce34d4ea2766ad4231aa982341a6a56f2e42c98bb79193c5a62e47255f373\": container with ID starting with 3dcce34d4ea2766ad4231aa982341a6a56f2e42c98bb79193c5a62e47255f373 not found: ID does not exist" containerID="3dcce34d4ea2766ad4231aa982341a6a56f2e42c98bb79193c5a62e47255f373"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.448402 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dcce34d4ea2766ad4231aa982341a6a56f2e42c98bb79193c5a62e47255f373"} err="failed to get container status \"3dcce34d4ea2766ad4231aa982341a6a56f2e42c98bb79193c5a62e47255f373\": rpc error: code = NotFound desc = could not find container \"3dcce34d4ea2766ad4231aa982341a6a56f2e42c98bb79193c5a62e47255f373\": container with ID starting with 3dcce34d4ea2766ad4231aa982341a6a56f2e42c98bb79193c5a62e47255f373 not found: ID does not exist"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.453960 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 18:46:01 crc kubenswrapper[4821]: E0309 18:46:01.454280 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="608af3f1-6a88-434c-add7-2fe7aa96974b" containerName="watcher-api"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.454296 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="608af3f1-6a88-434c-add7-2fe7aa96974b" containerName="watcher-api"
Mar 09 18:46:01 crc kubenswrapper[4821]: E0309 18:46:01.454330 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="608af3f1-6a88-434c-add7-2fe7aa96974b" containerName="watcher-kuttl-api-log"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.454339 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="608af3f1-6a88-434c-add7-2fe7aa96974b" containerName="watcher-kuttl-api-log"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.454498 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="608af3f1-6a88-434c-add7-2fe7aa96974b" containerName="watcher-kuttl-api-log"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.454514 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="608af3f1-6a88-434c-add7-2fe7aa96974b" containerName="watcher-api"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.455674 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.458991 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.463780 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.489885 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.489929 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzjth\" (UniqueName: \"kubernetes.io/projected/f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82-kube-api-access-tzjth\") pod \"watcher-kuttl-api-0\" (UID: \"f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.490032 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.490053 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.490077 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.562286 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="608af3f1-6a88-434c-add7-2fe7aa96974b" path="/var/lib/kubelet/pods/608af3f1-6a88-434c-add7-2fe7aa96974b/volumes"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.563088 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8" path="/var/lib/kubelet/pods/d0e2f834-85c6-4c7f-bbfd-e9da005d7bd8/volumes"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.591144 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.591207 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.591234 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.591274 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.591295 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzjth\" (UniqueName: \"kubernetes.io/projected/f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82-kube-api-access-tzjth\") pod \"watcher-kuttl-api-0\" (UID: \"f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.591707 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.595920 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.596782 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.596962 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.609356 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzjth\" (UniqueName: \"kubernetes.io/projected/f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82-kube-api-access-tzjth\") pod \"watcher-kuttl-api-0\" (UID: \"f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 18:46:01 crc kubenswrapper[4821]: I0309 18:46:01.787209 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 18:46:02 crc kubenswrapper[4821]: I0309 18:46:02.258349 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 18:46:02 crc kubenswrapper[4821]: I0309 18:46:02.389292 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"95399cf0-2abf-4b19-9106-7f1489de365d","Type":"ContainerStarted","Data":"ec6d25b74630de1a6a2dc5a2df4d4e222c04110cc1f5ac20bd5a1ec7e2b9f83a"}
Mar 09 18:46:02 crc kubenswrapper[4821]: I0309 18:46:02.389353 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"95399cf0-2abf-4b19-9106-7f1489de365d","Type":"ContainerStarted","Data":"5856ee8c84318b7822d4a408fdfba7e86301a35f27aa57c7f33722e9c82e2e34"}
Mar 09 18:46:02 crc kubenswrapper[4821]: I0309 18:46:02.393194 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551366-x9zpn" event={"ID":"3153d57a-d24a-493f-bd16-6b9761c2b41f","Type":"ContainerStarted","Data":"954f111b9ce725bd85b3e557135c2d410c1245cc35abe8a484e6a366abcebd65"}
Mar 09 18:46:02 crc kubenswrapper[4821]: I0309 18:46:02.394474 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82","Type":"ContainerStarted","Data":"2c4129846ee7c55b87663b1147a627a3789997404188a65b33b738ccce1104ff"}
Mar 09 18:46:02 crc kubenswrapper[4821]: I0309 18:46:02.419769 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29551366-x9zpn" podStartSLOduration=1.455502135 podStartE2EDuration="2.419746358s" podCreationTimestamp="2026-03-09 18:46:00 +0000 UTC" firstStartedPulling="2026-03-09 18:46:01.00777516 +0000 UTC m=+1298.169151016" lastFinishedPulling="2026-03-09 18:46:01.972019383 +0000 UTC m=+1299.133395239" observedRunningTime="2026-03-09 18:46:02.409743466 +0000 UTC m=+1299.571119322" watchObservedRunningTime="2026-03-09 18:46:02.419746358 +0000 UTC m=+1299.581122234"
Mar 09 18:46:03 crc kubenswrapper[4821]: I0309 18:46:03.420400 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"95399cf0-2abf-4b19-9106-7f1489de365d","Type":"ContainerStarted","Data":"66f408ca3542f07a9b783ef8157e77604b3ab128b0d8f427567d1de8560f7821"}
Mar 09 18:46:03 crc kubenswrapper[4821]: I0309 18:46:03.421973 4821 generic.go:334] "Generic (PLEG): container finished" podID="3153d57a-d24a-493f-bd16-6b9761c2b41f" containerID="954f111b9ce725bd85b3e557135c2d410c1245cc35abe8a484e6a366abcebd65" exitCode=0
Mar 09 18:46:03 crc kubenswrapper[4821]: I0309 18:46:03.422361 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551366-x9zpn" event={"ID":"3153d57a-d24a-493f-bd16-6b9761c2b41f","Type":"ContainerDied","Data":"954f111b9ce725bd85b3e557135c2d410c1245cc35abe8a484e6a366abcebd65"}
Mar 09 18:46:03 crc kubenswrapper[4821]: I0309 18:46:03.424382 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82","Type":"ContainerStarted","Data":"1b9302b570efd6dc5095f283b7eba86587f502a40e0bb2b73a878e30cec22beb"}
Mar 09 18:46:03 crc kubenswrapper[4821]: I0309 18:46:03.424427 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82","Type":"ContainerStarted","Data":"80885cb982c8254a9fc57e3a190bc0c1a82dd6b598b9c86d12c04e894e74615b"}
Mar 09 18:46:03 crc kubenswrapper[4821]: I0309 18:46:03.425129 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 18:46:03 crc kubenswrapper[4821]: I0309 18:46:03.467538 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.467516044 podStartE2EDuration="2.467516044s" podCreationTimestamp="2026-03-09 18:46:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:46:03.458545429 +0000 UTC m=+1300.619921285" watchObservedRunningTime="2026-03-09 18:46:03.467516044 +0000 UTC m=+1300.628891900"
Mar 09 18:46:04 crc kubenswrapper[4821]: I0309 18:46:04.435703 4821 generic.go:334] "Generic (PLEG): container finished" podID="e9439e51-042e-4604-9368-b6e229dd141e" containerID="4ea0ecbdf32747ad2c810a775f1247953a9f231e393641d12b4ee835df2641f3" exitCode=0
Mar 09 18:46:04 crc kubenswrapper[4821]: I0309 18:46:04.435810 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"e9439e51-042e-4604-9368-b6e229dd141e","Type":"ContainerDied","Data":"4ea0ecbdf32747ad2c810a775f1247953a9f231e393641d12b4ee835df2641f3"}
Mar 09 18:46:04 crc kubenswrapper[4821]: I0309 18:46:04.703794 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 18:46:04 crc kubenswrapper[4821]: I0309 18:46:04.744354 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wfv4\" (UniqueName: \"kubernetes.io/projected/e9439e51-042e-4604-9368-b6e229dd141e-kube-api-access-9wfv4\") pod \"e9439e51-042e-4604-9368-b6e229dd141e\" (UID: \"e9439e51-042e-4604-9368-b6e229dd141e\") "
Mar 09 18:46:04 crc kubenswrapper[4821]: I0309 18:46:04.744550 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9439e51-042e-4604-9368-b6e229dd141e-logs\") pod \"e9439e51-042e-4604-9368-b6e229dd141e\" (UID: \"e9439e51-042e-4604-9368-b6e229dd141e\") "
Mar 09 18:46:04 crc kubenswrapper[4821]: I0309 18:46:04.744682 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9439e51-042e-4604-9368-b6e229dd141e-config-data\") pod \"e9439e51-042e-4604-9368-b6e229dd141e\" (UID: \"e9439e51-042e-4604-9368-b6e229dd141e\") "
Mar 09 18:46:04 crc kubenswrapper[4821]: I0309 18:46:04.744734 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9439e51-042e-4604-9368-b6e229dd141e-combined-ca-bundle\") pod \"e9439e51-042e-4604-9368-b6e229dd141e\" (UID: \"e9439e51-042e-4604-9368-b6e229dd141e\") "
Mar 09 18:46:04 crc kubenswrapper[4821]: I0309 18:46:04.745734 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9439e51-042e-4604-9368-b6e229dd141e-logs" (OuterVolumeSpecName: "logs") pod "e9439e51-042e-4604-9368-b6e229dd141e" (UID: "e9439e51-042e-4604-9368-b6e229dd141e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 18:46:04 crc kubenswrapper[4821]: I0309 18:46:04.759581 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9439e51-042e-4604-9368-b6e229dd141e-kube-api-access-9wfv4" (OuterVolumeSpecName: "kube-api-access-9wfv4") pod "e9439e51-042e-4604-9368-b6e229dd141e" (UID: "e9439e51-042e-4604-9368-b6e229dd141e"). InnerVolumeSpecName "kube-api-access-9wfv4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:46:04 crc kubenswrapper[4821]: I0309 18:46:04.775525 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9439e51-042e-4604-9368-b6e229dd141e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9439e51-042e-4604-9368-b6e229dd141e" (UID: "e9439e51-042e-4604-9368-b6e229dd141e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 18:46:04 crc kubenswrapper[4821]: I0309 18:46:04.822717 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9439e51-042e-4604-9368-b6e229dd141e-config-data" (OuterVolumeSpecName: "config-data") pod "e9439e51-042e-4604-9368-b6e229dd141e" (UID: "e9439e51-042e-4604-9368-b6e229dd141e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 18:46:04 crc kubenswrapper[4821]: I0309 18:46:04.846897 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wfv4\" (UniqueName: \"kubernetes.io/projected/e9439e51-042e-4604-9368-b6e229dd141e-kube-api-access-9wfv4\") on node \"crc\" DevicePath \"\""
Mar 09 18:46:04 crc kubenswrapper[4821]: I0309 18:46:04.846937 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9439e51-042e-4604-9368-b6e229dd141e-logs\") on node \"crc\" DevicePath \"\""
Mar 09 18:46:04 crc kubenswrapper[4821]: I0309 18:46:04.846951 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9439e51-042e-4604-9368-b6e229dd141e-config-data\") on node \"crc\" DevicePath \"\""
Mar 09 18:46:04 crc kubenswrapper[4821]: I0309 18:46:04.846960 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9439e51-042e-4604-9368-b6e229dd141e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 18:46:04 crc kubenswrapper[4821]: I0309 18:46:04.878694 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551366-x9zpn"
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.053962 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljlnf\" (UniqueName: \"kubernetes.io/projected/3153d57a-d24a-493f-bd16-6b9761c2b41f-kube-api-access-ljlnf\") pod \"3153d57a-d24a-493f-bd16-6b9761c2b41f\" (UID: \"3153d57a-d24a-493f-bd16-6b9761c2b41f\") "
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.062741 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3153d57a-d24a-493f-bd16-6b9761c2b41f-kube-api-access-ljlnf" (OuterVolumeSpecName: "kube-api-access-ljlnf") pod "3153d57a-d24a-493f-bd16-6b9761c2b41f" (UID: "3153d57a-d24a-493f-bd16-6b9761c2b41f"). InnerVolumeSpecName "kube-api-access-ljlnf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.159846 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljlnf\" (UniqueName: \"kubernetes.io/projected/3153d57a-d24a-493f-bd16-6b9761c2b41f-kube-api-access-ljlnf\") on node \"crc\" DevicePath \"\""
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.277684 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.444796 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.444784 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"e9439e51-042e-4604-9368-b6e229dd141e","Type":"ContainerDied","Data":"6ae38b7d13a2cf1b0ab36722bad81c19701f3991fc815cd5639feaba8d8dcaa5"}
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.444963 4821 scope.go:117] "RemoveContainer" containerID="4ea0ecbdf32747ad2c810a775f1247953a9f231e393641d12b4ee835df2641f3"
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.453727 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"95399cf0-2abf-4b19-9106-7f1489de365d","Type":"ContainerStarted","Data":"e809de64d6164d1576f4701075c9609befd0949802c46c0dada9621a77b07c57"}
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.453865 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.459875 4821 generic.go:334] "Generic (PLEG): container finished" podID="65318c1d-df52-4dcf-873f-a76c7edcdeae" containerID="5ae464351ab8844e65911aec0384e8a54739c93960f711a7dcaea5df8601dad6" exitCode=0
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.459941 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"65318c1d-df52-4dcf-873f-a76c7edcdeae","Type":"ContainerDied","Data":"5ae464351ab8844e65911aec0384e8a54739c93960f711a7dcaea5df8601dad6"}
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.459967 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"65318c1d-df52-4dcf-873f-a76c7edcdeae","Type":"ContainerDied","Data":"0b9ec0f60bacb35eb3b8e0e9a96d8425a27f5f5d4b945b0cafc63ae1d15afe1b"}
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.460018 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.461851 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551366-x9zpn" event={"ID":"3153d57a-d24a-493f-bd16-6b9761c2b41f","Type":"ContainerDied","Data":"b53718a5587eab8ff7969f6bff4877bd66c87c8139ce3ffe94e6efc715c44875"}
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.461876 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b53718a5587eab8ff7969f6bff4877bd66c87c8139ce3ffe94e6efc715c44875"
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.461879 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551366-x9zpn"
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.470052 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/65318c1d-df52-4dcf-873f-a76c7edcdeae-custom-prometheus-ca\") pod \"65318c1d-df52-4dcf-873f-a76c7edcdeae\" (UID: \"65318c1d-df52-4dcf-873f-a76c7edcdeae\") "
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.470417 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65318c1d-df52-4dcf-873f-a76c7edcdeae-combined-ca-bundle\") pod \"65318c1d-df52-4dcf-873f-a76c7edcdeae\" (UID: \"65318c1d-df52-4dcf-873f-a76c7edcdeae\") "
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.470481 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65318c1d-df52-4dcf-873f-a76c7edcdeae-config-data\") pod \"65318c1d-df52-4dcf-873f-a76c7edcdeae\" (UID: \"65318c1d-df52-4dcf-873f-a76c7edcdeae\") "
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.470513 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65318c1d-df52-4dcf-873f-a76c7edcdeae-logs\") pod \"65318c1d-df52-4dcf-873f-a76c7edcdeae\" (UID: \"65318c1d-df52-4dcf-873f-a76c7edcdeae\") "
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.470612 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qbx7\" (UniqueName: \"kubernetes.io/projected/65318c1d-df52-4dcf-873f-a76c7edcdeae-kube-api-access-8qbx7\") pod \"65318c1d-df52-4dcf-873f-a76c7edcdeae\" (UID: \"65318c1d-df52-4dcf-873f-a76c7edcdeae\") "
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.471418 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65318c1d-df52-4dcf-873f-a76c7edcdeae-logs" (OuterVolumeSpecName: "logs") pod "65318c1d-df52-4dcf-873f-a76c7edcdeae" (UID: "65318c1d-df52-4dcf-873f-a76c7edcdeae"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.479527 4821 scope.go:117] "RemoveContainer" containerID="5ae464351ab8844e65911aec0384e8a54739c93960f711a7dcaea5df8601dad6"
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.485065 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.3976159 podStartE2EDuration="6.485046602s" podCreationTimestamp="2026-03-09 18:45:59 +0000 UTC" firstStartedPulling="2026-03-09 18:46:00.733527177 +0000 UTC m=+1297.894903033" lastFinishedPulling="2026-03-09 18:46:04.820957879 +0000 UTC m=+1301.982333735" observedRunningTime="2026-03-09 18:46:05.478759891 +0000 UTC m=+1302.640135747" watchObservedRunningTime="2026-03-09 18:46:05.485046602 +0000 UTC m=+1302.646422458"
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.487009 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65318c1d-df52-4dcf-873f-a76c7edcdeae-kube-api-access-8qbx7" (OuterVolumeSpecName: "kube-api-access-8qbx7") pod "65318c1d-df52-4dcf-873f-a76c7edcdeae" (UID: "65318c1d-df52-4dcf-873f-a76c7edcdeae"). InnerVolumeSpecName "kube-api-access-8qbx7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.513523 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65318c1d-df52-4dcf-873f-a76c7edcdeae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65318c1d-df52-4dcf-873f-a76c7edcdeae" (UID: "65318c1d-df52-4dcf-873f-a76c7edcdeae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.517646 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65318c1d-df52-4dcf-873f-a76c7edcdeae-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "65318c1d-df52-4dcf-873f-a76c7edcdeae" (UID: "65318c1d-df52-4dcf-873f-a76c7edcdeae"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.534067 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29551360-xr5rh"]
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.546220 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29551360-xr5rh"]
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.567250 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85c2b643-431a-412d-9386-384fa8ccd6e9" path="/var/lib/kubelet/pods/85c2b643-431a-412d-9386-384fa8ccd6e9/volumes"
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.572464 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qbx7\" (UniqueName: \"kubernetes.io/projected/65318c1d-df52-4dcf-873f-a76c7edcdeae-kube-api-access-8qbx7\") on node \"crc\" DevicePath \"\""
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.572489 4821 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/65318c1d-df52-4dcf-873f-a76c7edcdeae-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.572498 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65318c1d-df52-4dcf-873f-a76c7edcdeae-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.572507
4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65318c1d-df52-4dcf-873f-a76c7edcdeae-logs\") on node \"crc\" DevicePath \"\"" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.575004 4821 scope.go:117] "RemoveContainer" containerID="5ae464351ab8844e65911aec0384e8a54739c93960f711a7dcaea5df8601dad6" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.577639 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 18:46:05 crc kubenswrapper[4821]: E0309 18:46:05.581643 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ae464351ab8844e65911aec0384e8a54739c93960f711a7dcaea5df8601dad6\": container with ID starting with 5ae464351ab8844e65911aec0384e8a54739c93960f711a7dcaea5df8601dad6 not found: ID does not exist" containerID="5ae464351ab8844e65911aec0384e8a54739c93960f711a7dcaea5df8601dad6" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.581702 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ae464351ab8844e65911aec0384e8a54739c93960f711a7dcaea5df8601dad6"} err="failed to get container status \"5ae464351ab8844e65911aec0384e8a54739c93960f711a7dcaea5df8601dad6\": rpc error: code = NotFound desc = could not find container \"5ae464351ab8844e65911aec0384e8a54739c93960f711a7dcaea5df8601dad6\": container with ID starting with 5ae464351ab8844e65911aec0384e8a54739c93960f711a7dcaea5df8601dad6 not found: ID does not exist" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.588115 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65318c1d-df52-4dcf-873f-a76c7edcdeae-config-data" (OuterVolumeSpecName: "config-data") pod "65318c1d-df52-4dcf-873f-a76c7edcdeae" (UID: "65318c1d-df52-4dcf-873f-a76c7edcdeae"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.591524 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.596365 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 18:46:05 crc kubenswrapper[4821]: E0309 18:46:05.596740 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3153d57a-d24a-493f-bd16-6b9761c2b41f" containerName="oc" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.596753 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="3153d57a-d24a-493f-bd16-6b9761c2b41f" containerName="oc" Mar 09 18:46:05 crc kubenswrapper[4821]: E0309 18:46:05.596776 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65318c1d-df52-4dcf-873f-a76c7edcdeae" containerName="watcher-decision-engine" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.596782 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="65318c1d-df52-4dcf-873f-a76c7edcdeae" containerName="watcher-decision-engine" Mar 09 18:46:05 crc kubenswrapper[4821]: E0309 18:46:05.596795 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9439e51-042e-4604-9368-b6e229dd141e" containerName="watcher-applier" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.596801 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9439e51-042e-4604-9368-b6e229dd141e" containerName="watcher-applier" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.596939 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9439e51-042e-4604-9368-b6e229dd141e" containerName="watcher-applier" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.596951 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="3153d57a-d24a-493f-bd16-6b9761c2b41f" containerName="oc" Mar 09 18:46:05 crc 
kubenswrapper[4821]: I0309 18:46:05.596961 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="65318c1d-df52-4dcf-873f-a76c7edcdeae" containerName="watcher-decision-engine" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.605718 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.608617 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.637698 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.673780 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65318c1d-df52-4dcf-873f-a76c7edcdeae-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.774887 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nmkc\" (UniqueName: \"kubernetes.io/projected/50cca3dd-5fcd-4577-9442-2952486769ba-kube-api-access-7nmkc\") pod \"watcher-kuttl-applier-0\" (UID: \"50cca3dd-5fcd-4577-9442-2952486769ba\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.774936 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50cca3dd-5fcd-4577-9442-2952486769ba-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"50cca3dd-5fcd-4577-9442-2952486769ba\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.774969 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50cca3dd-5fcd-4577-9442-2952486769ba-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"50cca3dd-5fcd-4577-9442-2952486769ba\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.775028 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50cca3dd-5fcd-4577-9442-2952486769ba-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"50cca3dd-5fcd-4577-9442-2952486769ba\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.792968 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.801124 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.824097 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.825162 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.827692 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.836006 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.876679 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nmkc\" (UniqueName: \"kubernetes.io/projected/50cca3dd-5fcd-4577-9442-2952486769ba-kube-api-access-7nmkc\") pod \"watcher-kuttl-applier-0\" (UID: \"50cca3dd-5fcd-4577-9442-2952486769ba\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.876727 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50cca3dd-5fcd-4577-9442-2952486769ba-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"50cca3dd-5fcd-4577-9442-2952486769ba\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.876757 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50cca3dd-5fcd-4577-9442-2952486769ba-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"50cca3dd-5fcd-4577-9442-2952486769ba\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.876792 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50cca3dd-5fcd-4577-9442-2952486769ba-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"50cca3dd-5fcd-4577-9442-2952486769ba\") " 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.877573 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50cca3dd-5fcd-4577-9442-2952486769ba-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"50cca3dd-5fcd-4577-9442-2952486769ba\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.880855 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50cca3dd-5fcd-4577-9442-2952486769ba-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"50cca3dd-5fcd-4577-9442-2952486769ba\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.881154 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50cca3dd-5fcd-4577-9442-2952486769ba-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"50cca3dd-5fcd-4577-9442-2952486769ba\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.890778 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nmkc\" (UniqueName: \"kubernetes.io/projected/50cca3dd-5fcd-4577-9442-2952486769ba-kube-api-access-7nmkc\") pod \"watcher-kuttl-applier-0\" (UID: \"50cca3dd-5fcd-4577-9442-2952486769ba\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.900503 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="608af3f1-6a88-434c-add7-2fe7aa96974b" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.138:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 09 18:46:05 crc 
kubenswrapper[4821]: I0309 18:46:05.900630 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="608af3f1-6a88-434c-add7-2fe7aa96974b" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.138:9322/\": dial tcp 10.217.0.138:9322: i/o timeout (Client.Timeout exceeded while awaiting headers)" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.939927 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.978395 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.978471 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.978525 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.978549 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:46:05 crc kubenswrapper[4821]: I0309 18:46:05.978572 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84qdd\" (UniqueName: \"kubernetes.io/projected/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e-kube-api-access-84qdd\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:46:06 crc kubenswrapper[4821]: I0309 18:46:06.080649 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:46:06 crc kubenswrapper[4821]: I0309 18:46:06.081023 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:46:06 crc kubenswrapper[4821]: I0309 18:46:06.081060 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84qdd\" (UniqueName: \"kubernetes.io/projected/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e-kube-api-access-84qdd\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:46:06 crc kubenswrapper[4821]: I0309 18:46:06.081161 4821 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:46:06 crc kubenswrapper[4821]: I0309 18:46:06.081210 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:46:06 crc kubenswrapper[4821]: I0309 18:46:06.083994 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:46:06 crc kubenswrapper[4821]: I0309 18:46:06.084642 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:46:06 crc kubenswrapper[4821]: I0309 18:46:06.084743 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:46:06 crc kubenswrapper[4821]: I0309 18:46:06.085063 4821 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:46:06 crc kubenswrapper[4821]: I0309 18:46:06.102559 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84qdd\" (UniqueName: \"kubernetes.io/projected/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e-kube-api-access-84qdd\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:46:06 crc kubenswrapper[4821]: I0309 18:46:06.136961 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 18:46:06 crc kubenswrapper[4821]: I0309 18:46:06.401976 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 18:46:06 crc kubenswrapper[4821]: W0309 18:46:06.405463 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50cca3dd_5fcd_4577_9442_2952486769ba.slice/crio-189094ffbe9c1e67e76e59f46c5db9497e1713c31863ac48666c043dbaeecb47 WatchSource:0}: Error finding container 189094ffbe9c1e67e76e59f46c5db9497e1713c31863ac48666c043dbaeecb47: Status 404 returned error can't find the container with id 189094ffbe9c1e67e76e59f46c5db9497e1713c31863ac48666c043dbaeecb47 Mar 09 18:46:06 crc kubenswrapper[4821]: I0309 18:46:06.471368 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"50cca3dd-5fcd-4577-9442-2952486769ba","Type":"ContainerStarted","Data":"189094ffbe9c1e67e76e59f46c5db9497e1713c31863ac48666c043dbaeecb47"} Mar 09 18:46:06 
crc kubenswrapper[4821]: I0309 18:46:06.523217 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 18:46:06 crc kubenswrapper[4821]: W0309 18:46:06.568393 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18c92d7c_9dd7_43eb_98c2_15f30ac6bc7e.slice/crio-0811823c5b643d4b423178192b26d6fd94b356e500611edf24485e58949c1e96 WatchSource:0}: Error finding container 0811823c5b643d4b423178192b26d6fd94b356e500611edf24485e58949c1e96: Status 404 returned error can't find the container with id 0811823c5b643d4b423178192b26d6fd94b356e500611edf24485e58949c1e96 Mar 09 18:46:06 crc kubenswrapper[4821]: I0309 18:46:06.572278 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 18:46:06 crc kubenswrapper[4821]: I0309 18:46:06.787378 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 18:46:07 crc kubenswrapper[4821]: I0309 18:46:07.484163 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"50cca3dd-5fcd-4577-9442-2952486769ba","Type":"ContainerStarted","Data":"8faf8bdf823e63208b75bbe788979393e29cef8862727db68ef800f0e73fcdca"} Mar 09 18:46:07 crc kubenswrapper[4821]: I0309 18:46:07.486750 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e","Type":"ContainerStarted","Data":"a64034c9d0cf665f5241dcdbeb42195db11d0c511f475c0a6ef9cd114447dd3c"} Mar 09 18:46:07 crc kubenswrapper[4821]: I0309 18:46:07.486774 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
event={"ID":"18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e","Type":"ContainerStarted","Data":"0811823c5b643d4b423178192b26d6fd94b356e500611edf24485e58949c1e96"} Mar 09 18:46:07 crc kubenswrapper[4821]: I0309 18:46:07.505411 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.505303985 podStartE2EDuration="2.505303985s" podCreationTimestamp="2026-03-09 18:46:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:46:07.499870107 +0000 UTC m=+1304.661245963" watchObservedRunningTime="2026-03-09 18:46:07.505303985 +0000 UTC m=+1304.666679841" Mar 09 18:46:07 crc kubenswrapper[4821]: I0309 18:46:07.520569 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.520549539 podStartE2EDuration="2.520549539s" podCreationTimestamp="2026-03-09 18:46:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 18:46:07.51728516 +0000 UTC m=+1304.678661006" watchObservedRunningTime="2026-03-09 18:46:07.520549539 +0000 UTC m=+1304.681925395" Mar 09 18:46:07 crc kubenswrapper[4821]: I0309 18:46:07.560380 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65318c1d-df52-4dcf-873f-a76c7edcdeae" path="/var/lib/kubelet/pods/65318c1d-df52-4dcf-873f-a76c7edcdeae/volumes" Mar 09 18:46:07 crc kubenswrapper[4821]: I0309 18:46:07.560918 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9439e51-042e-4604-9368-b6e229dd141e" path="/var/lib/kubelet/pods/e9439e51-042e-4604-9368-b6e229dd141e/volumes" Mar 09 18:46:10 crc kubenswrapper[4821]: I0309 18:46:10.940921 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
Mar 09 18:46:11 crc kubenswrapper[4821]: I0309 18:46:11.787949 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 18:46:11 crc kubenswrapper[4821]: I0309 18:46:11.795487 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 18:46:12 crc kubenswrapper[4821]: I0309 18:46:12.549643 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 18:46:15 crc kubenswrapper[4821]: I0309 18:46:15.941166 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 18:46:15 crc kubenswrapper[4821]: I0309 18:46:15.978680 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 18:46:16 crc kubenswrapper[4821]: I0309 18:46:16.137801 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 18:46:16 crc kubenswrapper[4821]: I0309 18:46:16.173416 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 18:46:16 crc kubenswrapper[4821]: I0309 18:46:16.581170 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 18:46:16 crc kubenswrapper[4821]: I0309 18:46:16.630623 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 18:46:16 crc kubenswrapper[4821]: I0309 18:46:16.635384 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 18:46:29 crc kubenswrapper[4821]: I0309 18:46:29.914239 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 09 18:46:29 crc kubenswrapper[4821]: I0309 18:46:29.914882 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 09 18:46:30 crc kubenswrapper[4821]: I0309 18:46:30.090256 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 18:46:46 crc kubenswrapper[4821]: I0309 18:46:46.307011 4821 scope.go:117] "RemoveContainer" containerID="b122d014c4241d8c35ad69961f137cf1594e28435e37a845177d149c7b747022"
Mar 09 18:46:46 crc kubenswrapper[4821]: I0309 18:46:46.331900 4821 scope.go:117] "RemoveContainer" containerID="601e2f61c235d09dd59cfe8d70f0a79bdc357ece3132a5c45ea484312327a91d"
Mar 09 18:46:59 crc kubenswrapper[4821]: I0309 18:46:59.914345 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 09 18:46:59 crc kubenswrapper[4821]: I0309 18:46:59.914909 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 09 18:47:29 crc kubenswrapper[4821]: I0309 18:47:29.914218 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 09 18:47:29 crc kubenswrapper[4821]: I0309 18:47:29.914830 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 09 18:47:29 crc kubenswrapper[4821]: I0309 18:47:29.914890 4821 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs"
Mar 09 18:47:29 crc kubenswrapper[4821]: I0309 18:47:29.915803 4821 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f7abf213239bc2e4cfdfaa92f1f80f4d716d029c1e408736f60b7c4d3d559952"} pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 09 18:47:29 crc kubenswrapper[4821]: I0309 18:47:29.915894 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" containerID="cri-o://f7abf213239bc2e4cfdfaa92f1f80f4d716d029c1e408736f60b7c4d3d559952" gracePeriod=600
Mar 09 18:47:30 crc kubenswrapper[4821]: I0309 18:47:30.177756 4821 generic.go:334] "Generic (PLEG): container finished" podID="3270571a-a484-4e66-8035-f43509b58add" containerID="f7abf213239bc2e4cfdfaa92f1f80f4d716d029c1e408736f60b7c4d3d559952" exitCode=0
Mar 09 18:47:30 crc kubenswrapper[4821]: I0309 18:47:30.177893 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" event={"ID":"3270571a-a484-4e66-8035-f43509b58add","Type":"ContainerDied","Data":"f7abf213239bc2e4cfdfaa92f1f80f4d716d029c1e408736f60b7c4d3d559952"}
Mar 09 18:47:30 crc kubenswrapper[4821]: I0309 18:47:30.178414 4821 scope.go:117] "RemoveContainer" containerID="7d710c3d6413f5c12f3ff46fd212f945ef078be160c195d5feeac05d83b7fb9e"
Mar 09 18:47:31 crc kubenswrapper[4821]: I0309 18:47:31.188026 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" event={"ID":"3270571a-a484-4e66-8035-f43509b58add","Type":"ContainerStarted","Data":"f5a172990046c01ffb98709622d17f70e9aa1b883bb21cac9e356cbc1c725a0e"}
Mar 09 18:47:56 crc kubenswrapper[4821]: I0309 18:47:56.503553 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m9gq9"]
Mar 09 18:47:56 crc kubenswrapper[4821]: I0309 18:47:56.507392 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m9gq9"
Mar 09 18:47:56 crc kubenswrapper[4821]: I0309 18:47:56.530676 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m9gq9"]
Mar 09 18:47:56 crc kubenswrapper[4821]: I0309 18:47:56.631029 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0a61f47-13f9-46b6-943b-06764f0829f7-utilities\") pod \"redhat-operators-m9gq9\" (UID: \"e0a61f47-13f9-46b6-943b-06764f0829f7\") " pod="openshift-marketplace/redhat-operators-m9gq9"
Mar 09 18:47:56 crc kubenswrapper[4821]: I0309 18:47:56.631482 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0a61f47-13f9-46b6-943b-06764f0829f7-catalog-content\") pod \"redhat-operators-m9gq9\" (UID: \"e0a61f47-13f9-46b6-943b-06764f0829f7\") " pod="openshift-marketplace/redhat-operators-m9gq9"
Mar 09 18:47:56 crc kubenswrapper[4821]: I0309 18:47:56.631664 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sn7q\" (UniqueName: \"kubernetes.io/projected/e0a61f47-13f9-46b6-943b-06764f0829f7-kube-api-access-4sn7q\") pod \"redhat-operators-m9gq9\" (UID: \"e0a61f47-13f9-46b6-943b-06764f0829f7\") " pod="openshift-marketplace/redhat-operators-m9gq9"
Mar 09 18:47:56 crc kubenswrapper[4821]: I0309 18:47:56.733399 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0a61f47-13f9-46b6-943b-06764f0829f7-utilities\") pod \"redhat-operators-m9gq9\" (UID: \"e0a61f47-13f9-46b6-943b-06764f0829f7\") " pod="openshift-marketplace/redhat-operators-m9gq9"
Mar 09 18:47:56 crc kubenswrapper[4821]: I0309 18:47:56.733699 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0a61f47-13f9-46b6-943b-06764f0829f7-catalog-content\") pod \"redhat-operators-m9gq9\" (UID: \"e0a61f47-13f9-46b6-943b-06764f0829f7\") " pod="openshift-marketplace/redhat-operators-m9gq9" Mar 09 18:47:56 crc kubenswrapper[4821]: I0309 18:47:56.733790 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sn7q\" (UniqueName: \"kubernetes.io/projected/e0a61f47-13f9-46b6-943b-06764f0829f7-kube-api-access-4sn7q\") pod \"redhat-operators-m9gq9\" (UID: \"e0a61f47-13f9-46b6-943b-06764f0829f7\") " pod="openshift-marketplace/redhat-operators-m9gq9" Mar 09 18:47:56 crc kubenswrapper[4821]: I0309 18:47:56.734132 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0a61f47-13f9-46b6-943b-06764f0829f7-utilities\") pod \"redhat-operators-m9gq9\" (UID: \"e0a61f47-13f9-46b6-943b-06764f0829f7\") " pod="openshift-marketplace/redhat-operators-m9gq9" Mar 09 18:47:56 crc kubenswrapper[4821]: I0309 18:47:56.734180 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0a61f47-13f9-46b6-943b-06764f0829f7-catalog-content\") pod \"redhat-operators-m9gq9\" (UID: \"e0a61f47-13f9-46b6-943b-06764f0829f7\") " pod="openshift-marketplace/redhat-operators-m9gq9" Mar 09 18:47:56 crc kubenswrapper[4821]: I0309 18:47:56.765565 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sn7q\" (UniqueName: \"kubernetes.io/projected/e0a61f47-13f9-46b6-943b-06764f0829f7-kube-api-access-4sn7q\") pod \"redhat-operators-m9gq9\" (UID: \"e0a61f47-13f9-46b6-943b-06764f0829f7\") " pod="openshift-marketplace/redhat-operators-m9gq9" Mar 09 18:47:56 crc kubenswrapper[4821]: I0309 18:47:56.835697 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m9gq9" Mar 09 18:47:57 crc kubenswrapper[4821]: I0309 18:47:57.336965 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m9gq9"] Mar 09 18:47:57 crc kubenswrapper[4821]: I0309 18:47:57.436284 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m9gq9" event={"ID":"e0a61f47-13f9-46b6-943b-06764f0829f7","Type":"ContainerStarted","Data":"43789880d735dadf3306de980337ad50882301faae9275bd93bb4fe472450c5f"} Mar 09 18:47:58 crc kubenswrapper[4821]: I0309 18:47:58.444174 4821 generic.go:334] "Generic (PLEG): container finished" podID="e0a61f47-13f9-46b6-943b-06764f0829f7" containerID="2b27cef12376febec7cd7bbba1822d57b6a118181c664711903e5e1936221845" exitCode=0 Mar 09 18:47:58 crc kubenswrapper[4821]: I0309 18:47:58.444228 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m9gq9" event={"ID":"e0a61f47-13f9-46b6-943b-06764f0829f7","Type":"ContainerDied","Data":"2b27cef12376febec7cd7bbba1822d57b6a118181c664711903e5e1936221845"} Mar 09 18:47:59 crc kubenswrapper[4821]: I0309 18:47:59.452624 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m9gq9" event={"ID":"e0a61f47-13f9-46b6-943b-06764f0829f7","Type":"ContainerStarted","Data":"95596a8aa400ed1e408a8802c1adfa9502687fdfb0410706141405972fc06915"} Mar 09 18:48:00 crc kubenswrapper[4821]: I0309 18:48:00.139277 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29551368-j84z2"] Mar 09 18:48:00 crc kubenswrapper[4821]: I0309 18:48:00.141203 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29551368-j84z2" Mar 09 18:48:00 crc kubenswrapper[4821]: I0309 18:48:00.144300 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 09 18:48:00 crc kubenswrapper[4821]: I0309 18:48:00.144454 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 09 18:48:00 crc kubenswrapper[4821]: I0309 18:48:00.144527 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-mxq7c" Mar 09 18:48:00 crc kubenswrapper[4821]: I0309 18:48:00.148994 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551368-j84z2"] Mar 09 18:48:00 crc kubenswrapper[4821]: I0309 18:48:00.189186 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chjck\" (UniqueName: \"kubernetes.io/projected/b095c716-ece8-4ee9-af0e-6f9778764b02-kube-api-access-chjck\") pod \"auto-csr-approver-29551368-j84z2\" (UID: \"b095c716-ece8-4ee9-af0e-6f9778764b02\") " pod="openshift-infra/auto-csr-approver-29551368-j84z2" Mar 09 18:48:00 crc kubenswrapper[4821]: I0309 18:48:00.290489 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chjck\" (UniqueName: \"kubernetes.io/projected/b095c716-ece8-4ee9-af0e-6f9778764b02-kube-api-access-chjck\") pod \"auto-csr-approver-29551368-j84z2\" (UID: \"b095c716-ece8-4ee9-af0e-6f9778764b02\") " pod="openshift-infra/auto-csr-approver-29551368-j84z2" Mar 09 18:48:00 crc kubenswrapper[4821]: I0309 18:48:00.313010 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chjck\" (UniqueName: \"kubernetes.io/projected/b095c716-ece8-4ee9-af0e-6f9778764b02-kube-api-access-chjck\") pod \"auto-csr-approver-29551368-j84z2\" (UID: \"b095c716-ece8-4ee9-af0e-6f9778764b02\") " 
pod="openshift-infra/auto-csr-approver-29551368-j84z2" Mar 09 18:48:00 crc kubenswrapper[4821]: I0309 18:48:00.457819 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551368-j84z2" Mar 09 18:48:00 crc kubenswrapper[4821]: I0309 18:48:00.465877 4821 generic.go:334] "Generic (PLEG): container finished" podID="e0a61f47-13f9-46b6-943b-06764f0829f7" containerID="95596a8aa400ed1e408a8802c1adfa9502687fdfb0410706141405972fc06915" exitCode=0 Mar 09 18:48:00 crc kubenswrapper[4821]: I0309 18:48:00.465975 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m9gq9" event={"ID":"e0a61f47-13f9-46b6-943b-06764f0829f7","Type":"ContainerDied","Data":"95596a8aa400ed1e408a8802c1adfa9502687fdfb0410706141405972fc06915"} Mar 09 18:48:00 crc kubenswrapper[4821]: I0309 18:48:00.945468 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551368-j84z2"] Mar 09 18:48:00 crc kubenswrapper[4821]: W0309 18:48:00.949572 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb095c716_ece8_4ee9_af0e_6f9778764b02.slice/crio-59b57bce9fa9fee8c60ccdb739e48beb62752f0a1ae047e17fcdc6a10af1961a WatchSource:0}: Error finding container 59b57bce9fa9fee8c60ccdb739e48beb62752f0a1ae047e17fcdc6a10af1961a: Status 404 returned error can't find the container with id 59b57bce9fa9fee8c60ccdb739e48beb62752f0a1ae047e17fcdc6a10af1961a Mar 09 18:48:01 crc kubenswrapper[4821]: I0309 18:48:01.478555 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551368-j84z2" event={"ID":"b095c716-ece8-4ee9-af0e-6f9778764b02","Type":"ContainerStarted","Data":"59b57bce9fa9fee8c60ccdb739e48beb62752f0a1ae047e17fcdc6a10af1961a"} Mar 09 18:48:02 crc kubenswrapper[4821]: I0309 18:48:02.488519 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-m9gq9" event={"ID":"e0a61f47-13f9-46b6-943b-06764f0829f7","Type":"ContainerStarted","Data":"6dd6c5932ff39d8b3de899eda9a26fbf1c80af127a268d40b0156960a79fc9e7"} Mar 09 18:48:02 crc kubenswrapper[4821]: I0309 18:48:02.490632 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551368-j84z2" event={"ID":"b095c716-ece8-4ee9-af0e-6f9778764b02","Type":"ContainerStarted","Data":"9291ce03bff390325369731633ef6444fa49b6adbba43dd70cd13ecfa905a257"} Mar 09 18:48:02 crc kubenswrapper[4821]: I0309 18:48:02.512432 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m9gq9" podStartSLOduration=3.897215022 podStartE2EDuration="6.512410584s" podCreationTimestamp="2026-03-09 18:47:56 +0000 UTC" firstStartedPulling="2026-03-09 18:47:58.446239319 +0000 UTC m=+1415.607615175" lastFinishedPulling="2026-03-09 18:48:01.061434881 +0000 UTC m=+1418.222810737" observedRunningTime="2026-03-09 18:48:02.510225324 +0000 UTC m=+1419.671601170" watchObservedRunningTime="2026-03-09 18:48:02.512410584 +0000 UTC m=+1419.673786440" Mar 09 18:48:02 crc kubenswrapper[4821]: I0309 18:48:02.533291 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29551368-j84z2" podStartSLOduration=1.384629801 podStartE2EDuration="2.533273318s" podCreationTimestamp="2026-03-09 18:48:00 +0000 UTC" firstStartedPulling="2026-03-09 18:48:00.952916103 +0000 UTC m=+1418.114291969" lastFinishedPulling="2026-03-09 18:48:02.10155959 +0000 UTC m=+1419.262935486" observedRunningTime="2026-03-09 18:48:02.526146115 +0000 UTC m=+1419.687521991" watchObservedRunningTime="2026-03-09 18:48:02.533273318 +0000 UTC m=+1419.694649174" Mar 09 18:48:03 crc kubenswrapper[4821]: I0309 18:48:03.501085 4821 generic.go:334] "Generic (PLEG): container finished" podID="b095c716-ece8-4ee9-af0e-6f9778764b02" 
containerID="9291ce03bff390325369731633ef6444fa49b6adbba43dd70cd13ecfa905a257" exitCode=0 Mar 09 18:48:03 crc kubenswrapper[4821]: I0309 18:48:03.501158 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551368-j84z2" event={"ID":"b095c716-ece8-4ee9-af0e-6f9778764b02","Type":"ContainerDied","Data":"9291ce03bff390325369731633ef6444fa49b6adbba43dd70cd13ecfa905a257"} Mar 09 18:48:04 crc kubenswrapper[4821]: I0309 18:48:04.842490 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551368-j84z2" Mar 09 18:48:04 crc kubenswrapper[4821]: I0309 18:48:04.971819 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chjck\" (UniqueName: \"kubernetes.io/projected/b095c716-ece8-4ee9-af0e-6f9778764b02-kube-api-access-chjck\") pod \"b095c716-ece8-4ee9-af0e-6f9778764b02\" (UID: \"b095c716-ece8-4ee9-af0e-6f9778764b02\") " Mar 09 18:48:04 crc kubenswrapper[4821]: I0309 18:48:04.979562 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b095c716-ece8-4ee9-af0e-6f9778764b02-kube-api-access-chjck" (OuterVolumeSpecName: "kube-api-access-chjck") pod "b095c716-ece8-4ee9-af0e-6f9778764b02" (UID: "b095c716-ece8-4ee9-af0e-6f9778764b02"). InnerVolumeSpecName "kube-api-access-chjck". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:48:05 crc kubenswrapper[4821]: I0309 18:48:05.074191 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chjck\" (UniqueName: \"kubernetes.io/projected/b095c716-ece8-4ee9-af0e-6f9778764b02-kube-api-access-chjck\") on node \"crc\" DevicePath \"\"" Mar 09 18:48:05 crc kubenswrapper[4821]: I0309 18:48:05.517848 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551368-j84z2" event={"ID":"b095c716-ece8-4ee9-af0e-6f9778764b02","Type":"ContainerDied","Data":"59b57bce9fa9fee8c60ccdb739e48beb62752f0a1ae047e17fcdc6a10af1961a"} Mar 09 18:48:05 crc kubenswrapper[4821]: I0309 18:48:05.517901 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59b57bce9fa9fee8c60ccdb739e48beb62752f0a1ae047e17fcdc6a10af1961a" Mar 09 18:48:05 crc kubenswrapper[4821]: I0309 18:48:05.517974 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551368-j84z2" Mar 09 18:48:05 crc kubenswrapper[4821]: I0309 18:48:05.599972 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29551362-4l6b5"] Mar 09 18:48:05 crc kubenswrapper[4821]: I0309 18:48:05.606252 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29551362-4l6b5"] Mar 09 18:48:06 crc kubenswrapper[4821]: I0309 18:48:06.836348 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m9gq9" Mar 09 18:48:06 crc kubenswrapper[4821]: I0309 18:48:06.836658 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m9gq9" Mar 09 18:48:07 crc kubenswrapper[4821]: I0309 18:48:07.563889 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbe98840-dd4e-4195-9627-71f679ccbeea" 
path="/var/lib/kubelet/pods/dbe98840-dd4e-4195-9627-71f679ccbeea/volumes" Mar 09 18:48:07 crc kubenswrapper[4821]: I0309 18:48:07.906537 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m9gq9" podUID="e0a61f47-13f9-46b6-943b-06764f0829f7" containerName="registry-server" probeResult="failure" output=< Mar 09 18:48:07 crc kubenswrapper[4821]: timeout: failed to connect service ":50051" within 1s Mar 09 18:48:07 crc kubenswrapper[4821]: > Mar 09 18:48:16 crc kubenswrapper[4821]: I0309 18:48:16.892891 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m9gq9" Mar 09 18:48:16 crc kubenswrapper[4821]: I0309 18:48:16.952812 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m9gq9" Mar 09 18:48:20 crc kubenswrapper[4821]: I0309 18:48:20.478649 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m9gq9"] Mar 09 18:48:20 crc kubenswrapper[4821]: I0309 18:48:20.479126 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m9gq9" podUID="e0a61f47-13f9-46b6-943b-06764f0829f7" containerName="registry-server" containerID="cri-o://6dd6c5932ff39d8b3de899eda9a26fbf1c80af127a268d40b0156960a79fc9e7" gracePeriod=2 Mar 09 18:48:20 crc kubenswrapper[4821]: I0309 18:48:20.647614 4821 generic.go:334] "Generic (PLEG): container finished" podID="e0a61f47-13f9-46b6-943b-06764f0829f7" containerID="6dd6c5932ff39d8b3de899eda9a26fbf1c80af127a268d40b0156960a79fc9e7" exitCode=0 Mar 09 18:48:20 crc kubenswrapper[4821]: I0309 18:48:20.647681 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m9gq9" event={"ID":"e0a61f47-13f9-46b6-943b-06764f0829f7","Type":"ContainerDied","Data":"6dd6c5932ff39d8b3de899eda9a26fbf1c80af127a268d40b0156960a79fc9e7"} Mar 09 18:48:20 crc 
kubenswrapper[4821]: I0309 18:48:20.904514 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m9gq9" Mar 09 18:48:20 crc kubenswrapper[4821]: I0309 18:48:20.941955 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0a61f47-13f9-46b6-943b-06764f0829f7-utilities\") pod \"e0a61f47-13f9-46b6-943b-06764f0829f7\" (UID: \"e0a61f47-13f9-46b6-943b-06764f0829f7\") " Mar 09 18:48:20 crc kubenswrapper[4821]: I0309 18:48:20.942058 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0a61f47-13f9-46b6-943b-06764f0829f7-catalog-content\") pod \"e0a61f47-13f9-46b6-943b-06764f0829f7\" (UID: \"e0a61f47-13f9-46b6-943b-06764f0829f7\") " Mar 09 18:48:20 crc kubenswrapper[4821]: I0309 18:48:20.942180 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sn7q\" (UniqueName: \"kubernetes.io/projected/e0a61f47-13f9-46b6-943b-06764f0829f7-kube-api-access-4sn7q\") pod \"e0a61f47-13f9-46b6-943b-06764f0829f7\" (UID: \"e0a61f47-13f9-46b6-943b-06764f0829f7\") " Mar 09 18:48:20 crc kubenswrapper[4821]: I0309 18:48:20.943082 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0a61f47-13f9-46b6-943b-06764f0829f7-utilities" (OuterVolumeSpecName: "utilities") pod "e0a61f47-13f9-46b6-943b-06764f0829f7" (UID: "e0a61f47-13f9-46b6-943b-06764f0829f7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:48:20 crc kubenswrapper[4821]: I0309 18:48:20.949582 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0a61f47-13f9-46b6-943b-06764f0829f7-kube-api-access-4sn7q" (OuterVolumeSpecName: "kube-api-access-4sn7q") pod "e0a61f47-13f9-46b6-943b-06764f0829f7" (UID: "e0a61f47-13f9-46b6-943b-06764f0829f7"). InnerVolumeSpecName "kube-api-access-4sn7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:48:21 crc kubenswrapper[4821]: I0309 18:48:21.043999 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sn7q\" (UniqueName: \"kubernetes.io/projected/e0a61f47-13f9-46b6-943b-06764f0829f7-kube-api-access-4sn7q\") on node \"crc\" DevicePath \"\"" Mar 09 18:48:21 crc kubenswrapper[4821]: I0309 18:48:21.044033 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0a61f47-13f9-46b6-943b-06764f0829f7-utilities\") on node \"crc\" DevicePath \"\"" Mar 09 18:48:21 crc kubenswrapper[4821]: I0309 18:48:21.071004 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0a61f47-13f9-46b6-943b-06764f0829f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e0a61f47-13f9-46b6-943b-06764f0829f7" (UID: "e0a61f47-13f9-46b6-943b-06764f0829f7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:48:21 crc kubenswrapper[4821]: I0309 18:48:21.145185 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0a61f47-13f9-46b6-943b-06764f0829f7-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 09 18:48:21 crc kubenswrapper[4821]: I0309 18:48:21.655599 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m9gq9" event={"ID":"e0a61f47-13f9-46b6-943b-06764f0829f7","Type":"ContainerDied","Data":"43789880d735dadf3306de980337ad50882301faae9275bd93bb4fe472450c5f"} Mar 09 18:48:21 crc kubenswrapper[4821]: I0309 18:48:21.655662 4821 scope.go:117] "RemoveContainer" containerID="6dd6c5932ff39d8b3de899eda9a26fbf1c80af127a268d40b0156960a79fc9e7" Mar 09 18:48:21 crc kubenswrapper[4821]: I0309 18:48:21.655665 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m9gq9" Mar 09 18:48:21 crc kubenswrapper[4821]: I0309 18:48:21.678537 4821 scope.go:117] "RemoveContainer" containerID="95596a8aa400ed1e408a8802c1adfa9502687fdfb0410706141405972fc06915" Mar 09 18:48:21 crc kubenswrapper[4821]: I0309 18:48:21.678882 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m9gq9"] Mar 09 18:48:21 crc kubenswrapper[4821]: I0309 18:48:21.688788 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m9gq9"] Mar 09 18:48:21 crc kubenswrapper[4821]: I0309 18:48:21.696825 4821 scope.go:117] "RemoveContainer" containerID="2b27cef12376febec7cd7bbba1822d57b6a118181c664711903e5e1936221845" Mar 09 18:48:23 crc kubenswrapper[4821]: I0309 18:48:23.569571 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0a61f47-13f9-46b6-943b-06764f0829f7" path="/var/lib/kubelet/pods/e0a61f47-13f9-46b6-943b-06764f0829f7/volumes" Mar 09 18:48:46 crc 
kubenswrapper[4821]: I0309 18:48:46.496923 4821 scope.go:117] "RemoveContainer" containerID="292d194dec3c2b376499143dee89f028951fbfaeff19f0f3f57efcbf39d62f2b" Mar 09 18:49:59 crc kubenswrapper[4821]: I0309 18:49:59.913912 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 09 18:49:59 crc kubenswrapper[4821]: I0309 18:49:59.914369 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 09 18:50:00 crc kubenswrapper[4821]: I0309 18:50:00.143587 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29551370-m8phc"] Mar 09 18:50:00 crc kubenswrapper[4821]: E0309 18:50:00.144189 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0a61f47-13f9-46b6-943b-06764f0829f7" containerName="registry-server" Mar 09 18:50:00 crc kubenswrapper[4821]: I0309 18:50:00.144292 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0a61f47-13f9-46b6-943b-06764f0829f7" containerName="registry-server" Mar 09 18:50:00 crc kubenswrapper[4821]: E0309 18:50:00.144435 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b095c716-ece8-4ee9-af0e-6f9778764b02" containerName="oc" Mar 09 18:50:00 crc kubenswrapper[4821]: I0309 18:50:00.144553 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="b095c716-ece8-4ee9-af0e-6f9778764b02" containerName="oc" Mar 09 18:50:00 crc kubenswrapper[4821]: E0309 18:50:00.144643 4821 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e0a61f47-13f9-46b6-943b-06764f0829f7" containerName="extract-utilities" Mar 09 18:50:00 crc kubenswrapper[4821]: I0309 18:50:00.144721 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0a61f47-13f9-46b6-943b-06764f0829f7" containerName="extract-utilities" Mar 09 18:50:00 crc kubenswrapper[4821]: E0309 18:50:00.144818 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0a61f47-13f9-46b6-943b-06764f0829f7" containerName="extract-content" Mar 09 18:50:00 crc kubenswrapper[4821]: I0309 18:50:00.144898 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0a61f47-13f9-46b6-943b-06764f0829f7" containerName="extract-content" Mar 09 18:50:00 crc kubenswrapper[4821]: I0309 18:50:00.145181 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="b095c716-ece8-4ee9-af0e-6f9778764b02" containerName="oc" Mar 09 18:50:00 crc kubenswrapper[4821]: I0309 18:50:00.145277 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0a61f47-13f9-46b6-943b-06764f0829f7" containerName="registry-server" Mar 09 18:50:00 crc kubenswrapper[4821]: I0309 18:50:00.146025 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29551370-m8phc" Mar 09 18:50:00 crc kubenswrapper[4821]: I0309 18:50:00.148336 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 09 18:50:00 crc kubenswrapper[4821]: I0309 18:50:00.148397 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 09 18:50:00 crc kubenswrapper[4821]: I0309 18:50:00.149755 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-mxq7c" Mar 09 18:50:00 crc kubenswrapper[4821]: I0309 18:50:00.152572 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551370-m8phc"] Mar 09 18:50:00 crc kubenswrapper[4821]: I0309 18:50:00.338894 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mwcr\" (UniqueName: \"kubernetes.io/projected/a24d26da-b3a7-4e07-a176-a690eec98e40-kube-api-access-4mwcr\") pod \"auto-csr-approver-29551370-m8phc\" (UID: \"a24d26da-b3a7-4e07-a176-a690eec98e40\") " pod="openshift-infra/auto-csr-approver-29551370-m8phc" Mar 09 18:50:00 crc kubenswrapper[4821]: I0309 18:50:00.440725 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mwcr\" (UniqueName: \"kubernetes.io/projected/a24d26da-b3a7-4e07-a176-a690eec98e40-kube-api-access-4mwcr\") pod \"auto-csr-approver-29551370-m8phc\" (UID: \"a24d26da-b3a7-4e07-a176-a690eec98e40\") " pod="openshift-infra/auto-csr-approver-29551370-m8phc" Mar 09 18:50:00 crc kubenswrapper[4821]: I0309 18:50:00.473116 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mwcr\" (UniqueName: \"kubernetes.io/projected/a24d26da-b3a7-4e07-a176-a690eec98e40-kube-api-access-4mwcr\") pod \"auto-csr-approver-29551370-m8phc\" (UID: \"a24d26da-b3a7-4e07-a176-a690eec98e40\") " 
pod="openshift-infra/auto-csr-approver-29551370-m8phc" Mar 09 18:50:00 crc kubenswrapper[4821]: I0309 18:50:00.764511 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551370-m8phc" Mar 09 18:50:01 crc kubenswrapper[4821]: I0309 18:50:01.199439 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551370-m8phc"] Mar 09 18:50:01 crc kubenswrapper[4821]: I0309 18:50:01.209294 4821 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 09 18:50:01 crc kubenswrapper[4821]: I0309 18:50:01.731041 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551370-m8phc" event={"ID":"a24d26da-b3a7-4e07-a176-a690eec98e40","Type":"ContainerStarted","Data":"af929173a6e54bfdf0823b5551f2cace5321ec46c393f7fe279f3d865b1df964"} Mar 09 18:50:03 crc kubenswrapper[4821]: I0309 18:50:03.745059 4821 generic.go:334] "Generic (PLEG): container finished" podID="a24d26da-b3a7-4e07-a176-a690eec98e40" containerID="eedf43a31728d687a5a8326336dfa3af538712ed0c4e70e9e145c2897334016d" exitCode=0 Mar 09 18:50:03 crc kubenswrapper[4821]: I0309 18:50:03.745130 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551370-m8phc" event={"ID":"a24d26da-b3a7-4e07-a176-a690eec98e40","Type":"ContainerDied","Data":"eedf43a31728d687a5a8326336dfa3af538712ed0c4e70e9e145c2897334016d"} Mar 09 18:50:05 crc kubenswrapper[4821]: I0309 18:50:05.110098 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29551370-m8phc" Mar 09 18:50:05 crc kubenswrapper[4821]: I0309 18:50:05.254270 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mwcr\" (UniqueName: \"kubernetes.io/projected/a24d26da-b3a7-4e07-a176-a690eec98e40-kube-api-access-4mwcr\") pod \"a24d26da-b3a7-4e07-a176-a690eec98e40\" (UID: \"a24d26da-b3a7-4e07-a176-a690eec98e40\") " Mar 09 18:50:05 crc kubenswrapper[4821]: I0309 18:50:05.263672 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a24d26da-b3a7-4e07-a176-a690eec98e40-kube-api-access-4mwcr" (OuterVolumeSpecName: "kube-api-access-4mwcr") pod "a24d26da-b3a7-4e07-a176-a690eec98e40" (UID: "a24d26da-b3a7-4e07-a176-a690eec98e40"). InnerVolumeSpecName "kube-api-access-4mwcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:50:05 crc kubenswrapper[4821]: I0309 18:50:05.356153 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mwcr\" (UniqueName: \"kubernetes.io/projected/a24d26da-b3a7-4e07-a176-a690eec98e40-kube-api-access-4mwcr\") on node \"crc\" DevicePath \"\"" Mar 09 18:50:05 crc kubenswrapper[4821]: I0309 18:50:05.762014 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551370-m8phc" event={"ID":"a24d26da-b3a7-4e07-a176-a690eec98e40","Type":"ContainerDied","Data":"af929173a6e54bfdf0823b5551f2cace5321ec46c393f7fe279f3d865b1df964"} Mar 09 18:50:05 crc kubenswrapper[4821]: I0309 18:50:05.762053 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af929173a6e54bfdf0823b5551f2cace5321ec46c393f7fe279f3d865b1df964" Mar 09 18:50:05 crc kubenswrapper[4821]: I0309 18:50:05.762070 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29551370-m8phc" Mar 09 18:50:06 crc kubenswrapper[4821]: I0309 18:50:06.180178 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29551364-fpxwg"] Mar 09 18:50:06 crc kubenswrapper[4821]: I0309 18:50:06.187151 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29551364-fpxwg"] Mar 09 18:50:07 crc kubenswrapper[4821]: I0309 18:50:07.562466 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16076fc5-6d60-45a7-a6f7-0110fa46bfa9" path="/var/lib/kubelet/pods/16076fc5-6d60-45a7-a6f7-0110fa46bfa9/volumes" Mar 09 18:50:21 crc kubenswrapper[4821]: I0309 18:50:21.493531 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tfgw6"] Mar 09 18:50:21 crc kubenswrapper[4821]: E0309 18:50:21.494454 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a24d26da-b3a7-4e07-a176-a690eec98e40" containerName="oc" Mar 09 18:50:21 crc kubenswrapper[4821]: I0309 18:50:21.494473 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="a24d26da-b3a7-4e07-a176-a690eec98e40" containerName="oc" Mar 09 18:50:21 crc kubenswrapper[4821]: I0309 18:50:21.494624 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="a24d26da-b3a7-4e07-a176-a690eec98e40" containerName="oc" Mar 09 18:50:21 crc kubenswrapper[4821]: I0309 18:50:21.495790 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tfgw6" Mar 09 18:50:21 crc kubenswrapper[4821]: I0309 18:50:21.506470 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tfgw6"] Mar 09 18:50:21 crc kubenswrapper[4821]: I0309 18:50:21.647442 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad530868-cab3-4c31-b91f-0f418d2e912f-catalog-content\") pod \"redhat-marketplace-tfgw6\" (UID: \"ad530868-cab3-4c31-b91f-0f418d2e912f\") " pod="openshift-marketplace/redhat-marketplace-tfgw6" Mar 09 18:50:21 crc kubenswrapper[4821]: I0309 18:50:21.648806 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad530868-cab3-4c31-b91f-0f418d2e912f-utilities\") pod \"redhat-marketplace-tfgw6\" (UID: \"ad530868-cab3-4c31-b91f-0f418d2e912f\") " pod="openshift-marketplace/redhat-marketplace-tfgw6" Mar 09 18:50:21 crc kubenswrapper[4821]: I0309 18:50:21.648866 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccwrt\" (UniqueName: \"kubernetes.io/projected/ad530868-cab3-4c31-b91f-0f418d2e912f-kube-api-access-ccwrt\") pod \"redhat-marketplace-tfgw6\" (UID: \"ad530868-cab3-4c31-b91f-0f418d2e912f\") " pod="openshift-marketplace/redhat-marketplace-tfgw6" Mar 09 18:50:21 crc kubenswrapper[4821]: I0309 18:50:21.749849 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad530868-cab3-4c31-b91f-0f418d2e912f-utilities\") pod \"redhat-marketplace-tfgw6\" (UID: \"ad530868-cab3-4c31-b91f-0f418d2e912f\") " pod="openshift-marketplace/redhat-marketplace-tfgw6" Mar 09 18:50:21 crc kubenswrapper[4821]: I0309 18:50:21.749918 4821 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-ccwrt\" (UniqueName: \"kubernetes.io/projected/ad530868-cab3-4c31-b91f-0f418d2e912f-kube-api-access-ccwrt\") pod \"redhat-marketplace-tfgw6\" (UID: \"ad530868-cab3-4c31-b91f-0f418d2e912f\") " pod="openshift-marketplace/redhat-marketplace-tfgw6" Mar 09 18:50:21 crc kubenswrapper[4821]: I0309 18:50:21.749985 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad530868-cab3-4c31-b91f-0f418d2e912f-catalog-content\") pod \"redhat-marketplace-tfgw6\" (UID: \"ad530868-cab3-4c31-b91f-0f418d2e912f\") " pod="openshift-marketplace/redhat-marketplace-tfgw6" Mar 09 18:50:21 crc kubenswrapper[4821]: I0309 18:50:21.750510 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad530868-cab3-4c31-b91f-0f418d2e912f-utilities\") pod \"redhat-marketplace-tfgw6\" (UID: \"ad530868-cab3-4c31-b91f-0f418d2e912f\") " pod="openshift-marketplace/redhat-marketplace-tfgw6" Mar 09 18:50:21 crc kubenswrapper[4821]: I0309 18:50:21.750535 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad530868-cab3-4c31-b91f-0f418d2e912f-catalog-content\") pod \"redhat-marketplace-tfgw6\" (UID: \"ad530868-cab3-4c31-b91f-0f418d2e912f\") " pod="openshift-marketplace/redhat-marketplace-tfgw6" Mar 09 18:50:21 crc kubenswrapper[4821]: I0309 18:50:21.776372 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccwrt\" (UniqueName: \"kubernetes.io/projected/ad530868-cab3-4c31-b91f-0f418d2e912f-kube-api-access-ccwrt\") pod \"redhat-marketplace-tfgw6\" (UID: \"ad530868-cab3-4c31-b91f-0f418d2e912f\") " pod="openshift-marketplace/redhat-marketplace-tfgw6" Mar 09 18:50:21 crc kubenswrapper[4821]: I0309 18:50:21.814522 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tfgw6" Mar 09 18:50:22 crc kubenswrapper[4821]: I0309 18:50:22.307761 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tfgw6"] Mar 09 18:50:22 crc kubenswrapper[4821]: W0309 18:50:22.318309 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad530868_cab3_4c31_b91f_0f418d2e912f.slice/crio-b9b936847221f381cc5b9f74dff346071f9f368aebd9761970fd4b6d166e1e16 WatchSource:0}: Error finding container b9b936847221f381cc5b9f74dff346071f9f368aebd9761970fd4b6d166e1e16: Status 404 returned error can't find the container with id b9b936847221f381cc5b9f74dff346071f9f368aebd9761970fd4b6d166e1e16 Mar 09 18:50:22 crc kubenswrapper[4821]: I0309 18:50:22.906109 4821 generic.go:334] "Generic (PLEG): container finished" podID="ad530868-cab3-4c31-b91f-0f418d2e912f" containerID="3b42ff71e10480206c4e09d8c3c927e89e5d08c3ebc96e6556f3cee8127c8500" exitCode=0 Mar 09 18:50:22 crc kubenswrapper[4821]: I0309 18:50:22.906160 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tfgw6" event={"ID":"ad530868-cab3-4c31-b91f-0f418d2e912f","Type":"ContainerDied","Data":"3b42ff71e10480206c4e09d8c3c927e89e5d08c3ebc96e6556f3cee8127c8500"} Mar 09 18:50:22 crc kubenswrapper[4821]: I0309 18:50:22.906422 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tfgw6" event={"ID":"ad530868-cab3-4c31-b91f-0f418d2e912f","Type":"ContainerStarted","Data":"b9b936847221f381cc5b9f74dff346071f9f368aebd9761970fd4b6d166e1e16"} Mar 09 18:50:23 crc kubenswrapper[4821]: I0309 18:50:23.916178 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tfgw6" 
event={"ID":"ad530868-cab3-4c31-b91f-0f418d2e912f","Type":"ContainerStarted","Data":"ebd6ae50c9fac16856fa03621b567bd70679d140b30500aa0944fb4bd376710c"} Mar 09 18:50:24 crc kubenswrapper[4821]: I0309 18:50:24.926103 4821 generic.go:334] "Generic (PLEG): container finished" podID="ad530868-cab3-4c31-b91f-0f418d2e912f" containerID="ebd6ae50c9fac16856fa03621b567bd70679d140b30500aa0944fb4bd376710c" exitCode=0 Mar 09 18:50:24 crc kubenswrapper[4821]: I0309 18:50:24.926160 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tfgw6" event={"ID":"ad530868-cab3-4c31-b91f-0f418d2e912f","Type":"ContainerDied","Data":"ebd6ae50c9fac16856fa03621b567bd70679d140b30500aa0944fb4bd376710c"} Mar 09 18:50:25 crc kubenswrapper[4821]: I0309 18:50:25.938391 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tfgw6" event={"ID":"ad530868-cab3-4c31-b91f-0f418d2e912f","Type":"ContainerStarted","Data":"dcaf67a03074fd7f5e0eb80eb107aacc7a4d83b48926381190912f018c68113f"} Mar 09 18:50:25 crc kubenswrapper[4821]: I0309 18:50:25.983161 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tfgw6" podStartSLOduration=2.477543057 podStartE2EDuration="4.983134888s" podCreationTimestamp="2026-03-09 18:50:21 +0000 UTC" firstStartedPulling="2026-03-09 18:50:22.908237907 +0000 UTC m=+1560.069613763" lastFinishedPulling="2026-03-09 18:50:25.413829738 +0000 UTC m=+1562.575205594" observedRunningTime="2026-03-09 18:50:25.959423384 +0000 UTC m=+1563.120799260" watchObservedRunningTime="2026-03-09 18:50:25.983134888 +0000 UTC m=+1563.144510754" Mar 09 18:50:29 crc kubenswrapper[4821]: I0309 18:50:29.914117 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Mar 09 18:50:29 crc kubenswrapper[4821]: I0309 18:50:29.914751 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 09 18:50:31 crc kubenswrapper[4821]: I0309 18:50:31.815387 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tfgw6" Mar 09 18:50:31 crc kubenswrapper[4821]: I0309 18:50:31.815560 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tfgw6" Mar 09 18:50:31 crc kubenswrapper[4821]: I0309 18:50:31.863429 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tfgw6" Mar 09 18:50:32 crc kubenswrapper[4821]: I0309 18:50:32.038312 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tfgw6" Mar 09 18:50:35 crc kubenswrapper[4821]: I0309 18:50:35.476607 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tfgw6"] Mar 09 18:50:35 crc kubenswrapper[4821]: I0309 18:50:35.477061 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tfgw6" podUID="ad530868-cab3-4c31-b91f-0f418d2e912f" containerName="registry-server" containerID="cri-o://dcaf67a03074fd7f5e0eb80eb107aacc7a4d83b48926381190912f018c68113f" gracePeriod=2 Mar 09 18:50:36 crc kubenswrapper[4821]: I0309 18:50:36.000237 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tfgw6" Mar 09 18:50:36 crc kubenswrapper[4821]: I0309 18:50:36.022490 4821 generic.go:334] "Generic (PLEG): container finished" podID="ad530868-cab3-4c31-b91f-0f418d2e912f" containerID="dcaf67a03074fd7f5e0eb80eb107aacc7a4d83b48926381190912f018c68113f" exitCode=0 Mar 09 18:50:36 crc kubenswrapper[4821]: I0309 18:50:36.022538 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tfgw6" event={"ID":"ad530868-cab3-4c31-b91f-0f418d2e912f","Type":"ContainerDied","Data":"dcaf67a03074fd7f5e0eb80eb107aacc7a4d83b48926381190912f018c68113f"} Mar 09 18:50:36 crc kubenswrapper[4821]: I0309 18:50:36.022568 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tfgw6" event={"ID":"ad530868-cab3-4c31-b91f-0f418d2e912f","Type":"ContainerDied","Data":"b9b936847221f381cc5b9f74dff346071f9f368aebd9761970fd4b6d166e1e16"} Mar 09 18:50:36 crc kubenswrapper[4821]: I0309 18:50:36.022588 4821 scope.go:117] "RemoveContainer" containerID="dcaf67a03074fd7f5e0eb80eb107aacc7a4d83b48926381190912f018c68113f" Mar 09 18:50:36 crc kubenswrapper[4821]: I0309 18:50:36.022725 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tfgw6" Mar 09 18:50:36 crc kubenswrapper[4821]: I0309 18:50:36.056242 4821 scope.go:117] "RemoveContainer" containerID="ebd6ae50c9fac16856fa03621b567bd70679d140b30500aa0944fb4bd376710c" Mar 09 18:50:36 crc kubenswrapper[4821]: I0309 18:50:36.085987 4821 scope.go:117] "RemoveContainer" containerID="3b42ff71e10480206c4e09d8c3c927e89e5d08c3ebc96e6556f3cee8127c8500" Mar 09 18:50:36 crc kubenswrapper[4821]: I0309 18:50:36.105528 4821 scope.go:117] "RemoveContainer" containerID="dcaf67a03074fd7f5e0eb80eb107aacc7a4d83b48926381190912f018c68113f" Mar 09 18:50:36 crc kubenswrapper[4821]: E0309 18:50:36.106542 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcaf67a03074fd7f5e0eb80eb107aacc7a4d83b48926381190912f018c68113f\": container with ID starting with dcaf67a03074fd7f5e0eb80eb107aacc7a4d83b48926381190912f018c68113f not found: ID does not exist" containerID="dcaf67a03074fd7f5e0eb80eb107aacc7a4d83b48926381190912f018c68113f" Mar 09 18:50:36 crc kubenswrapper[4821]: I0309 18:50:36.106586 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcaf67a03074fd7f5e0eb80eb107aacc7a4d83b48926381190912f018c68113f"} err="failed to get container status \"dcaf67a03074fd7f5e0eb80eb107aacc7a4d83b48926381190912f018c68113f\": rpc error: code = NotFound desc = could not find container \"dcaf67a03074fd7f5e0eb80eb107aacc7a4d83b48926381190912f018c68113f\": container with ID starting with dcaf67a03074fd7f5e0eb80eb107aacc7a4d83b48926381190912f018c68113f not found: ID does not exist" Mar 09 18:50:36 crc kubenswrapper[4821]: I0309 18:50:36.106612 4821 scope.go:117] "RemoveContainer" containerID="ebd6ae50c9fac16856fa03621b567bd70679d140b30500aa0944fb4bd376710c" Mar 09 18:50:36 crc kubenswrapper[4821]: E0309 18:50:36.106959 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"ebd6ae50c9fac16856fa03621b567bd70679d140b30500aa0944fb4bd376710c\": container with ID starting with ebd6ae50c9fac16856fa03621b567bd70679d140b30500aa0944fb4bd376710c not found: ID does not exist" containerID="ebd6ae50c9fac16856fa03621b567bd70679d140b30500aa0944fb4bd376710c" Mar 09 18:50:36 crc kubenswrapper[4821]: I0309 18:50:36.106999 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebd6ae50c9fac16856fa03621b567bd70679d140b30500aa0944fb4bd376710c"} err="failed to get container status \"ebd6ae50c9fac16856fa03621b567bd70679d140b30500aa0944fb4bd376710c\": rpc error: code = NotFound desc = could not find container \"ebd6ae50c9fac16856fa03621b567bd70679d140b30500aa0944fb4bd376710c\": container with ID starting with ebd6ae50c9fac16856fa03621b567bd70679d140b30500aa0944fb4bd376710c not found: ID does not exist" Mar 09 18:50:36 crc kubenswrapper[4821]: I0309 18:50:36.107024 4821 scope.go:117] "RemoveContainer" containerID="3b42ff71e10480206c4e09d8c3c927e89e5d08c3ebc96e6556f3cee8127c8500" Mar 09 18:50:36 crc kubenswrapper[4821]: E0309 18:50:36.107306 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b42ff71e10480206c4e09d8c3c927e89e5d08c3ebc96e6556f3cee8127c8500\": container with ID starting with 3b42ff71e10480206c4e09d8c3c927e89e5d08c3ebc96e6556f3cee8127c8500 not found: ID does not exist" containerID="3b42ff71e10480206c4e09d8c3c927e89e5d08c3ebc96e6556f3cee8127c8500" Mar 09 18:50:36 crc kubenswrapper[4821]: I0309 18:50:36.107346 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b42ff71e10480206c4e09d8c3c927e89e5d08c3ebc96e6556f3cee8127c8500"} err="failed to get container status \"3b42ff71e10480206c4e09d8c3c927e89e5d08c3ebc96e6556f3cee8127c8500\": rpc error: code = NotFound desc = could not find container 
\"3b42ff71e10480206c4e09d8c3c927e89e5d08c3ebc96e6556f3cee8127c8500\": container with ID starting with 3b42ff71e10480206c4e09d8c3c927e89e5d08c3ebc96e6556f3cee8127c8500 not found: ID does not exist" Mar 09 18:50:36 crc kubenswrapper[4821]: I0309 18:50:36.198055 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccwrt\" (UniqueName: \"kubernetes.io/projected/ad530868-cab3-4c31-b91f-0f418d2e912f-kube-api-access-ccwrt\") pod \"ad530868-cab3-4c31-b91f-0f418d2e912f\" (UID: \"ad530868-cab3-4c31-b91f-0f418d2e912f\") " Mar 09 18:50:36 crc kubenswrapper[4821]: I0309 18:50:36.198206 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad530868-cab3-4c31-b91f-0f418d2e912f-utilities\") pod \"ad530868-cab3-4c31-b91f-0f418d2e912f\" (UID: \"ad530868-cab3-4c31-b91f-0f418d2e912f\") " Mar 09 18:50:36 crc kubenswrapper[4821]: I0309 18:50:36.198244 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad530868-cab3-4c31-b91f-0f418d2e912f-catalog-content\") pod \"ad530868-cab3-4c31-b91f-0f418d2e912f\" (UID: \"ad530868-cab3-4c31-b91f-0f418d2e912f\") " Mar 09 18:50:36 crc kubenswrapper[4821]: I0309 18:50:36.199149 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad530868-cab3-4c31-b91f-0f418d2e912f-utilities" (OuterVolumeSpecName: "utilities") pod "ad530868-cab3-4c31-b91f-0f418d2e912f" (UID: "ad530868-cab3-4c31-b91f-0f418d2e912f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:50:36 crc kubenswrapper[4821]: I0309 18:50:36.204548 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad530868-cab3-4c31-b91f-0f418d2e912f-kube-api-access-ccwrt" (OuterVolumeSpecName: "kube-api-access-ccwrt") pod "ad530868-cab3-4c31-b91f-0f418d2e912f" (UID: "ad530868-cab3-4c31-b91f-0f418d2e912f"). InnerVolumeSpecName "kube-api-access-ccwrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:50:36 crc kubenswrapper[4821]: I0309 18:50:36.228060 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad530868-cab3-4c31-b91f-0f418d2e912f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad530868-cab3-4c31-b91f-0f418d2e912f" (UID: "ad530868-cab3-4c31-b91f-0f418d2e912f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:50:36 crc kubenswrapper[4821]: I0309 18:50:36.300067 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccwrt\" (UniqueName: \"kubernetes.io/projected/ad530868-cab3-4c31-b91f-0f418d2e912f-kube-api-access-ccwrt\") on node \"crc\" DevicePath \"\"" Mar 09 18:50:36 crc kubenswrapper[4821]: I0309 18:50:36.300100 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad530868-cab3-4c31-b91f-0f418d2e912f-utilities\") on node \"crc\" DevicePath \"\"" Mar 09 18:50:36 crc kubenswrapper[4821]: I0309 18:50:36.300109 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad530868-cab3-4c31-b91f-0f418d2e912f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 09 18:50:36 crc kubenswrapper[4821]: I0309 18:50:36.353080 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tfgw6"] Mar 09 18:50:36 crc kubenswrapper[4821]: I0309 
18:50:36.359532 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tfgw6"] Mar 09 18:50:37 crc kubenswrapper[4821]: I0309 18:50:37.562536 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad530868-cab3-4c31-b91f-0f418d2e912f" path="/var/lib/kubelet/pods/ad530868-cab3-4c31-b91f-0f418d2e912f/volumes" Mar 09 18:50:46 crc kubenswrapper[4821]: I0309 18:50:46.588812 4821 scope.go:117] "RemoveContainer" containerID="a492dbc866ea3fd3ea0a7835c8cb32f7ec6ca8c6d2f59278edecc78c6abcdbdd" Mar 09 18:50:46 crc kubenswrapper[4821]: I0309 18:50:46.646982 4821 scope.go:117] "RemoveContainer" containerID="23a82451109136fa823272bdf003f710ca00199325843e11801d679ed0fb5eb0" Mar 09 18:50:55 crc kubenswrapper[4821]: I0309 18:50:55.702184 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-747tq"] Mar 09 18:50:55 crc kubenswrapper[4821]: E0309 18:50:55.705499 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad530868-cab3-4c31-b91f-0f418d2e912f" containerName="registry-server" Mar 09 18:50:55 crc kubenswrapper[4821]: I0309 18:50:55.705532 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad530868-cab3-4c31-b91f-0f418d2e912f" containerName="registry-server" Mar 09 18:50:55 crc kubenswrapper[4821]: E0309 18:50:55.705574 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad530868-cab3-4c31-b91f-0f418d2e912f" containerName="extract-utilities" Mar 09 18:50:55 crc kubenswrapper[4821]: I0309 18:50:55.705594 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad530868-cab3-4c31-b91f-0f418d2e912f" containerName="extract-utilities" Mar 09 18:50:55 crc kubenswrapper[4821]: E0309 18:50:55.705637 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad530868-cab3-4c31-b91f-0f418d2e912f" containerName="extract-content" Mar 09 18:50:55 crc kubenswrapper[4821]: I0309 18:50:55.705652 4821 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="ad530868-cab3-4c31-b91f-0f418d2e912f" containerName="extract-content" Mar 09 18:50:55 crc kubenswrapper[4821]: I0309 18:50:55.705999 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad530868-cab3-4c31-b91f-0f418d2e912f" containerName="registry-server" Mar 09 18:50:55 crc kubenswrapper[4821]: I0309 18:50:55.708557 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-747tq" Mar 09 18:50:55 crc kubenswrapper[4821]: I0309 18:50:55.715224 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-747tq"] Mar 09 18:50:55 crc kubenswrapper[4821]: I0309 18:50:55.818288 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c803e019-4a48-4d8f-8a12-f5407bbd4644-utilities\") pod \"certified-operators-747tq\" (UID: \"c803e019-4a48-4d8f-8a12-f5407bbd4644\") " pod="openshift-marketplace/certified-operators-747tq" Mar 09 18:50:55 crc kubenswrapper[4821]: I0309 18:50:55.818595 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sh8j\" (UniqueName: \"kubernetes.io/projected/c803e019-4a48-4d8f-8a12-f5407bbd4644-kube-api-access-4sh8j\") pod \"certified-operators-747tq\" (UID: \"c803e019-4a48-4d8f-8a12-f5407bbd4644\") " pod="openshift-marketplace/certified-operators-747tq" Mar 09 18:50:55 crc kubenswrapper[4821]: I0309 18:50:55.818741 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c803e019-4a48-4d8f-8a12-f5407bbd4644-catalog-content\") pod \"certified-operators-747tq\" (UID: \"c803e019-4a48-4d8f-8a12-f5407bbd4644\") " pod="openshift-marketplace/certified-operators-747tq" Mar 09 18:50:55 crc kubenswrapper[4821]: I0309 18:50:55.920192 4821 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c803e019-4a48-4d8f-8a12-f5407bbd4644-utilities\") pod \"certified-operators-747tq\" (UID: \"c803e019-4a48-4d8f-8a12-f5407bbd4644\") " pod="openshift-marketplace/certified-operators-747tq" Mar 09 18:50:55 crc kubenswrapper[4821]: I0309 18:50:55.920274 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sh8j\" (UniqueName: \"kubernetes.io/projected/c803e019-4a48-4d8f-8a12-f5407bbd4644-kube-api-access-4sh8j\") pod \"certified-operators-747tq\" (UID: \"c803e019-4a48-4d8f-8a12-f5407bbd4644\") " pod="openshift-marketplace/certified-operators-747tq" Mar 09 18:50:55 crc kubenswrapper[4821]: I0309 18:50:55.920369 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c803e019-4a48-4d8f-8a12-f5407bbd4644-catalog-content\") pod \"certified-operators-747tq\" (UID: \"c803e019-4a48-4d8f-8a12-f5407bbd4644\") " pod="openshift-marketplace/certified-operators-747tq" Mar 09 18:50:55 crc kubenswrapper[4821]: I0309 18:50:55.920798 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c803e019-4a48-4d8f-8a12-f5407bbd4644-catalog-content\") pod \"certified-operators-747tq\" (UID: \"c803e019-4a48-4d8f-8a12-f5407bbd4644\") " pod="openshift-marketplace/certified-operators-747tq" Mar 09 18:50:55 crc kubenswrapper[4821]: I0309 18:50:55.921000 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c803e019-4a48-4d8f-8a12-f5407bbd4644-utilities\") pod \"certified-operators-747tq\" (UID: \"c803e019-4a48-4d8f-8a12-f5407bbd4644\") " pod="openshift-marketplace/certified-operators-747tq" Mar 09 18:50:55 crc kubenswrapper[4821]: I0309 18:50:55.941244 4821 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4sh8j\" (UniqueName: \"kubernetes.io/projected/c803e019-4a48-4d8f-8a12-f5407bbd4644-kube-api-access-4sh8j\") pod \"certified-operators-747tq\" (UID: \"c803e019-4a48-4d8f-8a12-f5407bbd4644\") " pod="openshift-marketplace/certified-operators-747tq" Mar 09 18:50:56 crc kubenswrapper[4821]: I0309 18:50:56.046706 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-747tq" Mar 09 18:50:56 crc kubenswrapper[4821]: I0309 18:50:56.539103 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-747tq"] Mar 09 18:50:57 crc kubenswrapper[4821]: I0309 18:50:57.218775 4821 generic.go:334] "Generic (PLEG): container finished" podID="c803e019-4a48-4d8f-8a12-f5407bbd4644" containerID="f3a54bcab5006b6baeec5e1dbcc0553161624e1c3b54a2198eab1e145b5e966a" exitCode=0 Mar 09 18:50:57 crc kubenswrapper[4821]: I0309 18:50:57.218894 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-747tq" event={"ID":"c803e019-4a48-4d8f-8a12-f5407bbd4644","Type":"ContainerDied","Data":"f3a54bcab5006b6baeec5e1dbcc0553161624e1c3b54a2198eab1e145b5e966a"} Mar 09 18:50:57 crc kubenswrapper[4821]: I0309 18:50:57.219093 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-747tq" event={"ID":"c803e019-4a48-4d8f-8a12-f5407bbd4644","Type":"ContainerStarted","Data":"4ee97e18b863ffa589c239f36fc700d68a70bf2b4f7a1f0031ede92764b5bb42"} Mar 09 18:50:58 crc kubenswrapper[4821]: E0309 18:50:58.709306 4821 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc803e019_4a48_4d8f_8a12_f5407bbd4644.slice/crio-6be628b2a7ff8801cc423a53f6f6e9c34a3c6119160b40a5087034963c53fe37.scope\": RecentStats: unable to find data in memory cache]" Mar 09 
18:50:59 crc kubenswrapper[4821]: I0309 18:50:59.242480 4821 generic.go:334] "Generic (PLEG): container finished" podID="c803e019-4a48-4d8f-8a12-f5407bbd4644" containerID="6be628b2a7ff8801cc423a53f6f6e9c34a3c6119160b40a5087034963c53fe37" exitCode=0 Mar 09 18:50:59 crc kubenswrapper[4821]: I0309 18:50:59.242529 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-747tq" event={"ID":"c803e019-4a48-4d8f-8a12-f5407bbd4644","Type":"ContainerDied","Data":"6be628b2a7ff8801cc423a53f6f6e9c34a3c6119160b40a5087034963c53fe37"} Mar 09 18:50:59 crc kubenswrapper[4821]: I0309 18:50:59.914041 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 09 18:50:59 crc kubenswrapper[4821]: I0309 18:50:59.914366 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 09 18:50:59 crc kubenswrapper[4821]: I0309 18:50:59.914406 4821 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" Mar 09 18:50:59 crc kubenswrapper[4821]: I0309 18:50:59.915007 4821 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f5a172990046c01ffb98709622d17f70e9aa1b883bb21cac9e356cbc1c725a0e"} pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 09 18:50:59 crc kubenswrapper[4821]: 
I0309 18:50:59.915054 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" containerID="cri-o://f5a172990046c01ffb98709622d17f70e9aa1b883bb21cac9e356cbc1c725a0e" gracePeriod=600
Mar 09 18:51:00 crc kubenswrapper[4821]: I0309 18:51:00.251156 4821 generic.go:334] "Generic (PLEG): container finished" podID="3270571a-a484-4e66-8035-f43509b58add" containerID="f5a172990046c01ffb98709622d17f70e9aa1b883bb21cac9e356cbc1c725a0e" exitCode=0
Mar 09 18:51:00 crc kubenswrapper[4821]: I0309 18:51:00.251311 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" event={"ID":"3270571a-a484-4e66-8035-f43509b58add","Type":"ContainerDied","Data":"f5a172990046c01ffb98709622d17f70e9aa1b883bb21cac9e356cbc1c725a0e"}
Mar 09 18:51:00 crc kubenswrapper[4821]: I0309 18:51:00.251486 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" event={"ID":"3270571a-a484-4e66-8035-f43509b58add","Type":"ContainerStarted","Data":"26f65c4ab4c3d28a2350762ba3a27988c93777a5c666f215bce4b187591f0d4c"}
Mar 09 18:51:00 crc kubenswrapper[4821]: I0309 18:51:00.251510 4821 scope.go:117] "RemoveContainer" containerID="f7abf213239bc2e4cfdfaa92f1f80f4d716d029c1e408736f60b7c4d3d559952"
Mar 09 18:51:00 crc kubenswrapper[4821]: I0309 18:51:00.255448 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-747tq" event={"ID":"c803e019-4a48-4d8f-8a12-f5407bbd4644","Type":"ContainerStarted","Data":"a9b1219973c5f333562da16455087b5d0399a066e6a6adf709072ca71f4d4555"}
Mar 09 18:51:00 crc kubenswrapper[4821]: I0309 18:51:00.291624 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-747tq" podStartSLOduration=2.845835137 podStartE2EDuration="5.291609255s" podCreationTimestamp="2026-03-09 18:50:55 +0000 UTC" firstStartedPulling="2026-03-09 18:50:57.220936 +0000 UTC m=+1594.382311876" lastFinishedPulling="2026-03-09 18:50:59.666710138 +0000 UTC m=+1596.828085994" observedRunningTime="2026-03-09 18:51:00.288149611 +0000 UTC m=+1597.449525467" watchObservedRunningTime="2026-03-09 18:51:00.291609255 +0000 UTC m=+1597.452985111"
Mar 09 18:51:06 crc kubenswrapper[4821]: I0309 18:51:06.047420 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-747tq"
Mar 09 18:51:06 crc kubenswrapper[4821]: I0309 18:51:06.047987 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-747tq"
Mar 09 18:51:06 crc kubenswrapper[4821]: I0309 18:51:06.095397 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-747tq"
Mar 09 18:51:06 crc kubenswrapper[4821]: I0309 18:51:06.394608 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-747tq"
Mar 09 18:51:09 crc kubenswrapper[4821]: I0309 18:51:09.687723 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-747tq"]
Mar 09 18:51:09 crc kubenswrapper[4821]: I0309 18:51:09.689959 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-747tq" podUID="c803e019-4a48-4d8f-8a12-f5407bbd4644" containerName="registry-server" containerID="cri-o://a9b1219973c5f333562da16455087b5d0399a066e6a6adf709072ca71f4d4555" gracePeriod=2
Mar 09 18:51:10 crc kubenswrapper[4821]: I0309 18:51:10.118504 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-747tq"
Mar 09 18:51:10 crc kubenswrapper[4821]: I0309 18:51:10.261148 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sh8j\" (UniqueName: \"kubernetes.io/projected/c803e019-4a48-4d8f-8a12-f5407bbd4644-kube-api-access-4sh8j\") pod \"c803e019-4a48-4d8f-8a12-f5407bbd4644\" (UID: \"c803e019-4a48-4d8f-8a12-f5407bbd4644\") "
Mar 09 18:51:10 crc kubenswrapper[4821]: I0309 18:51:10.261275 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c803e019-4a48-4d8f-8a12-f5407bbd4644-catalog-content\") pod \"c803e019-4a48-4d8f-8a12-f5407bbd4644\" (UID: \"c803e019-4a48-4d8f-8a12-f5407bbd4644\") "
Mar 09 18:51:10 crc kubenswrapper[4821]: I0309 18:51:10.266556 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c803e019-4a48-4d8f-8a12-f5407bbd4644-utilities\") pod \"c803e019-4a48-4d8f-8a12-f5407bbd4644\" (UID: \"c803e019-4a48-4d8f-8a12-f5407bbd4644\") "
Mar 09 18:51:10 crc kubenswrapper[4821]: I0309 18:51:10.268172 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c803e019-4a48-4d8f-8a12-f5407bbd4644-utilities" (OuterVolumeSpecName: "utilities") pod "c803e019-4a48-4d8f-8a12-f5407bbd4644" (UID: "c803e019-4a48-4d8f-8a12-f5407bbd4644"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 18:51:10 crc kubenswrapper[4821]: I0309 18:51:10.272558 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c803e019-4a48-4d8f-8a12-f5407bbd4644-kube-api-access-4sh8j" (OuterVolumeSpecName: "kube-api-access-4sh8j") pod "c803e019-4a48-4d8f-8a12-f5407bbd4644" (UID: "c803e019-4a48-4d8f-8a12-f5407bbd4644"). InnerVolumeSpecName "kube-api-access-4sh8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:51:10 crc kubenswrapper[4821]: I0309 18:51:10.326614 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c803e019-4a48-4d8f-8a12-f5407bbd4644-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c803e019-4a48-4d8f-8a12-f5407bbd4644" (UID: "c803e019-4a48-4d8f-8a12-f5407bbd4644"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 18:51:10 crc kubenswrapper[4821]: I0309 18:51:10.346922 4821 generic.go:334] "Generic (PLEG): container finished" podID="c803e019-4a48-4d8f-8a12-f5407bbd4644" containerID="a9b1219973c5f333562da16455087b5d0399a066e6a6adf709072ca71f4d4555" exitCode=0
Mar 09 18:51:10 crc kubenswrapper[4821]: I0309 18:51:10.346999 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-747tq" event={"ID":"c803e019-4a48-4d8f-8a12-f5407bbd4644","Type":"ContainerDied","Data":"a9b1219973c5f333562da16455087b5d0399a066e6a6adf709072ca71f4d4555"}
Mar 09 18:51:10 crc kubenswrapper[4821]: I0309 18:51:10.347049 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-747tq" event={"ID":"c803e019-4a48-4d8f-8a12-f5407bbd4644","Type":"ContainerDied","Data":"4ee97e18b863ffa589c239f36fc700d68a70bf2b4f7a1f0031ede92764b5bb42"}
Mar 09 18:51:10 crc kubenswrapper[4821]: I0309 18:51:10.347074 4821 scope.go:117] "RemoveContainer" containerID="a9b1219973c5f333562da16455087b5d0399a066e6a6adf709072ca71f4d4555"
Mar 09 18:51:10 crc kubenswrapper[4821]: I0309 18:51:10.347294 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-747tq"
Mar 09 18:51:10 crc kubenswrapper[4821]: I0309 18:51:10.371535 4821 scope.go:117] "RemoveContainer" containerID="6be628b2a7ff8801cc423a53f6f6e9c34a3c6119160b40a5087034963c53fe37"
Mar 09 18:51:10 crc kubenswrapper[4821]: I0309 18:51:10.373868 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sh8j\" (UniqueName: \"kubernetes.io/projected/c803e019-4a48-4d8f-8a12-f5407bbd4644-kube-api-access-4sh8j\") on node \"crc\" DevicePath \"\""
Mar 09 18:51:10 crc kubenswrapper[4821]: I0309 18:51:10.373916 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c803e019-4a48-4d8f-8a12-f5407bbd4644-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 09 18:51:10 crc kubenswrapper[4821]: I0309 18:51:10.373928 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c803e019-4a48-4d8f-8a12-f5407bbd4644-utilities\") on node \"crc\" DevicePath \"\""
Mar 09 18:51:10 crc kubenswrapper[4821]: I0309 18:51:10.401986 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-747tq"]
Mar 09 18:51:10 crc kubenswrapper[4821]: I0309 18:51:10.409619 4821 scope.go:117] "RemoveContainer" containerID="f3a54bcab5006b6baeec5e1dbcc0553161624e1c3b54a2198eab1e145b5e966a"
Mar 09 18:51:10 crc kubenswrapper[4821]: I0309 18:51:10.413601 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-747tq"]
Mar 09 18:51:10 crc kubenswrapper[4821]: I0309 18:51:10.442401 4821 scope.go:117] "RemoveContainer" containerID="a9b1219973c5f333562da16455087b5d0399a066e6a6adf709072ca71f4d4555"
Mar 09 18:51:10 crc kubenswrapper[4821]: E0309 18:51:10.442852 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9b1219973c5f333562da16455087b5d0399a066e6a6adf709072ca71f4d4555\": container with ID starting with a9b1219973c5f333562da16455087b5d0399a066e6a6adf709072ca71f4d4555 not found: ID does not exist" containerID="a9b1219973c5f333562da16455087b5d0399a066e6a6adf709072ca71f4d4555"
Mar 09 18:51:10 crc kubenswrapper[4821]: I0309 18:51:10.442893 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9b1219973c5f333562da16455087b5d0399a066e6a6adf709072ca71f4d4555"} err="failed to get container status \"a9b1219973c5f333562da16455087b5d0399a066e6a6adf709072ca71f4d4555\": rpc error: code = NotFound desc = could not find container \"a9b1219973c5f333562da16455087b5d0399a066e6a6adf709072ca71f4d4555\": container with ID starting with a9b1219973c5f333562da16455087b5d0399a066e6a6adf709072ca71f4d4555 not found: ID does not exist"
Mar 09 18:51:10 crc kubenswrapper[4821]: I0309 18:51:10.442922 4821 scope.go:117] "RemoveContainer" containerID="6be628b2a7ff8801cc423a53f6f6e9c34a3c6119160b40a5087034963c53fe37"
Mar 09 18:51:10 crc kubenswrapper[4821]: E0309 18:51:10.443226 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6be628b2a7ff8801cc423a53f6f6e9c34a3c6119160b40a5087034963c53fe37\": container with ID starting with 6be628b2a7ff8801cc423a53f6f6e9c34a3c6119160b40a5087034963c53fe37 not found: ID does not exist" containerID="6be628b2a7ff8801cc423a53f6f6e9c34a3c6119160b40a5087034963c53fe37"
Mar 09 18:51:10 crc kubenswrapper[4821]: I0309 18:51:10.443249 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6be628b2a7ff8801cc423a53f6f6e9c34a3c6119160b40a5087034963c53fe37"} err="failed to get container status \"6be628b2a7ff8801cc423a53f6f6e9c34a3c6119160b40a5087034963c53fe37\": rpc error: code = NotFound desc = could not find container \"6be628b2a7ff8801cc423a53f6f6e9c34a3c6119160b40a5087034963c53fe37\": container with ID starting with 6be628b2a7ff8801cc423a53f6f6e9c34a3c6119160b40a5087034963c53fe37 not found: ID does not exist"
Mar 09 18:51:10 crc kubenswrapper[4821]: I0309 18:51:10.443262 4821 scope.go:117] "RemoveContainer" containerID="f3a54bcab5006b6baeec5e1dbcc0553161624e1c3b54a2198eab1e145b5e966a"
Mar 09 18:51:10 crc kubenswrapper[4821]: E0309 18:51:10.443701 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3a54bcab5006b6baeec5e1dbcc0553161624e1c3b54a2198eab1e145b5e966a\": container with ID starting with f3a54bcab5006b6baeec5e1dbcc0553161624e1c3b54a2198eab1e145b5e966a not found: ID does not exist" containerID="f3a54bcab5006b6baeec5e1dbcc0553161624e1c3b54a2198eab1e145b5e966a"
Mar 09 18:51:10 crc kubenswrapper[4821]: I0309 18:51:10.443722 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3a54bcab5006b6baeec5e1dbcc0553161624e1c3b54a2198eab1e145b5e966a"} err="failed to get container status \"f3a54bcab5006b6baeec5e1dbcc0553161624e1c3b54a2198eab1e145b5e966a\": rpc error: code = NotFound desc = could not find container \"f3a54bcab5006b6baeec5e1dbcc0553161624e1c3b54a2198eab1e145b5e966a\": container with ID starting with f3a54bcab5006b6baeec5e1dbcc0553161624e1c3b54a2198eab1e145b5e966a not found: ID does not exist"
Mar 09 18:51:11 crc kubenswrapper[4821]: I0309 18:51:11.560493 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c803e019-4a48-4d8f-8a12-f5407bbd4644" path="/var/lib/kubelet/pods/c803e019-4a48-4d8f-8a12-f5407bbd4644/volumes"
Mar 09 18:52:00 crc kubenswrapper[4821]: I0309 18:52:00.161243 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29551372-wwtqb"]
Mar 09 18:52:00 crc kubenswrapper[4821]: E0309 18:52:00.162104 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c803e019-4a48-4d8f-8a12-f5407bbd4644" containerName="extract-content"
Mar 09 18:52:00 crc kubenswrapper[4821]: I0309 18:52:00.162119 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c803e019-4a48-4d8f-8a12-f5407bbd4644" containerName="extract-content"
Mar 09 18:52:00 crc kubenswrapper[4821]: E0309 18:52:00.162144 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c803e019-4a48-4d8f-8a12-f5407bbd4644" containerName="extract-utilities"
Mar 09 18:52:00 crc kubenswrapper[4821]: I0309 18:52:00.162153 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c803e019-4a48-4d8f-8a12-f5407bbd4644" containerName="extract-utilities"
Mar 09 18:52:00 crc kubenswrapper[4821]: E0309 18:52:00.162170 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c803e019-4a48-4d8f-8a12-f5407bbd4644" containerName="registry-server"
Mar 09 18:52:00 crc kubenswrapper[4821]: I0309 18:52:00.162180 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c803e019-4a48-4d8f-8a12-f5407bbd4644" containerName="registry-server"
Mar 09 18:52:00 crc kubenswrapper[4821]: I0309 18:52:00.162384 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="c803e019-4a48-4d8f-8a12-f5407bbd4644" containerName="registry-server"
Mar 09 18:52:00 crc kubenswrapper[4821]: I0309 18:52:00.163056 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551372-wwtqb"
Mar 09 18:52:00 crc kubenswrapper[4821]: I0309 18:52:00.166810 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 09 18:52:00 crc kubenswrapper[4821]: I0309 18:52:00.167111 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-mxq7c"
Mar 09 18:52:00 crc kubenswrapper[4821]: I0309 18:52:00.167282 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 09 18:52:00 crc kubenswrapper[4821]: I0309 18:52:00.170233 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551372-wwtqb"]
Mar 09 18:52:00 crc kubenswrapper[4821]: I0309 18:52:00.317343 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwc27\" (UniqueName: \"kubernetes.io/projected/c32f0746-9ec8-499e-bbf4-6e4e6d72f9ad-kube-api-access-qwc27\") pod \"auto-csr-approver-29551372-wwtqb\" (UID: \"c32f0746-9ec8-499e-bbf4-6e4e6d72f9ad\") " pod="openshift-infra/auto-csr-approver-29551372-wwtqb"
Mar 09 18:52:00 crc kubenswrapper[4821]: I0309 18:52:00.419393 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwc27\" (UniqueName: \"kubernetes.io/projected/c32f0746-9ec8-499e-bbf4-6e4e6d72f9ad-kube-api-access-qwc27\") pod \"auto-csr-approver-29551372-wwtqb\" (UID: \"c32f0746-9ec8-499e-bbf4-6e4e6d72f9ad\") " pod="openshift-infra/auto-csr-approver-29551372-wwtqb"
Mar 09 18:52:00 crc kubenswrapper[4821]: I0309 18:52:00.456256 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwc27\" (UniqueName: \"kubernetes.io/projected/c32f0746-9ec8-499e-bbf4-6e4e6d72f9ad-kube-api-access-qwc27\") pod \"auto-csr-approver-29551372-wwtqb\" (UID: \"c32f0746-9ec8-499e-bbf4-6e4e6d72f9ad\") " pod="openshift-infra/auto-csr-approver-29551372-wwtqb"
Mar 09 18:52:00 crc kubenswrapper[4821]: I0309 18:52:00.487125 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551372-wwtqb"
Mar 09 18:52:00 crc kubenswrapper[4821]: I0309 18:52:00.972310 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551372-wwtqb"]
Mar 09 18:52:00 crc kubenswrapper[4821]: W0309 18:52:00.982994 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc32f0746_9ec8_499e_bbf4_6e4e6d72f9ad.slice/crio-dad918b8685880267223f92555735670e1026f1bfee7babfca830ae92b9dc7db WatchSource:0}: Error finding container dad918b8685880267223f92555735670e1026f1bfee7babfca830ae92b9dc7db: Status 404 returned error can't find the container with id dad918b8685880267223f92555735670e1026f1bfee7babfca830ae92b9dc7db
Mar 09 18:52:01 crc kubenswrapper[4821]: I0309 18:52:01.785493 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551372-wwtqb" event={"ID":"c32f0746-9ec8-499e-bbf4-6e4e6d72f9ad","Type":"ContainerStarted","Data":"dad918b8685880267223f92555735670e1026f1bfee7babfca830ae92b9dc7db"}
Mar 09 18:52:02 crc kubenswrapper[4821]: I0309 18:52:02.795236 4821 generic.go:334] "Generic (PLEG): container finished" podID="c32f0746-9ec8-499e-bbf4-6e4e6d72f9ad" containerID="930ca14184f97667d12dfb38c65348466252e7cb0ca165bb692664ac61ff4b0e" exitCode=0
Mar 09 18:52:02 crc kubenswrapper[4821]: I0309 18:52:02.795330 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551372-wwtqb" event={"ID":"c32f0746-9ec8-499e-bbf4-6e4e6d72f9ad","Type":"ContainerDied","Data":"930ca14184f97667d12dfb38c65348466252e7cb0ca165bb692664ac61ff4b0e"}
Mar 09 18:52:04 crc kubenswrapper[4821]: I0309 18:52:04.143339 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551372-wwtqb"
Mar 09 18:52:04 crc kubenswrapper[4821]: I0309 18:52:04.279686 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwc27\" (UniqueName: \"kubernetes.io/projected/c32f0746-9ec8-499e-bbf4-6e4e6d72f9ad-kube-api-access-qwc27\") pod \"c32f0746-9ec8-499e-bbf4-6e4e6d72f9ad\" (UID: \"c32f0746-9ec8-499e-bbf4-6e4e6d72f9ad\") "
Mar 09 18:52:04 crc kubenswrapper[4821]: I0309 18:52:04.285860 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c32f0746-9ec8-499e-bbf4-6e4e6d72f9ad-kube-api-access-qwc27" (OuterVolumeSpecName: "kube-api-access-qwc27") pod "c32f0746-9ec8-499e-bbf4-6e4e6d72f9ad" (UID: "c32f0746-9ec8-499e-bbf4-6e4e6d72f9ad"). InnerVolumeSpecName "kube-api-access-qwc27". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:52:04 crc kubenswrapper[4821]: I0309 18:52:04.382255 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwc27\" (UniqueName: \"kubernetes.io/projected/c32f0746-9ec8-499e-bbf4-6e4e6d72f9ad-kube-api-access-qwc27\") on node \"crc\" DevicePath \"\""
Mar 09 18:52:04 crc kubenswrapper[4821]: I0309 18:52:04.812239 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551372-wwtqb" event={"ID":"c32f0746-9ec8-499e-bbf4-6e4e6d72f9ad","Type":"ContainerDied","Data":"dad918b8685880267223f92555735670e1026f1bfee7babfca830ae92b9dc7db"}
Mar 09 18:52:04 crc kubenswrapper[4821]: I0309 18:52:04.812274 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dad918b8685880267223f92555735670e1026f1bfee7babfca830ae92b9dc7db"
Mar 09 18:52:04 crc kubenswrapper[4821]: I0309 18:52:04.812284 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551372-wwtqb"
Mar 09 18:52:05 crc kubenswrapper[4821]: I0309 18:52:05.228169 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29551366-x9zpn"]
Mar 09 18:52:05 crc kubenswrapper[4821]: I0309 18:52:05.235132 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29551366-x9zpn"]
Mar 09 18:52:05 crc kubenswrapper[4821]: I0309 18:52:05.579086 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3153d57a-d24a-493f-bd16-6b9761c2b41f" path="/var/lib/kubelet/pods/3153d57a-d24a-493f-bd16-6b9761c2b41f/volumes"
Mar 09 18:52:46 crc kubenswrapper[4821]: I0309 18:52:46.795875 4821 scope.go:117] "RemoveContainer" containerID="954f111b9ce725bd85b3e557135c2d410c1245cc35abe8a484e6a366abcebd65"
Mar 09 18:53:12 crc kubenswrapper[4821]: I0309 18:53:12.055547 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-db-create-vtbsg"]
Mar 09 18:53:12 crc kubenswrapper[4821]: I0309 18:53:12.063096 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/root-account-create-update-h56pf"]
Mar 09 18:53:12 crc kubenswrapper[4821]: I0309 18:53:12.070646 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-6d8a-account-create-update-vgzbq"]
Mar 09 18:53:12 crc kubenswrapper[4821]: I0309 18:53:12.078995 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-6d8a-account-create-update-vgzbq"]
Mar 09 18:53:12 crc kubenswrapper[4821]: I0309 18:53:12.085426 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-db-create-vtbsg"]
Mar 09 18:53:12 crc kubenswrapper[4821]: I0309 18:53:12.092480 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/root-account-create-update-h56pf"]
Mar 09 18:53:13 crc kubenswrapper[4821]: I0309 18:53:13.568297 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36dce1ce-dd03-42be-a792-5e198c405b1b" path="/var/lib/kubelet/pods/36dce1ce-dd03-42be-a792-5e198c405b1b/volumes"
Mar 09 18:53:13 crc kubenswrapper[4821]: I0309 18:53:13.569494 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cd3f65f-04e8-4e03-916b-9fa01bed65f5" path="/var/lib/kubelet/pods/3cd3f65f-04e8-4e03-916b-9fa01bed65f5/volumes"
Mar 09 18:53:13 crc kubenswrapper[4821]: I0309 18:53:13.570699 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ac7235a-20f5-458c-9d93-e7221cd8b83f" path="/var/lib/kubelet/pods/5ac7235a-20f5-458c-9d93-e7221cd8b83f/volumes"
Mar 09 18:53:29 crc kubenswrapper[4821]: I0309 18:53:29.913432 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 09 18:53:29 crc kubenswrapper[4821]: I0309 18:53:29.914029 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 09 18:53:46 crc kubenswrapper[4821]: I0309 18:53:46.862164 4821 scope.go:117] "RemoveContainer" containerID="a3095a86b96a71356ce2784b435599b07345066b982169670a5231fd6c82dea2"
Mar 09 18:53:46 crc kubenswrapper[4821]: I0309 18:53:46.888578 4821 scope.go:117] "RemoveContainer" containerID="1bc991cbb462326a9bcd11d53fd2157a64cd2acfd2aa68d90c073c9897d74650"
Mar 09 18:53:46 crc kubenswrapper[4821]: I0309 18:53:46.927985 4821 scope.go:117] "RemoveContainer" containerID="cab68cb26a7cfe543b61c64f1c12db11095115069ae7e5bdd48b1d602b6ab924"
Mar 09 18:53:51 crc kubenswrapper[4821]: I0309 18:53:51.042008 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-zxc2d"]
Mar 09 18:53:51 crc kubenswrapper[4821]: I0309 18:53:51.049068 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-zxc2d"]
Mar 09 18:53:51 crc kubenswrapper[4821]: I0309 18:53:51.562243 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4e8000e-e15d-4b86-8a92-9c35d297c60b" path="/var/lib/kubelet/pods/a4e8000e-e15d-4b86-8a92-9c35d297c60b/volumes"
Mar 09 18:53:59 crc kubenswrapper[4821]: I0309 18:53:59.913581 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 09 18:53:59 crc kubenswrapper[4821]: I0309 18:53:59.914164 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 09 18:54:00 crc kubenswrapper[4821]: I0309 18:54:00.167493 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29551374-8r7mx"]
Mar 09 18:54:00 crc kubenswrapper[4821]: E0309 18:54:00.168034 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c32f0746-9ec8-499e-bbf4-6e4e6d72f9ad" containerName="oc"
Mar 09 18:54:00 crc kubenswrapper[4821]: I0309 18:54:00.168065 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c32f0746-9ec8-499e-bbf4-6e4e6d72f9ad" containerName="oc"
Mar 09 18:54:00 crc kubenswrapper[4821]: I0309 18:54:00.168367 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="c32f0746-9ec8-499e-bbf4-6e4e6d72f9ad" containerName="oc"
Mar 09 18:54:00 crc kubenswrapper[4821]: I0309 18:54:00.169087 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551374-8r7mx"
Mar 09 18:54:00 crc kubenswrapper[4821]: I0309 18:54:00.171263 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-mxq7c"
Mar 09 18:54:00 crc kubenswrapper[4821]: I0309 18:54:00.172085 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 09 18:54:00 crc kubenswrapper[4821]: I0309 18:54:00.172833 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 09 18:54:00 crc kubenswrapper[4821]: I0309 18:54:00.182985 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551374-8r7mx"]
Mar 09 18:54:00 crc kubenswrapper[4821]: I0309 18:54:00.239684 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf6qv\" (UniqueName: \"kubernetes.io/projected/b738aef6-3e88-43f4-a093-a25a2062eb56-kube-api-access-gf6qv\") pod \"auto-csr-approver-29551374-8r7mx\" (UID: \"b738aef6-3e88-43f4-a093-a25a2062eb56\") " pod="openshift-infra/auto-csr-approver-29551374-8r7mx"
Mar 09 18:54:00 crc kubenswrapper[4821]: I0309 18:54:00.341246 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf6qv\" (UniqueName: \"kubernetes.io/projected/b738aef6-3e88-43f4-a093-a25a2062eb56-kube-api-access-gf6qv\") pod \"auto-csr-approver-29551374-8r7mx\" (UID: \"b738aef6-3e88-43f4-a093-a25a2062eb56\") " pod="openshift-infra/auto-csr-approver-29551374-8r7mx"
Mar 09 18:54:00 crc kubenswrapper[4821]: I0309 18:54:00.364776 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf6qv\" (UniqueName: \"kubernetes.io/projected/b738aef6-3e88-43f4-a093-a25a2062eb56-kube-api-access-gf6qv\") pod \"auto-csr-approver-29551374-8r7mx\" (UID: \"b738aef6-3e88-43f4-a093-a25a2062eb56\") " pod="openshift-infra/auto-csr-approver-29551374-8r7mx"
Mar 09 18:54:00 crc kubenswrapper[4821]: I0309 18:54:00.499054 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551374-8r7mx"
Mar 09 18:54:00 crc kubenswrapper[4821]: I0309 18:54:00.975777 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551374-8r7mx"]
Mar 09 18:54:01 crc kubenswrapper[4821]: I0309 18:54:01.828616 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551374-8r7mx" event={"ID":"b738aef6-3e88-43f4-a093-a25a2062eb56","Type":"ContainerStarted","Data":"9c49f9ff30891cfaba3408f2fe125314e84bc5f0886a44b99f7cdeb394fee3a5"}
Mar 09 18:54:02 crc kubenswrapper[4821]: I0309 18:54:02.837264 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551374-8r7mx" event={"ID":"b738aef6-3e88-43f4-a093-a25a2062eb56","Type":"ContainerStarted","Data":"668d14e1ea7d12452c64abee05352a18d88c0923d97f595bf0e029888cb58bb6"}
Mar 09 18:54:02 crc kubenswrapper[4821]: I0309 18:54:02.858916 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29551374-8r7mx" podStartSLOduration=1.644931557 podStartE2EDuration="2.858897387s" podCreationTimestamp="2026-03-09 18:54:00 +0000 UTC" firstStartedPulling="2026-03-09 18:54:00.987894343 +0000 UTC m=+1778.149270189" lastFinishedPulling="2026-03-09 18:54:02.201860163 +0000 UTC m=+1779.363236019" observedRunningTime="2026-03-09 18:54:02.855742942 +0000 UTC m=+1780.017118818" watchObservedRunningTime="2026-03-09 18:54:02.858897387 +0000 UTC m=+1780.020273243"
Mar 09 18:54:03 crc kubenswrapper[4821]: I0309 18:54:03.850787 4821 generic.go:334] "Generic (PLEG): container finished" podID="b738aef6-3e88-43f4-a093-a25a2062eb56" containerID="668d14e1ea7d12452c64abee05352a18d88c0923d97f595bf0e029888cb58bb6" exitCode=0
Mar 09 18:54:03 crc kubenswrapper[4821]: I0309 18:54:03.851521 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551374-8r7mx" event={"ID":"b738aef6-3e88-43f4-a093-a25a2062eb56","Type":"ContainerDied","Data":"668d14e1ea7d12452c64abee05352a18d88c0923d97f595bf0e029888cb58bb6"}
Mar 09 18:54:05 crc kubenswrapper[4821]: I0309 18:54:05.200655 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551374-8r7mx"
Mar 09 18:54:05 crc kubenswrapper[4821]: I0309 18:54:05.323560 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf6qv\" (UniqueName: \"kubernetes.io/projected/b738aef6-3e88-43f4-a093-a25a2062eb56-kube-api-access-gf6qv\") pod \"b738aef6-3e88-43f4-a093-a25a2062eb56\" (UID: \"b738aef6-3e88-43f4-a093-a25a2062eb56\") "
Mar 09 18:54:05 crc kubenswrapper[4821]: I0309 18:54:05.328168 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b738aef6-3e88-43f4-a093-a25a2062eb56-kube-api-access-gf6qv" (OuterVolumeSpecName: "kube-api-access-gf6qv") pod "b738aef6-3e88-43f4-a093-a25a2062eb56" (UID: "b738aef6-3e88-43f4-a093-a25a2062eb56"). InnerVolumeSpecName "kube-api-access-gf6qv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:54:05 crc kubenswrapper[4821]: I0309 18:54:05.426412 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf6qv\" (UniqueName: \"kubernetes.io/projected/b738aef6-3e88-43f4-a093-a25a2062eb56-kube-api-access-gf6qv\") on node \"crc\" DevicePath \"\""
Mar 09 18:54:05 crc kubenswrapper[4821]: I0309 18:54:05.869300 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551374-8r7mx" event={"ID":"b738aef6-3e88-43f4-a093-a25a2062eb56","Type":"ContainerDied","Data":"9c49f9ff30891cfaba3408f2fe125314e84bc5f0886a44b99f7cdeb394fee3a5"}
Mar 09 18:54:05 crc kubenswrapper[4821]: I0309 18:54:05.869371 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c49f9ff30891cfaba3408f2fe125314e84bc5f0886a44b99f7cdeb394fee3a5"
Mar 09 18:54:05 crc kubenswrapper[4821]: I0309 18:54:05.869344 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551374-8r7mx"
Mar 09 18:54:05 crc kubenswrapper[4821]: I0309 18:54:05.942336 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29551368-j84z2"]
Mar 09 18:54:05 crc kubenswrapper[4821]: I0309 18:54:05.948785 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29551368-j84z2"]
Mar 09 18:54:06 crc kubenswrapper[4821]: I0309 18:54:06.031717 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-vmfkf"]
Mar 09 18:54:06 crc kubenswrapper[4821]: I0309 18:54:06.045137 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-vmfkf"]
Mar 09 18:54:07 crc kubenswrapper[4821]: I0309 18:54:07.560737 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="683bdf8d-e740-47ae-92b0-cf247536c80d" path="/var/lib/kubelet/pods/683bdf8d-e740-47ae-92b0-cf247536c80d/volumes"
Mar 09 18:54:07 crc kubenswrapper[4821]: I0309 18:54:07.561385 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b095c716-ece8-4ee9-af0e-6f9778764b02" path="/var/lib/kubelet/pods/b095c716-ece8-4ee9-af0e-6f9778764b02/volumes"
Mar 09 18:54:29 crc kubenswrapper[4821]: I0309 18:54:29.914032 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 09 18:54:29 crc kubenswrapper[4821]: I0309 18:54:29.915439 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 09 18:54:29 crc kubenswrapper[4821]: I0309 18:54:29.915489 4821 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs"
Mar 09 18:54:29 crc kubenswrapper[4821]: I0309 18:54:29.916126 4821 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"26f65c4ab4c3d28a2350762ba3a27988c93777a5c666f215bce4b187591f0d4c"} pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 09 18:54:29 crc kubenswrapper[4821]: I0309 18:54:29.916177 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" containerID="cri-o://26f65c4ab4c3d28a2350762ba3a27988c93777a5c666f215bce4b187591f0d4c" gracePeriod=600
Mar 09 18:54:30 crc kubenswrapper[4821]: I0309 18:54:30.076706 4821 generic.go:334] "Generic (PLEG): container finished" podID="3270571a-a484-4e66-8035-f43509b58add" containerID="26f65c4ab4c3d28a2350762ba3a27988c93777a5c666f215bce4b187591f0d4c" exitCode=0
Mar 09 18:54:30 crc kubenswrapper[4821]: I0309 18:54:30.076743 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" event={"ID":"3270571a-a484-4e66-8035-f43509b58add","Type":"ContainerDied","Data":"26f65c4ab4c3d28a2350762ba3a27988c93777a5c666f215bce4b187591f0d4c"}
Mar 09 18:54:30 crc kubenswrapper[4821]: I0309 18:54:30.076854 4821 scope.go:117] "RemoveContainer" containerID="f5a172990046c01ffb98709622d17f70e9aa1b883bb21cac9e356cbc1c725a0e"
Mar 09 18:54:30 crc kubenswrapper[4821]: E0309 18:54:30.079875 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add"
Mar 09 18:54:31 crc kubenswrapper[4821]: I0309 18:54:31.089226 4821 scope.go:117] "RemoveContainer" containerID="26f65c4ab4c3d28a2350762ba3a27988c93777a5c666f215bce4b187591f0d4c"
Mar 09 18:54:31 crc kubenswrapper[4821]: E0309 18:54:31.089680 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add"
Mar 09 18:54:45 crc kubenswrapper[4821]: I0309 18:54:45.304030 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2nk6c"]
Mar 09 18:54:45 crc kubenswrapper[4821]: E0309 18:54:45.305538 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b738aef6-3e88-43f4-a093-a25a2062eb56" containerName="oc"
Mar 09 18:54:45 crc kubenswrapper[4821]: I0309 18:54:45.305563 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="b738aef6-3e88-43f4-a093-a25a2062eb56" containerName="oc"
Mar 09 18:54:45 crc kubenswrapper[4821]: I0309 18:54:45.305926 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="b738aef6-3e88-43f4-a093-a25a2062eb56" containerName="oc"
Mar 09 18:54:45 crc kubenswrapper[4821]: I0309 18:54:45.308119 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2nk6c"
Mar 09 18:54:45 crc kubenswrapper[4821]: I0309 18:54:45.328067 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2nk6c"]
Mar 09 18:54:45 crc kubenswrapper[4821]: I0309 18:54:45.425438 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96gcm\" (UniqueName: \"kubernetes.io/projected/f19c92fb-fc2c-4040-a053-1a29637de695-kube-api-access-96gcm\") pod \"community-operators-2nk6c\" (UID: \"f19c92fb-fc2c-4040-a053-1a29637de695\") " pod="openshift-marketplace/community-operators-2nk6c"
Mar 09 18:54:45 crc kubenswrapper[4821]: I0309 18:54:45.425489 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f19c92fb-fc2c-4040-a053-1a29637de695-catalog-content\") pod \"community-operators-2nk6c\" (UID: \"f19c92fb-fc2c-4040-a053-1a29637de695\")
" pod="openshift-marketplace/community-operators-2nk6c" Mar 09 18:54:45 crc kubenswrapper[4821]: I0309 18:54:45.425514 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f19c92fb-fc2c-4040-a053-1a29637de695-utilities\") pod \"community-operators-2nk6c\" (UID: \"f19c92fb-fc2c-4040-a053-1a29637de695\") " pod="openshift-marketplace/community-operators-2nk6c" Mar 09 18:54:45 crc kubenswrapper[4821]: I0309 18:54:45.526992 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96gcm\" (UniqueName: \"kubernetes.io/projected/f19c92fb-fc2c-4040-a053-1a29637de695-kube-api-access-96gcm\") pod \"community-operators-2nk6c\" (UID: \"f19c92fb-fc2c-4040-a053-1a29637de695\") " pod="openshift-marketplace/community-operators-2nk6c" Mar 09 18:54:45 crc kubenswrapper[4821]: I0309 18:54:45.527051 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f19c92fb-fc2c-4040-a053-1a29637de695-catalog-content\") pod \"community-operators-2nk6c\" (UID: \"f19c92fb-fc2c-4040-a053-1a29637de695\") " pod="openshift-marketplace/community-operators-2nk6c" Mar 09 18:54:45 crc kubenswrapper[4821]: I0309 18:54:45.527081 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f19c92fb-fc2c-4040-a053-1a29637de695-utilities\") pod \"community-operators-2nk6c\" (UID: \"f19c92fb-fc2c-4040-a053-1a29637de695\") " pod="openshift-marketplace/community-operators-2nk6c" Mar 09 18:54:45 crc kubenswrapper[4821]: I0309 18:54:45.527579 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f19c92fb-fc2c-4040-a053-1a29637de695-catalog-content\") pod \"community-operators-2nk6c\" (UID: \"f19c92fb-fc2c-4040-a053-1a29637de695\") " 
pod="openshift-marketplace/community-operators-2nk6c" Mar 09 18:54:45 crc kubenswrapper[4821]: I0309 18:54:45.527604 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f19c92fb-fc2c-4040-a053-1a29637de695-utilities\") pod \"community-operators-2nk6c\" (UID: \"f19c92fb-fc2c-4040-a053-1a29637de695\") " pod="openshift-marketplace/community-operators-2nk6c" Mar 09 18:54:45 crc kubenswrapper[4821]: I0309 18:54:45.550706 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96gcm\" (UniqueName: \"kubernetes.io/projected/f19c92fb-fc2c-4040-a053-1a29637de695-kube-api-access-96gcm\") pod \"community-operators-2nk6c\" (UID: \"f19c92fb-fc2c-4040-a053-1a29637de695\") " pod="openshift-marketplace/community-operators-2nk6c" Mar 09 18:54:45 crc kubenswrapper[4821]: I0309 18:54:45.551833 4821 scope.go:117] "RemoveContainer" containerID="26f65c4ab4c3d28a2350762ba3a27988c93777a5c666f215bce4b187591f0d4c" Mar 09 18:54:45 crc kubenswrapper[4821]: E0309 18:54:45.552211 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 18:54:45 crc kubenswrapper[4821]: I0309 18:54:45.635830 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2nk6c" Mar 09 18:54:46 crc kubenswrapper[4821]: I0309 18:54:46.192426 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2nk6c"] Mar 09 18:54:46 crc kubenswrapper[4821]: I0309 18:54:46.230768 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nk6c" event={"ID":"f19c92fb-fc2c-4040-a053-1a29637de695","Type":"ContainerStarted","Data":"4836e2e408453c3d32007248e0fe54f99ba5a70065277bbdb89551a76394a805"} Mar 09 18:54:47 crc kubenswrapper[4821]: I0309 18:54:47.011102 4821 scope.go:117] "RemoveContainer" containerID="9291ce03bff390325369731633ef6444fa49b6adbba43dd70cd13ecfa905a257" Mar 09 18:54:47 crc kubenswrapper[4821]: I0309 18:54:47.051967 4821 scope.go:117] "RemoveContainer" containerID="d26854201afe367d5084025dc07c8b58b6ef54b7cc6f9187bee5b482c8320949" Mar 09 18:54:47 crc kubenswrapper[4821]: I0309 18:54:47.091781 4821 scope.go:117] "RemoveContainer" containerID="50dc4a4d31a7953caf05ae63a28d63d96c3cd8fc307165ba7aa0e31fe872643a" Mar 09 18:54:47 crc kubenswrapper[4821]: I0309 18:54:47.238906 4821 generic.go:334] "Generic (PLEG): container finished" podID="f19c92fb-fc2c-4040-a053-1a29637de695" containerID="24f17bea7f74c40f2a7126a5d18c5ac0cc34c1b71fea1b9463984059c2ff3d3a" exitCode=0 Mar 09 18:54:47 crc kubenswrapper[4821]: I0309 18:54:47.238964 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nk6c" event={"ID":"f19c92fb-fc2c-4040-a053-1a29637de695","Type":"ContainerDied","Data":"24f17bea7f74c40f2a7126a5d18c5ac0cc34c1b71fea1b9463984059c2ff3d3a"} Mar 09 18:54:48 crc kubenswrapper[4821]: I0309 18:54:48.250015 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nk6c" 
event={"ID":"f19c92fb-fc2c-4040-a053-1a29637de695","Type":"ContainerStarted","Data":"325e0d3b1d449667bb910cce24abb34a29206ba9a52069119511aebaca5b1eac"} Mar 09 18:54:49 crc kubenswrapper[4821]: I0309 18:54:49.263825 4821 generic.go:334] "Generic (PLEG): container finished" podID="f19c92fb-fc2c-4040-a053-1a29637de695" containerID="325e0d3b1d449667bb910cce24abb34a29206ba9a52069119511aebaca5b1eac" exitCode=0 Mar 09 18:54:49 crc kubenswrapper[4821]: I0309 18:54:49.263937 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nk6c" event={"ID":"f19c92fb-fc2c-4040-a053-1a29637de695","Type":"ContainerDied","Data":"325e0d3b1d449667bb910cce24abb34a29206ba9a52069119511aebaca5b1eac"} Mar 09 18:54:50 crc kubenswrapper[4821]: I0309 18:54:50.273610 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nk6c" event={"ID":"f19c92fb-fc2c-4040-a053-1a29637de695","Type":"ContainerStarted","Data":"a5a9ab5a6f1e03f51b6922fff568c78a41048294228bbf2c8f397ca5b37a3c7b"} Mar 09 18:54:50 crc kubenswrapper[4821]: I0309 18:54:50.294917 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2nk6c" podStartSLOduration=2.830681984 podStartE2EDuration="5.294902068s" podCreationTimestamp="2026-03-09 18:54:45 +0000 UTC" firstStartedPulling="2026-03-09 18:54:47.240425343 +0000 UTC m=+1824.401801199" lastFinishedPulling="2026-03-09 18:54:49.704645427 +0000 UTC m=+1826.866021283" observedRunningTime="2026-03-09 18:54:50.29092154 +0000 UTC m=+1827.452297386" watchObservedRunningTime="2026-03-09 18:54:50.294902068 +0000 UTC m=+1827.456277924" Mar 09 18:54:55 crc kubenswrapper[4821]: I0309 18:54:55.636695 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2nk6c" Mar 09 18:54:55 crc kubenswrapper[4821]: I0309 18:54:55.637408 4821 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-2nk6c" Mar 09 18:54:55 crc kubenswrapper[4821]: I0309 18:54:55.697603 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2nk6c" Mar 09 18:54:56 crc kubenswrapper[4821]: I0309 18:54:56.396680 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2nk6c" Mar 09 18:54:59 crc kubenswrapper[4821]: I0309 18:54:59.551873 4821 scope.go:117] "RemoveContainer" containerID="26f65c4ab4c3d28a2350762ba3a27988c93777a5c666f215bce4b187591f0d4c" Mar 09 18:54:59 crc kubenswrapper[4821]: E0309 18:54:59.552597 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 18:54:59 crc kubenswrapper[4821]: I0309 18:54:59.685586 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2nk6c"] Mar 09 18:54:59 crc kubenswrapper[4821]: I0309 18:54:59.685922 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2nk6c" podUID="f19c92fb-fc2c-4040-a053-1a29637de695" containerName="registry-server" containerID="cri-o://a5a9ab5a6f1e03f51b6922fff568c78a41048294228bbf2c8f397ca5b37a3c7b" gracePeriod=2 Mar 09 18:55:00 crc kubenswrapper[4821]: I0309 18:55:00.216599 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2nk6c" Mar 09 18:55:00 crc kubenswrapper[4821]: I0309 18:55:00.376196 4821 generic.go:334] "Generic (PLEG): container finished" podID="f19c92fb-fc2c-4040-a053-1a29637de695" containerID="a5a9ab5a6f1e03f51b6922fff568c78a41048294228bbf2c8f397ca5b37a3c7b" exitCode=0 Mar 09 18:55:00 crc kubenswrapper[4821]: I0309 18:55:00.376282 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nk6c" event={"ID":"f19c92fb-fc2c-4040-a053-1a29637de695","Type":"ContainerDied","Data":"a5a9ab5a6f1e03f51b6922fff568c78a41048294228bbf2c8f397ca5b37a3c7b"} Mar 09 18:55:00 crc kubenswrapper[4821]: I0309 18:55:00.376380 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nk6c" event={"ID":"f19c92fb-fc2c-4040-a053-1a29637de695","Type":"ContainerDied","Data":"4836e2e408453c3d32007248e0fe54f99ba5a70065277bbdb89551a76394a805"} Mar 09 18:55:00 crc kubenswrapper[4821]: I0309 18:55:00.376404 4821 scope.go:117] "RemoveContainer" containerID="a5a9ab5a6f1e03f51b6922fff568c78a41048294228bbf2c8f397ca5b37a3c7b" Mar 09 18:55:00 crc kubenswrapper[4821]: I0309 18:55:00.376258 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2nk6c" Mar 09 18:55:00 crc kubenswrapper[4821]: I0309 18:55:00.411509 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f19c92fb-fc2c-4040-a053-1a29637de695-catalog-content\") pod \"f19c92fb-fc2c-4040-a053-1a29637de695\" (UID: \"f19c92fb-fc2c-4040-a053-1a29637de695\") " Mar 09 18:55:00 crc kubenswrapper[4821]: I0309 18:55:00.411652 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96gcm\" (UniqueName: \"kubernetes.io/projected/f19c92fb-fc2c-4040-a053-1a29637de695-kube-api-access-96gcm\") pod \"f19c92fb-fc2c-4040-a053-1a29637de695\" (UID: \"f19c92fb-fc2c-4040-a053-1a29637de695\") " Mar 09 18:55:00 crc kubenswrapper[4821]: I0309 18:55:00.411781 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f19c92fb-fc2c-4040-a053-1a29637de695-utilities\") pod \"f19c92fb-fc2c-4040-a053-1a29637de695\" (UID: \"f19c92fb-fc2c-4040-a053-1a29637de695\") " Mar 09 18:55:00 crc kubenswrapper[4821]: I0309 18:55:00.412709 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f19c92fb-fc2c-4040-a053-1a29637de695-utilities" (OuterVolumeSpecName: "utilities") pod "f19c92fb-fc2c-4040-a053-1a29637de695" (UID: "f19c92fb-fc2c-4040-a053-1a29637de695"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:55:00 crc kubenswrapper[4821]: I0309 18:55:00.416799 4821 scope.go:117] "RemoveContainer" containerID="325e0d3b1d449667bb910cce24abb34a29206ba9a52069119511aebaca5b1eac" Mar 09 18:55:00 crc kubenswrapper[4821]: I0309 18:55:00.421415 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f19c92fb-fc2c-4040-a053-1a29637de695-kube-api-access-96gcm" (OuterVolumeSpecName: "kube-api-access-96gcm") pod "f19c92fb-fc2c-4040-a053-1a29637de695" (UID: "f19c92fb-fc2c-4040-a053-1a29637de695"). InnerVolumeSpecName "kube-api-access-96gcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:55:00 crc kubenswrapper[4821]: I0309 18:55:00.472031 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f19c92fb-fc2c-4040-a053-1a29637de695-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f19c92fb-fc2c-4040-a053-1a29637de695" (UID: "f19c92fb-fc2c-4040-a053-1a29637de695"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 18:55:00 crc kubenswrapper[4821]: I0309 18:55:00.494514 4821 scope.go:117] "RemoveContainer" containerID="24f17bea7f74c40f2a7126a5d18c5ac0cc34c1b71fea1b9463984059c2ff3d3a" Mar 09 18:55:00 crc kubenswrapper[4821]: I0309 18:55:00.513457 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f19c92fb-fc2c-4040-a053-1a29637de695-utilities\") on node \"crc\" DevicePath \"\"" Mar 09 18:55:00 crc kubenswrapper[4821]: I0309 18:55:00.513486 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f19c92fb-fc2c-4040-a053-1a29637de695-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 09 18:55:00 crc kubenswrapper[4821]: I0309 18:55:00.513496 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96gcm\" (UniqueName: \"kubernetes.io/projected/f19c92fb-fc2c-4040-a053-1a29637de695-kube-api-access-96gcm\") on node \"crc\" DevicePath \"\"" Mar 09 18:55:00 crc kubenswrapper[4821]: I0309 18:55:00.531857 4821 scope.go:117] "RemoveContainer" containerID="a5a9ab5a6f1e03f51b6922fff568c78a41048294228bbf2c8f397ca5b37a3c7b" Mar 09 18:55:00 crc kubenswrapper[4821]: E0309 18:55:00.532349 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5a9ab5a6f1e03f51b6922fff568c78a41048294228bbf2c8f397ca5b37a3c7b\": container with ID starting with a5a9ab5a6f1e03f51b6922fff568c78a41048294228bbf2c8f397ca5b37a3c7b not found: ID does not exist" containerID="a5a9ab5a6f1e03f51b6922fff568c78a41048294228bbf2c8f397ca5b37a3c7b" Mar 09 18:55:00 crc kubenswrapper[4821]: I0309 18:55:00.532388 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5a9ab5a6f1e03f51b6922fff568c78a41048294228bbf2c8f397ca5b37a3c7b"} err="failed to get container status 
\"a5a9ab5a6f1e03f51b6922fff568c78a41048294228bbf2c8f397ca5b37a3c7b\": rpc error: code = NotFound desc = could not find container \"a5a9ab5a6f1e03f51b6922fff568c78a41048294228bbf2c8f397ca5b37a3c7b\": container with ID starting with a5a9ab5a6f1e03f51b6922fff568c78a41048294228bbf2c8f397ca5b37a3c7b not found: ID does not exist" Mar 09 18:55:00 crc kubenswrapper[4821]: I0309 18:55:00.532456 4821 scope.go:117] "RemoveContainer" containerID="325e0d3b1d449667bb910cce24abb34a29206ba9a52069119511aebaca5b1eac" Mar 09 18:55:00 crc kubenswrapper[4821]: E0309 18:55:00.532840 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"325e0d3b1d449667bb910cce24abb34a29206ba9a52069119511aebaca5b1eac\": container with ID starting with 325e0d3b1d449667bb910cce24abb34a29206ba9a52069119511aebaca5b1eac not found: ID does not exist" containerID="325e0d3b1d449667bb910cce24abb34a29206ba9a52069119511aebaca5b1eac" Mar 09 18:55:00 crc kubenswrapper[4821]: I0309 18:55:00.532860 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"325e0d3b1d449667bb910cce24abb34a29206ba9a52069119511aebaca5b1eac"} err="failed to get container status \"325e0d3b1d449667bb910cce24abb34a29206ba9a52069119511aebaca5b1eac\": rpc error: code = NotFound desc = could not find container \"325e0d3b1d449667bb910cce24abb34a29206ba9a52069119511aebaca5b1eac\": container with ID starting with 325e0d3b1d449667bb910cce24abb34a29206ba9a52069119511aebaca5b1eac not found: ID does not exist" Mar 09 18:55:00 crc kubenswrapper[4821]: I0309 18:55:00.532874 4821 scope.go:117] "RemoveContainer" containerID="24f17bea7f74c40f2a7126a5d18c5ac0cc34c1b71fea1b9463984059c2ff3d3a" Mar 09 18:55:00 crc kubenswrapper[4821]: E0309 18:55:00.533116 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"24f17bea7f74c40f2a7126a5d18c5ac0cc34c1b71fea1b9463984059c2ff3d3a\": container with ID starting with 24f17bea7f74c40f2a7126a5d18c5ac0cc34c1b71fea1b9463984059c2ff3d3a not found: ID does not exist" containerID="24f17bea7f74c40f2a7126a5d18c5ac0cc34c1b71fea1b9463984059c2ff3d3a" Mar 09 18:55:00 crc kubenswrapper[4821]: I0309 18:55:00.533134 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24f17bea7f74c40f2a7126a5d18c5ac0cc34c1b71fea1b9463984059c2ff3d3a"} err="failed to get container status \"24f17bea7f74c40f2a7126a5d18c5ac0cc34c1b71fea1b9463984059c2ff3d3a\": rpc error: code = NotFound desc = could not find container \"24f17bea7f74c40f2a7126a5d18c5ac0cc34c1b71fea1b9463984059c2ff3d3a\": container with ID starting with 24f17bea7f74c40f2a7126a5d18c5ac0cc34c1b71fea1b9463984059c2ff3d3a not found: ID does not exist" Mar 09 18:55:00 crc kubenswrapper[4821]: I0309 18:55:00.736465 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2nk6c"] Mar 09 18:55:00 crc kubenswrapper[4821]: I0309 18:55:00.765466 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2nk6c"] Mar 09 18:55:01 crc kubenswrapper[4821]: I0309 18:55:01.565149 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f19c92fb-fc2c-4040-a053-1a29637de695" path="/var/lib/kubelet/pods/f19c92fb-fc2c-4040-a053-1a29637de695/volumes" Mar 09 18:55:04 crc kubenswrapper[4821]: I0309 18:55:04.062752 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-hdpct"] Mar 09 18:55:04 crc kubenswrapper[4821]: I0309 18:55:04.074608 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-hdpct"] Mar 09 18:55:05 crc kubenswrapper[4821]: I0309 18:55:05.030612 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-13c5-account-create-update-8l44r"] Mar 
09 18:55:05 crc kubenswrapper[4821]: I0309 18:55:05.037616 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-13c5-account-create-update-8l44r"] Mar 09 18:55:05 crc kubenswrapper[4821]: I0309 18:55:05.561888 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="371d60af-a86d-4bc8-a4a4-e0e97b6620ad" path="/var/lib/kubelet/pods/371d60af-a86d-4bc8-a4a4-e0e97b6620ad/volumes" Mar 09 18:55:05 crc kubenswrapper[4821]: I0309 18:55:05.562491 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5693552-2476-4bcf-a972-e60391565adf" path="/var/lib/kubelet/pods/b5693552-2476-4bcf-a972-e60391565adf/volumes" Mar 09 18:55:14 crc kubenswrapper[4821]: I0309 18:55:14.551841 4821 scope.go:117] "RemoveContainer" containerID="26f65c4ab4c3d28a2350762ba3a27988c93777a5c666f215bce4b187591f0d4c" Mar 09 18:55:14 crc kubenswrapper[4821]: E0309 18:55:14.552652 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 18:55:29 crc kubenswrapper[4821]: I0309 18:55:29.551715 4821 scope.go:117] "RemoveContainer" containerID="26f65c4ab4c3d28a2350762ba3a27988c93777a5c666f215bce4b187591f0d4c" Mar 09 18:55:29 crc kubenswrapper[4821]: E0309 18:55:29.552729 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" 
podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 18:55:41 crc kubenswrapper[4821]: I0309 18:55:41.551705 4821 scope.go:117] "RemoveContainer" containerID="26f65c4ab4c3d28a2350762ba3a27988c93777a5c666f215bce4b187591f0d4c" Mar 09 18:55:41 crc kubenswrapper[4821]: E0309 18:55:41.552693 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 18:55:45 crc kubenswrapper[4821]: I0309 18:55:45.128549 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-whc2t"] Mar 09 18:55:45 crc kubenswrapper[4821]: I0309 18:55:45.142991 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-whc2t"] Mar 09 18:55:45 crc kubenswrapper[4821]: I0309 18:55:45.560404 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="785fc44b-c186-4374-8023-229ca8f897d1" path="/var/lib/kubelet/pods/785fc44b-c186-4374-8023-229ca8f897d1/volumes" Mar 09 18:55:47 crc kubenswrapper[4821]: I0309 18:55:47.168361 4821 scope.go:117] "RemoveContainer" containerID="87285f5f18854effb3df35aa17e969de672d6ba8399d5e89ad78339702f555f6" Mar 09 18:55:47 crc kubenswrapper[4821]: I0309 18:55:47.211256 4821 scope.go:117] "RemoveContainer" containerID="fb7fe433ccab648dc88048674a31b26f7330cc848c0c63044a54bece83339fa6" Mar 09 18:55:47 crc kubenswrapper[4821]: I0309 18:55:47.252750 4821 scope.go:117] "RemoveContainer" containerID="b1a2388f585301116925028424683876bec66ab44315d2e0630e6de88271437b" Mar 09 18:55:52 crc kubenswrapper[4821]: I0309 18:55:52.551928 4821 scope.go:117] "RemoveContainer" 
containerID="26f65c4ab4c3d28a2350762ba3a27988c93777a5c666f215bce4b187591f0d4c" Mar 09 18:55:52 crc kubenswrapper[4821]: E0309 18:55:52.552447 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 18:56:00 crc kubenswrapper[4821]: I0309 18:56:00.147856 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29551376-zk95z"] Mar 09 18:56:00 crc kubenswrapper[4821]: E0309 18:56:00.149545 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f19c92fb-fc2c-4040-a053-1a29637de695" containerName="extract-utilities" Mar 09 18:56:00 crc kubenswrapper[4821]: I0309 18:56:00.149631 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="f19c92fb-fc2c-4040-a053-1a29637de695" containerName="extract-utilities" Mar 09 18:56:00 crc kubenswrapper[4821]: E0309 18:56:00.149769 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f19c92fb-fc2c-4040-a053-1a29637de695" containerName="registry-server" Mar 09 18:56:00 crc kubenswrapper[4821]: I0309 18:56:00.149885 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="f19c92fb-fc2c-4040-a053-1a29637de695" containerName="registry-server" Mar 09 18:56:00 crc kubenswrapper[4821]: E0309 18:56:00.149943 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f19c92fb-fc2c-4040-a053-1a29637de695" containerName="extract-content" Mar 09 18:56:00 crc kubenswrapper[4821]: I0309 18:56:00.150000 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="f19c92fb-fc2c-4040-a053-1a29637de695" containerName="extract-content" Mar 09 18:56:00 crc kubenswrapper[4821]: I0309 
18:56:00.150218 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="f19c92fb-fc2c-4040-a053-1a29637de695" containerName="registry-server" Mar 09 18:56:00 crc kubenswrapper[4821]: I0309 18:56:00.150866 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551376-zk95z" Mar 09 18:56:00 crc kubenswrapper[4821]: I0309 18:56:00.153112 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 09 18:56:00 crc kubenswrapper[4821]: I0309 18:56:00.153231 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 09 18:56:00 crc kubenswrapper[4821]: I0309 18:56:00.153412 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-mxq7c" Mar 09 18:56:00 crc kubenswrapper[4821]: I0309 18:56:00.165035 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551376-zk95z"] Mar 09 18:56:00 crc kubenswrapper[4821]: I0309 18:56:00.240967 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbdqn\" (UniqueName: \"kubernetes.io/projected/fa84d5e7-6e13-4c0b-b03e-7671041bfbad-kube-api-access-cbdqn\") pod \"auto-csr-approver-29551376-zk95z\" (UID: \"fa84d5e7-6e13-4c0b-b03e-7671041bfbad\") " pod="openshift-infra/auto-csr-approver-29551376-zk95z" Mar 09 18:56:00 crc kubenswrapper[4821]: I0309 18:56:00.342798 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbdqn\" (UniqueName: \"kubernetes.io/projected/fa84d5e7-6e13-4c0b-b03e-7671041bfbad-kube-api-access-cbdqn\") pod \"auto-csr-approver-29551376-zk95z\" (UID: \"fa84d5e7-6e13-4c0b-b03e-7671041bfbad\") " pod="openshift-infra/auto-csr-approver-29551376-zk95z" Mar 09 18:56:00 crc kubenswrapper[4821]: I0309 18:56:00.361842 4821 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbdqn\" (UniqueName: \"kubernetes.io/projected/fa84d5e7-6e13-4c0b-b03e-7671041bfbad-kube-api-access-cbdqn\") pod \"auto-csr-approver-29551376-zk95z\" (UID: \"fa84d5e7-6e13-4c0b-b03e-7671041bfbad\") " pod="openshift-infra/auto-csr-approver-29551376-zk95z" Mar 09 18:56:00 crc kubenswrapper[4821]: I0309 18:56:00.468175 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551376-zk95z" Mar 09 18:56:00 crc kubenswrapper[4821]: I0309 18:56:00.934870 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551376-zk95z"] Mar 09 18:56:00 crc kubenswrapper[4821]: I0309 18:56:00.944099 4821 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 09 18:56:01 crc kubenswrapper[4821]: I0309 18:56:01.900228 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551376-zk95z" event={"ID":"fa84d5e7-6e13-4c0b-b03e-7671041bfbad","Type":"ContainerStarted","Data":"801b48a38311a2145b8dbd8b03eabb4c295048577ad6d8831483dd7a207ab041"} Mar 09 18:56:02 crc kubenswrapper[4821]: I0309 18:56:02.910701 4821 generic.go:334] "Generic (PLEG): container finished" podID="fa84d5e7-6e13-4c0b-b03e-7671041bfbad" containerID="86d6ffb67b91499d7e98a6f5a064323b49e2a69542b7cff1ce38c9028a374ea5" exitCode=0 Mar 09 18:56:02 crc kubenswrapper[4821]: I0309 18:56:02.910793 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551376-zk95z" event={"ID":"fa84d5e7-6e13-4c0b-b03e-7671041bfbad","Type":"ContainerDied","Data":"86d6ffb67b91499d7e98a6f5a064323b49e2a69542b7cff1ce38c9028a374ea5"} Mar 09 18:56:04 crc kubenswrapper[4821]: I0309 18:56:04.344875 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29551376-zk95z" Mar 09 18:56:04 crc kubenswrapper[4821]: I0309 18:56:04.425820 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbdqn\" (UniqueName: \"kubernetes.io/projected/fa84d5e7-6e13-4c0b-b03e-7671041bfbad-kube-api-access-cbdqn\") pod \"fa84d5e7-6e13-4c0b-b03e-7671041bfbad\" (UID: \"fa84d5e7-6e13-4c0b-b03e-7671041bfbad\") " Mar 09 18:56:04 crc kubenswrapper[4821]: I0309 18:56:04.431411 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa84d5e7-6e13-4c0b-b03e-7671041bfbad-kube-api-access-cbdqn" (OuterVolumeSpecName: "kube-api-access-cbdqn") pod "fa84d5e7-6e13-4c0b-b03e-7671041bfbad" (UID: "fa84d5e7-6e13-4c0b-b03e-7671041bfbad"). InnerVolumeSpecName "kube-api-access-cbdqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:56:04 crc kubenswrapper[4821]: I0309 18:56:04.527159 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbdqn\" (UniqueName: \"kubernetes.io/projected/fa84d5e7-6e13-4c0b-b03e-7671041bfbad-kube-api-access-cbdqn\") on node \"crc\" DevicePath \"\"" Mar 09 18:56:04 crc kubenswrapper[4821]: I0309 18:56:04.551812 4821 scope.go:117] "RemoveContainer" containerID="26f65c4ab4c3d28a2350762ba3a27988c93777a5c666f215bce4b187591f0d4c" Mar 09 18:56:04 crc kubenswrapper[4821]: E0309 18:56:04.552123 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 18:56:04 crc kubenswrapper[4821]: I0309 18:56:04.946455 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29551376-zk95z" event={"ID":"fa84d5e7-6e13-4c0b-b03e-7671041bfbad","Type":"ContainerDied","Data":"801b48a38311a2145b8dbd8b03eabb4c295048577ad6d8831483dd7a207ab041"} Mar 09 18:56:04 crc kubenswrapper[4821]: I0309 18:56:04.946506 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="801b48a38311a2145b8dbd8b03eabb4c295048577ad6d8831483dd7a207ab041" Mar 09 18:56:04 crc kubenswrapper[4821]: I0309 18:56:04.946584 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551376-zk95z" Mar 09 18:56:05 crc kubenswrapper[4821]: E0309 18:56:05.082714 4821 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa84d5e7_6e13_4c0b_b03e_7671041bfbad.slice\": RecentStats: unable to find data in memory cache]" Mar 09 18:56:05 crc kubenswrapper[4821]: I0309 18:56:05.414832 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29551370-m8phc"] Mar 09 18:56:05 crc kubenswrapper[4821]: I0309 18:56:05.422611 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29551370-m8phc"] Mar 09 18:56:05 crc kubenswrapper[4821]: I0309 18:56:05.562639 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a24d26da-b3a7-4e07-a176-a690eec98e40" path="/var/lib/kubelet/pods/a24d26da-b3a7-4e07-a176-a690eec98e40/volumes" Mar 09 18:56:16 crc kubenswrapper[4821]: I0309 18:56:16.551608 4821 scope.go:117] "RemoveContainer" containerID="26f65c4ab4c3d28a2350762ba3a27988c93777a5c666f215bce4b187591f0d4c" Mar 09 18:56:16 crc kubenswrapper[4821]: E0309 18:56:16.552449 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 18:56:28 crc kubenswrapper[4821]: I0309 18:56:28.551765 4821 scope.go:117] "RemoveContainer" containerID="26f65c4ab4c3d28a2350762ba3a27988c93777a5c666f215bce4b187591f0d4c" Mar 09 18:56:28 crc kubenswrapper[4821]: E0309 18:56:28.552576 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 18:56:43 crc kubenswrapper[4821]: I0309 18:56:43.559391 4821 scope.go:117] "RemoveContainer" containerID="26f65c4ab4c3d28a2350762ba3a27988c93777a5c666f215bce4b187591f0d4c" Mar 09 18:56:43 crc kubenswrapper[4821]: E0309 18:56:43.560215 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 18:56:47 crc kubenswrapper[4821]: I0309 18:56:47.339750 4821 scope.go:117] "RemoveContainer" containerID="eedf43a31728d687a5a8326336dfa3af538712ed0c4e70e9e145c2897334016d" Mar 09 18:56:54 crc kubenswrapper[4821]: I0309 18:56:54.551435 4821 scope.go:117] "RemoveContainer" containerID="26f65c4ab4c3d28a2350762ba3a27988c93777a5c666f215bce4b187591f0d4c" Mar 09 18:56:54 crc kubenswrapper[4821]: 
E0309 18:56:54.552570 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 18:57:05 crc kubenswrapper[4821]: I0309 18:57:05.551898 4821 scope.go:117] "RemoveContainer" containerID="26f65c4ab4c3d28a2350762ba3a27988c93777a5c666f215bce4b187591f0d4c" Mar 09 18:57:05 crc kubenswrapper[4821]: E0309 18:57:05.552539 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 18:57:18 crc kubenswrapper[4821]: I0309 18:57:18.552725 4821 scope.go:117] "RemoveContainer" containerID="26f65c4ab4c3d28a2350762ba3a27988c93777a5c666f215bce4b187591f0d4c" Mar 09 18:57:18 crc kubenswrapper[4821]: E0309 18:57:18.553634 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 18:57:31 crc kubenswrapper[4821]: I0309 18:57:31.551919 4821 scope.go:117] "RemoveContainer" containerID="26f65c4ab4c3d28a2350762ba3a27988c93777a5c666f215bce4b187591f0d4c" Mar 09 18:57:31 crc 
kubenswrapper[4821]: E0309 18:57:31.553249 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 18:57:44 crc kubenswrapper[4821]: I0309 18:57:44.551105 4821 scope.go:117] "RemoveContainer" containerID="26f65c4ab4c3d28a2350762ba3a27988c93777a5c666f215bce4b187591f0d4c" Mar 09 18:57:44 crc kubenswrapper[4821]: E0309 18:57:44.551921 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 18:57:58 crc kubenswrapper[4821]: I0309 18:57:58.551745 4821 scope.go:117] "RemoveContainer" containerID="26f65c4ab4c3d28a2350762ba3a27988c93777a5c666f215bce4b187591f0d4c" Mar 09 18:57:58 crc kubenswrapper[4821]: E0309 18:57:58.554368 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 18:58:00 crc kubenswrapper[4821]: I0309 18:58:00.156240 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29551378-vhjcn"] Mar 09 
18:58:00 crc kubenswrapper[4821]: E0309 18:58:00.156937 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa84d5e7-6e13-4c0b-b03e-7671041bfbad" containerName="oc" Mar 09 18:58:00 crc kubenswrapper[4821]: I0309 18:58:00.156954 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa84d5e7-6e13-4c0b-b03e-7671041bfbad" containerName="oc" Mar 09 18:58:00 crc kubenswrapper[4821]: I0309 18:58:00.157143 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa84d5e7-6e13-4c0b-b03e-7671041bfbad" containerName="oc" Mar 09 18:58:00 crc kubenswrapper[4821]: I0309 18:58:00.157828 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551378-vhjcn" Mar 09 18:58:00 crc kubenswrapper[4821]: I0309 18:58:00.160599 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 09 18:58:00 crc kubenswrapper[4821]: I0309 18:58:00.161500 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 09 18:58:00 crc kubenswrapper[4821]: I0309 18:58:00.161766 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-mxq7c" Mar 09 18:58:00 crc kubenswrapper[4821]: I0309 18:58:00.171364 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551378-vhjcn"] Mar 09 18:58:00 crc kubenswrapper[4821]: I0309 18:58:00.318304 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4wjs\" (UniqueName: \"kubernetes.io/projected/7e460f32-c47b-41a4-a5d6-cb5fa14e77bf-kube-api-access-q4wjs\") pod \"auto-csr-approver-29551378-vhjcn\" (UID: \"7e460f32-c47b-41a4-a5d6-cb5fa14e77bf\") " pod="openshift-infra/auto-csr-approver-29551378-vhjcn" Mar 09 18:58:00 crc kubenswrapper[4821]: I0309 18:58:00.419591 4821 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-q4wjs\" (UniqueName: \"kubernetes.io/projected/7e460f32-c47b-41a4-a5d6-cb5fa14e77bf-kube-api-access-q4wjs\") pod \"auto-csr-approver-29551378-vhjcn\" (UID: \"7e460f32-c47b-41a4-a5d6-cb5fa14e77bf\") " pod="openshift-infra/auto-csr-approver-29551378-vhjcn" Mar 09 18:58:00 crc kubenswrapper[4821]: I0309 18:58:00.443845 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4wjs\" (UniqueName: \"kubernetes.io/projected/7e460f32-c47b-41a4-a5d6-cb5fa14e77bf-kube-api-access-q4wjs\") pod \"auto-csr-approver-29551378-vhjcn\" (UID: \"7e460f32-c47b-41a4-a5d6-cb5fa14e77bf\") " pod="openshift-infra/auto-csr-approver-29551378-vhjcn" Mar 09 18:58:00 crc kubenswrapper[4821]: I0309 18:58:00.481444 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551378-vhjcn" Mar 09 18:58:00 crc kubenswrapper[4821]: I0309 18:58:00.919152 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551378-vhjcn"] Mar 09 18:58:01 crc kubenswrapper[4821]: I0309 18:58:01.033248 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551378-vhjcn" event={"ID":"7e460f32-c47b-41a4-a5d6-cb5fa14e77bf","Type":"ContainerStarted","Data":"2d6dcb77e6ccf7ef114921e0688a06ad13ce422a80e31081b5b6f9c958167036"} Mar 09 18:58:03 crc kubenswrapper[4821]: I0309 18:58:03.050708 4821 generic.go:334] "Generic (PLEG): container finished" podID="7e460f32-c47b-41a4-a5d6-cb5fa14e77bf" containerID="b02a02eb436b674bd00ea22ae9e3359d4dde69c8264e004a5c80feab8339b097" exitCode=0 Mar 09 18:58:03 crc kubenswrapper[4821]: I0309 18:58:03.050796 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551378-vhjcn" 
event={"ID":"7e460f32-c47b-41a4-a5d6-cb5fa14e77bf","Type":"ContainerDied","Data":"b02a02eb436b674bd00ea22ae9e3359d4dde69c8264e004a5c80feab8339b097"} Mar 09 18:58:04 crc kubenswrapper[4821]: I0309 18:58:04.433939 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551378-vhjcn" Mar 09 18:58:04 crc kubenswrapper[4821]: I0309 18:58:04.595717 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4wjs\" (UniqueName: \"kubernetes.io/projected/7e460f32-c47b-41a4-a5d6-cb5fa14e77bf-kube-api-access-q4wjs\") pod \"7e460f32-c47b-41a4-a5d6-cb5fa14e77bf\" (UID: \"7e460f32-c47b-41a4-a5d6-cb5fa14e77bf\") " Mar 09 18:58:04 crc kubenswrapper[4821]: I0309 18:58:04.600780 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e460f32-c47b-41a4-a5d6-cb5fa14e77bf-kube-api-access-q4wjs" (OuterVolumeSpecName: "kube-api-access-q4wjs") pod "7e460f32-c47b-41a4-a5d6-cb5fa14e77bf" (UID: "7e460f32-c47b-41a4-a5d6-cb5fa14e77bf"). InnerVolumeSpecName "kube-api-access-q4wjs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 18:58:04 crc kubenswrapper[4821]: I0309 18:58:04.699195 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4wjs\" (UniqueName: \"kubernetes.io/projected/7e460f32-c47b-41a4-a5d6-cb5fa14e77bf-kube-api-access-q4wjs\") on node \"crc\" DevicePath \"\"" Mar 09 18:58:05 crc kubenswrapper[4821]: I0309 18:58:05.067271 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551378-vhjcn" event={"ID":"7e460f32-c47b-41a4-a5d6-cb5fa14e77bf","Type":"ContainerDied","Data":"2d6dcb77e6ccf7ef114921e0688a06ad13ce422a80e31081b5b6f9c958167036"} Mar 09 18:58:05 crc kubenswrapper[4821]: I0309 18:58:05.067351 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d6dcb77e6ccf7ef114921e0688a06ad13ce422a80e31081b5b6f9c958167036" Mar 09 18:58:05 crc kubenswrapper[4821]: I0309 18:58:05.067354 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551378-vhjcn" Mar 09 18:58:05 crc kubenswrapper[4821]: I0309 18:58:05.510126 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29551372-wwtqb"] Mar 09 18:58:05 crc kubenswrapper[4821]: I0309 18:58:05.518927 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29551372-wwtqb"] Mar 09 18:58:05 crc kubenswrapper[4821]: I0309 18:58:05.563071 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c32f0746-9ec8-499e-bbf4-6e4e6d72f9ad" path="/var/lib/kubelet/pods/c32f0746-9ec8-499e-bbf4-6e4e6d72f9ad/volumes" Mar 09 18:58:09 crc kubenswrapper[4821]: I0309 18:58:09.551532 4821 scope.go:117] "RemoveContainer" containerID="26f65c4ab4c3d28a2350762ba3a27988c93777a5c666f215bce4b187591f0d4c" Mar 09 18:58:09 crc kubenswrapper[4821]: E0309 18:58:09.552307 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 18:58:20 crc kubenswrapper[4821]: I0309 18:58:20.552029 4821 scope.go:117] "RemoveContainer" containerID="26f65c4ab4c3d28a2350762ba3a27988c93777a5c666f215bce4b187591f0d4c" Mar 09 18:58:20 crc kubenswrapper[4821]: E0309 18:58:20.552991 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 18:58:31 crc kubenswrapper[4821]: I0309 18:58:31.552145 4821 scope.go:117] "RemoveContainer" containerID="26f65c4ab4c3d28a2350762ba3a27988c93777a5c666f215bce4b187591f0d4c" Mar 09 18:58:31 crc kubenswrapper[4821]: E0309 18:58:31.552807 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 18:58:43 crc kubenswrapper[4821]: I0309 18:58:43.557893 4821 scope.go:117] "RemoveContainer" containerID="26f65c4ab4c3d28a2350762ba3a27988c93777a5c666f215bce4b187591f0d4c" Mar 09 18:58:43 crc kubenswrapper[4821]: E0309 18:58:43.558723 4821 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 18:58:47 crc kubenswrapper[4821]: I0309 18:58:47.437178 4821 scope.go:117] "RemoveContainer" containerID="930ca14184f97667d12dfb38c65348466252e7cb0ca165bb692664ac61ff4b0e" Mar 09 18:58:55 crc kubenswrapper[4821]: I0309 18:58:55.551812 4821 scope.go:117] "RemoveContainer" containerID="26f65c4ab4c3d28a2350762ba3a27988c93777a5c666f215bce4b187591f0d4c" Mar 09 18:58:55 crc kubenswrapper[4821]: E0309 18:58:55.552549 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 18:59:06 crc kubenswrapper[4821]: I0309 18:59:06.551811 4821 scope.go:117] "RemoveContainer" containerID="26f65c4ab4c3d28a2350762ba3a27988c93777a5c666f215bce4b187591f0d4c" Mar 09 18:59:06 crc kubenswrapper[4821]: E0309 18:59:06.552649 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 18:59:16 crc kubenswrapper[4821]: I0309 18:59:16.097681 4821 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bl4wb"] Mar 09 18:59:16 crc kubenswrapper[4821]: E0309 18:59:16.099724 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e460f32-c47b-41a4-a5d6-cb5fa14e77bf" containerName="oc" Mar 09 18:59:16 crc kubenswrapper[4821]: I0309 18:59:16.099832 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e460f32-c47b-41a4-a5d6-cb5fa14e77bf" containerName="oc" Mar 09 18:59:16 crc kubenswrapper[4821]: I0309 18:59:16.100094 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e460f32-c47b-41a4-a5d6-cb5fa14e77bf" containerName="oc" Mar 09 18:59:16 crc kubenswrapper[4821]: I0309 18:59:16.102987 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bl4wb" Mar 09 18:59:16 crc kubenswrapper[4821]: I0309 18:59:16.107185 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bl4wb"] Mar 09 18:59:16 crc kubenswrapper[4821]: I0309 18:59:16.222930 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f58b078c-1476-4ed2-8875-834a2b6b005e-catalog-content\") pod \"redhat-operators-bl4wb\" (UID: \"f58b078c-1476-4ed2-8875-834a2b6b005e\") " pod="openshift-marketplace/redhat-operators-bl4wb" Mar 09 18:59:16 crc kubenswrapper[4821]: I0309 18:59:16.222981 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f58b078c-1476-4ed2-8875-834a2b6b005e-utilities\") pod \"redhat-operators-bl4wb\" (UID: \"f58b078c-1476-4ed2-8875-834a2b6b005e\") " pod="openshift-marketplace/redhat-operators-bl4wb" Mar 09 18:59:16 crc kubenswrapper[4821]: I0309 18:59:16.223023 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jdph\" 
(UniqueName: \"kubernetes.io/projected/f58b078c-1476-4ed2-8875-834a2b6b005e-kube-api-access-5jdph\") pod \"redhat-operators-bl4wb\" (UID: \"f58b078c-1476-4ed2-8875-834a2b6b005e\") " pod="openshift-marketplace/redhat-operators-bl4wb" Mar 09 18:59:16 crc kubenswrapper[4821]: I0309 18:59:16.324222 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jdph\" (UniqueName: \"kubernetes.io/projected/f58b078c-1476-4ed2-8875-834a2b6b005e-kube-api-access-5jdph\") pod \"redhat-operators-bl4wb\" (UID: \"f58b078c-1476-4ed2-8875-834a2b6b005e\") " pod="openshift-marketplace/redhat-operators-bl4wb" Mar 09 18:59:16 crc kubenswrapper[4821]: I0309 18:59:16.324408 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f58b078c-1476-4ed2-8875-834a2b6b005e-catalog-content\") pod \"redhat-operators-bl4wb\" (UID: \"f58b078c-1476-4ed2-8875-834a2b6b005e\") " pod="openshift-marketplace/redhat-operators-bl4wb" Mar 09 18:59:16 crc kubenswrapper[4821]: I0309 18:59:16.324447 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f58b078c-1476-4ed2-8875-834a2b6b005e-utilities\") pod \"redhat-operators-bl4wb\" (UID: \"f58b078c-1476-4ed2-8875-834a2b6b005e\") " pod="openshift-marketplace/redhat-operators-bl4wb" Mar 09 18:59:16 crc kubenswrapper[4821]: I0309 18:59:16.325000 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f58b078c-1476-4ed2-8875-834a2b6b005e-utilities\") pod \"redhat-operators-bl4wb\" (UID: \"f58b078c-1476-4ed2-8875-834a2b6b005e\") " pod="openshift-marketplace/redhat-operators-bl4wb" Mar 09 18:59:16 crc kubenswrapper[4821]: I0309 18:59:16.325645 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f58b078c-1476-4ed2-8875-834a2b6b005e-catalog-content\") pod \"redhat-operators-bl4wb\" (UID: \"f58b078c-1476-4ed2-8875-834a2b6b005e\") " pod="openshift-marketplace/redhat-operators-bl4wb" Mar 09 18:59:16 crc kubenswrapper[4821]: I0309 18:59:16.344205 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jdph\" (UniqueName: \"kubernetes.io/projected/f58b078c-1476-4ed2-8875-834a2b6b005e-kube-api-access-5jdph\") pod \"redhat-operators-bl4wb\" (UID: \"f58b078c-1476-4ed2-8875-834a2b6b005e\") " pod="openshift-marketplace/redhat-operators-bl4wb" Mar 09 18:59:16 crc kubenswrapper[4821]: I0309 18:59:16.437033 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bl4wb" Mar 09 18:59:16 crc kubenswrapper[4821]: I0309 18:59:16.898062 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bl4wb"] Mar 09 18:59:17 crc kubenswrapper[4821]: I0309 18:59:17.553145 4821 scope.go:117] "RemoveContainer" containerID="26f65c4ab4c3d28a2350762ba3a27988c93777a5c666f215bce4b187591f0d4c" Mar 09 18:59:17 crc kubenswrapper[4821]: E0309 18:59:17.554763 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 18:59:17 crc kubenswrapper[4821]: I0309 18:59:17.657561 4821 generic.go:334] "Generic (PLEG): container finished" podID="f58b078c-1476-4ed2-8875-834a2b6b005e" containerID="2dd5077e906e9e991f817123baf59d64fb3b666bf535bb6e5c9c961f4adf0b62" exitCode=0 Mar 09 18:59:17 crc kubenswrapper[4821]: I0309 18:59:17.657807 4821 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-operators-bl4wb" event={"ID":"f58b078c-1476-4ed2-8875-834a2b6b005e","Type":"ContainerDied","Data":"2dd5077e906e9e991f817123baf59d64fb3b666bf535bb6e5c9c961f4adf0b62"} Mar 09 18:59:17 crc kubenswrapper[4821]: I0309 18:59:17.657891 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl4wb" event={"ID":"f58b078c-1476-4ed2-8875-834a2b6b005e","Type":"ContainerStarted","Data":"fbb5b9155927785db32d078bc6e4514588b2e6c29971aefa34c1e491d71e0eb4"} Mar 09 18:59:19 crc kubenswrapper[4821]: I0309 18:59:19.690150 4821 generic.go:334] "Generic (PLEG): container finished" podID="f58b078c-1476-4ed2-8875-834a2b6b005e" containerID="2a1eee31d864961aa07ca47f4616d2c1d9be1a0219238c612ac275633e14bcbc" exitCode=0 Mar 09 18:59:19 crc kubenswrapper[4821]: I0309 18:59:19.690235 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl4wb" event={"ID":"f58b078c-1476-4ed2-8875-834a2b6b005e","Type":"ContainerDied","Data":"2a1eee31d864961aa07ca47f4616d2c1d9be1a0219238c612ac275633e14bcbc"} Mar 09 18:59:20 crc kubenswrapper[4821]: I0309 18:59:20.703615 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl4wb" event={"ID":"f58b078c-1476-4ed2-8875-834a2b6b005e","Type":"ContainerStarted","Data":"df715c497e415b73f99586a351dfe2d916390d7a5c91622c0022fb5d3e211975"} Mar 09 18:59:20 crc kubenswrapper[4821]: I0309 18:59:20.731079 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bl4wb" podStartSLOduration=2.299573219 podStartE2EDuration="4.73104565s" podCreationTimestamp="2026-03-09 18:59:16 +0000 UTC" firstStartedPulling="2026-03-09 18:59:17.659620033 +0000 UTC m=+2094.820995889" lastFinishedPulling="2026-03-09 18:59:20.091092464 +0000 UTC m=+2097.252468320" observedRunningTime="2026-03-09 18:59:20.722873348 +0000 UTC m=+2097.884249204" 
watchObservedRunningTime="2026-03-09 18:59:20.73104565 +0000 UTC m=+2097.892421506"
Mar 09 18:59:26 crc kubenswrapper[4821]: I0309 18:59:26.437158 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bl4wb"
Mar 09 18:59:26 crc kubenswrapper[4821]: I0309 18:59:26.437713 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bl4wb"
Mar 09 18:59:27 crc kubenswrapper[4821]: I0309 18:59:27.498000 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bl4wb" podUID="f58b078c-1476-4ed2-8875-834a2b6b005e" containerName="registry-server" probeResult="failure" output=<
Mar 09 18:59:27 crc kubenswrapper[4821]: 	timeout: failed to connect service ":50051" within 1s
Mar 09 18:59:27 crc kubenswrapper[4821]:  >
Mar 09 18:59:31 crc kubenswrapper[4821]: I0309 18:59:31.551705 4821 scope.go:117] "RemoveContainer" containerID="26f65c4ab4c3d28a2350762ba3a27988c93777a5c666f215bce4b187591f0d4c"
Mar 09 18:59:32 crc kubenswrapper[4821]: I0309 18:59:32.341000 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" event={"ID":"3270571a-a484-4e66-8035-f43509b58add","Type":"ContainerStarted","Data":"f6f924e73c0d96463d23d74c00c469a04a44dafcfce63f7df228acf99a8d74b6"}
Mar 09 18:59:36 crc kubenswrapper[4821]: I0309 18:59:36.486788 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bl4wb"
Mar 09 18:59:36 crc kubenswrapper[4821]: I0309 18:59:36.545468 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bl4wb"
Mar 09 18:59:40 crc kubenswrapper[4821]: I0309 18:59:40.082312 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bl4wb"]
Mar 09 18:59:40 crc kubenswrapper[4821]: I0309 18:59:40.083294 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bl4wb" podUID="f58b078c-1476-4ed2-8875-834a2b6b005e" containerName="registry-server" containerID="cri-o://df715c497e415b73f99586a351dfe2d916390d7a5c91622c0022fb5d3e211975" gracePeriod=2
Mar 09 18:59:40 crc kubenswrapper[4821]: I0309 18:59:40.419028 4821 generic.go:334] "Generic (PLEG): container finished" podID="f58b078c-1476-4ed2-8875-834a2b6b005e" containerID="df715c497e415b73f99586a351dfe2d916390d7a5c91622c0022fb5d3e211975" exitCode=0
Mar 09 18:59:40 crc kubenswrapper[4821]: I0309 18:59:40.419260 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl4wb" event={"ID":"f58b078c-1476-4ed2-8875-834a2b6b005e","Type":"ContainerDied","Data":"df715c497e415b73f99586a351dfe2d916390d7a5c91622c0022fb5d3e211975"}
Mar 09 18:59:40 crc kubenswrapper[4821]: I0309 18:59:40.553082 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bl4wb"
Mar 09 18:59:40 crc kubenswrapper[4821]: I0309 18:59:40.708594 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f58b078c-1476-4ed2-8875-834a2b6b005e-utilities\") pod \"f58b078c-1476-4ed2-8875-834a2b6b005e\" (UID: \"f58b078c-1476-4ed2-8875-834a2b6b005e\") "
Mar 09 18:59:40 crc kubenswrapper[4821]: I0309 18:59:40.708661 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jdph\" (UniqueName: \"kubernetes.io/projected/f58b078c-1476-4ed2-8875-834a2b6b005e-kube-api-access-5jdph\") pod \"f58b078c-1476-4ed2-8875-834a2b6b005e\" (UID: \"f58b078c-1476-4ed2-8875-834a2b6b005e\") "
Mar 09 18:59:40 crc kubenswrapper[4821]: I0309 18:59:40.708883 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f58b078c-1476-4ed2-8875-834a2b6b005e-catalog-content\") pod \"f58b078c-1476-4ed2-8875-834a2b6b005e\" (UID: \"f58b078c-1476-4ed2-8875-834a2b6b005e\") "
Mar 09 18:59:40 crc kubenswrapper[4821]: I0309 18:59:40.710269 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f58b078c-1476-4ed2-8875-834a2b6b005e-utilities" (OuterVolumeSpecName: "utilities") pod "f58b078c-1476-4ed2-8875-834a2b6b005e" (UID: "f58b078c-1476-4ed2-8875-834a2b6b005e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 18:59:40 crc kubenswrapper[4821]: I0309 18:59:40.715033 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f58b078c-1476-4ed2-8875-834a2b6b005e-kube-api-access-5jdph" (OuterVolumeSpecName: "kube-api-access-5jdph") pod "f58b078c-1476-4ed2-8875-834a2b6b005e" (UID: "f58b078c-1476-4ed2-8875-834a2b6b005e"). InnerVolumeSpecName "kube-api-access-5jdph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 18:59:40 crc kubenswrapper[4821]: I0309 18:59:40.810707 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f58b078c-1476-4ed2-8875-834a2b6b005e-utilities\") on node \"crc\" DevicePath \"\""
Mar 09 18:59:40 crc kubenswrapper[4821]: I0309 18:59:40.810981 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jdph\" (UniqueName: \"kubernetes.io/projected/f58b078c-1476-4ed2-8875-834a2b6b005e-kube-api-access-5jdph\") on node \"crc\" DevicePath \"\""
Mar 09 18:59:40 crc kubenswrapper[4821]: I0309 18:59:40.873888 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f58b078c-1476-4ed2-8875-834a2b6b005e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f58b078c-1476-4ed2-8875-834a2b6b005e" (UID: "f58b078c-1476-4ed2-8875-834a2b6b005e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 18:59:40 crc kubenswrapper[4821]: I0309 18:59:40.912798 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f58b078c-1476-4ed2-8875-834a2b6b005e-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 09 18:59:41 crc kubenswrapper[4821]: I0309 18:59:41.431852 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl4wb" event={"ID":"f58b078c-1476-4ed2-8875-834a2b6b005e","Type":"ContainerDied","Data":"fbb5b9155927785db32d078bc6e4514588b2e6c29971aefa34c1e491d71e0eb4"}
Mar 09 18:59:41 crc kubenswrapper[4821]: I0309 18:59:41.431905 4821 scope.go:117] "RemoveContainer" containerID="df715c497e415b73f99586a351dfe2d916390d7a5c91622c0022fb5d3e211975"
Mar 09 18:59:41 crc kubenswrapper[4821]: I0309 18:59:41.431918 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bl4wb"
Mar 09 18:59:41 crc kubenswrapper[4821]: I0309 18:59:41.451852 4821 scope.go:117] "RemoveContainer" containerID="2a1eee31d864961aa07ca47f4616d2c1d9be1a0219238c612ac275633e14bcbc"
Mar 09 18:59:41 crc kubenswrapper[4821]: I0309 18:59:41.464732 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bl4wb"]
Mar 09 18:59:41 crc kubenswrapper[4821]: I0309 18:59:41.472909 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bl4wb"]
Mar 09 18:59:41 crc kubenswrapper[4821]: I0309 18:59:41.493105 4821 scope.go:117] "RemoveContainer" containerID="2dd5077e906e9e991f817123baf59d64fb3b666bf535bb6e5c9c961f4adf0b62"
Mar 09 18:59:41 crc kubenswrapper[4821]: I0309 18:59:41.560418 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f58b078c-1476-4ed2-8875-834a2b6b005e" path="/var/lib/kubelet/pods/f58b078c-1476-4ed2-8875-834a2b6b005e/volumes"
Mar 09 19:00:00 crc kubenswrapper[4821]: I0309 19:00:00.145551 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29551380-nzrzx"]
Mar 09 19:00:00 crc kubenswrapper[4821]: E0309 19:00:00.146564 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f58b078c-1476-4ed2-8875-834a2b6b005e" containerName="extract-utilities"
Mar 09 19:00:00 crc kubenswrapper[4821]: I0309 19:00:00.146583 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="f58b078c-1476-4ed2-8875-834a2b6b005e" containerName="extract-utilities"
Mar 09 19:00:00 crc kubenswrapper[4821]: E0309 19:00:00.146618 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f58b078c-1476-4ed2-8875-834a2b6b005e" containerName="registry-server"
Mar 09 19:00:00 crc kubenswrapper[4821]: I0309 19:00:00.146627 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="f58b078c-1476-4ed2-8875-834a2b6b005e" containerName="registry-server"
Mar 09 19:00:00 crc kubenswrapper[4821]: E0309 19:00:00.146645 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f58b078c-1476-4ed2-8875-834a2b6b005e" containerName="extract-content"
Mar 09 19:00:00 crc kubenswrapper[4821]: I0309 19:00:00.146653 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="f58b078c-1476-4ed2-8875-834a2b6b005e" containerName="extract-content"
Mar 09 19:00:00 crc kubenswrapper[4821]: I0309 19:00:00.146841 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="f58b078c-1476-4ed2-8875-834a2b6b005e" containerName="registry-server"
Mar 09 19:00:00 crc kubenswrapper[4821]: I0309 19:00:00.147579 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551380-nzrzx"
Mar 09 19:00:00 crc kubenswrapper[4821]: I0309 19:00:00.149784 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 09 19:00:00 crc kubenswrapper[4821]: I0309 19:00:00.150363 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-mxq7c"
Mar 09 19:00:00 crc kubenswrapper[4821]: I0309 19:00:00.151796 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29551380-c2m5h"]
Mar 09 19:00:00 crc kubenswrapper[4821]: I0309 19:00:00.151843 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 09 19:00:00 crc kubenswrapper[4821]: I0309 19:00:00.152818 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29551380-c2m5h"
Mar 09 19:00:00 crc kubenswrapper[4821]: I0309 19:00:00.154357 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Mar 09 19:00:00 crc kubenswrapper[4821]: I0309 19:00:00.154720 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Mar 09 19:00:00 crc kubenswrapper[4821]: I0309 19:00:00.184597 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29551380-c2m5h"]
Mar 09 19:00:00 crc kubenswrapper[4821]: I0309 19:00:00.219600 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551380-nzrzx"]
Mar 09 19:00:00 crc kubenswrapper[4821]: I0309 19:00:00.225625 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3625a79c-d381-4f4d-ae55-348a14977ca8-config-volume\") pod \"collect-profiles-29551380-c2m5h\" (UID: \"3625a79c-d381-4f4d-ae55-348a14977ca8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551380-c2m5h"
Mar 09 19:00:00 crc kubenswrapper[4821]: I0309 19:00:00.225941 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3625a79c-d381-4f4d-ae55-348a14977ca8-secret-volume\") pod \"collect-profiles-29551380-c2m5h\" (UID: \"3625a79c-d381-4f4d-ae55-348a14977ca8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551380-c2m5h"
Mar 09 19:00:00 crc kubenswrapper[4821]: I0309 19:00:00.226084 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56bcn\" (UniqueName: \"kubernetes.io/projected/3625a79c-d381-4f4d-ae55-348a14977ca8-kube-api-access-56bcn\") pod \"collect-profiles-29551380-c2m5h\" (UID: \"3625a79c-d381-4f4d-ae55-348a14977ca8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551380-c2m5h"
Mar 09 19:00:00 crc kubenswrapper[4821]: I0309 19:00:00.226193 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp5s9\" (UniqueName: \"kubernetes.io/projected/fb767d1d-2fb3-4a67-811e-c6646b50e3b2-kube-api-access-bp5s9\") pod \"auto-csr-approver-29551380-nzrzx\" (UID: \"fb767d1d-2fb3-4a67-811e-c6646b50e3b2\") " pod="openshift-infra/auto-csr-approver-29551380-nzrzx"
Mar 09 19:00:00 crc kubenswrapper[4821]: I0309 19:00:00.327732 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3625a79c-d381-4f4d-ae55-348a14977ca8-secret-volume\") pod \"collect-profiles-29551380-c2m5h\" (UID: \"3625a79c-d381-4f4d-ae55-348a14977ca8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551380-c2m5h"
Mar 09 19:00:00 crc kubenswrapper[4821]: I0309 19:00:00.327789 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56bcn\" (UniqueName: \"kubernetes.io/projected/3625a79c-d381-4f4d-ae55-348a14977ca8-kube-api-access-56bcn\") pod \"collect-profiles-29551380-c2m5h\" (UID: \"3625a79c-d381-4f4d-ae55-348a14977ca8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551380-c2m5h"
Mar 09 19:00:00 crc kubenswrapper[4821]: I0309 19:00:00.327815 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp5s9\" (UniqueName: \"kubernetes.io/projected/fb767d1d-2fb3-4a67-811e-c6646b50e3b2-kube-api-access-bp5s9\") pod \"auto-csr-approver-29551380-nzrzx\" (UID: \"fb767d1d-2fb3-4a67-811e-c6646b50e3b2\") " pod="openshift-infra/auto-csr-approver-29551380-nzrzx"
Mar 09 19:00:00 crc kubenswrapper[4821]: I0309 19:00:00.327867 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3625a79c-d381-4f4d-ae55-348a14977ca8-config-volume\") pod \"collect-profiles-29551380-c2m5h\" (UID: \"3625a79c-d381-4f4d-ae55-348a14977ca8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551380-c2m5h"
Mar 09 19:00:00 crc kubenswrapper[4821]: I0309 19:00:00.328709 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3625a79c-d381-4f4d-ae55-348a14977ca8-config-volume\") pod \"collect-profiles-29551380-c2m5h\" (UID: \"3625a79c-d381-4f4d-ae55-348a14977ca8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551380-c2m5h"
Mar 09 19:00:00 crc kubenswrapper[4821]: I0309 19:00:00.343577 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp5s9\" (UniqueName: \"kubernetes.io/projected/fb767d1d-2fb3-4a67-811e-c6646b50e3b2-kube-api-access-bp5s9\") pod \"auto-csr-approver-29551380-nzrzx\" (UID: \"fb767d1d-2fb3-4a67-811e-c6646b50e3b2\") " pod="openshift-infra/auto-csr-approver-29551380-nzrzx"
Mar 09 19:00:00 crc kubenswrapper[4821]: I0309 19:00:00.345669 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56bcn\" (UniqueName: \"kubernetes.io/projected/3625a79c-d381-4f4d-ae55-348a14977ca8-kube-api-access-56bcn\") pod \"collect-profiles-29551380-c2m5h\" (UID: \"3625a79c-d381-4f4d-ae55-348a14977ca8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551380-c2m5h"
Mar 09 19:00:00 crc kubenswrapper[4821]: I0309 19:00:00.345817 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3625a79c-d381-4f4d-ae55-348a14977ca8-secret-volume\") pod \"collect-profiles-29551380-c2m5h\" (UID: \"3625a79c-d381-4f4d-ae55-348a14977ca8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551380-c2m5h"
Mar 09 19:00:00 crc kubenswrapper[4821]: I0309 19:00:00.483185 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551380-nzrzx"
Mar 09 19:00:00 crc kubenswrapper[4821]: I0309 19:00:00.494714 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29551380-c2m5h"
Mar 09 19:00:00 crc kubenswrapper[4821]: I0309 19:00:00.945839 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551380-nzrzx"]
Mar 09 19:00:01 crc kubenswrapper[4821]: I0309 19:00:01.032732 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29551380-c2m5h"]
Mar 09 19:00:01 crc kubenswrapper[4821]: I0309 19:00:01.597518 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551380-nzrzx" event={"ID":"fb767d1d-2fb3-4a67-811e-c6646b50e3b2","Type":"ContainerStarted","Data":"74cbf3571eaab576e8d341481217ac2a517ed6efa94cc6f257adbbc482053d72"}
Mar 09 19:00:01 crc kubenswrapper[4821]: I0309 19:00:01.599196 4821 generic.go:334] "Generic (PLEG): container finished" podID="3625a79c-d381-4f4d-ae55-348a14977ca8" containerID="124013b14a52f7607b0658d072e8731e855e56f0232bad718145708307ea7b93" exitCode=0
Mar 09 19:00:01 crc kubenswrapper[4821]: I0309 19:00:01.599253 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29551380-c2m5h" event={"ID":"3625a79c-d381-4f4d-ae55-348a14977ca8","Type":"ContainerDied","Data":"124013b14a52f7607b0658d072e8731e855e56f0232bad718145708307ea7b93"}
Mar 09 19:00:01 crc kubenswrapper[4821]: I0309 19:00:01.599287 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29551380-c2m5h" event={"ID":"3625a79c-d381-4f4d-ae55-348a14977ca8","Type":"ContainerStarted","Data":"422f5e10a74e2104da468474d386784ac8d7ac7531857393578b306dc900b549"}
Mar 09 19:00:02 crc kubenswrapper[4821]: I0309 19:00:02.958528 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29551380-c2m5h"
Mar 09 19:00:03 crc kubenswrapper[4821]: I0309 19:00:03.069652 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3625a79c-d381-4f4d-ae55-348a14977ca8-config-volume\") pod \"3625a79c-d381-4f4d-ae55-348a14977ca8\" (UID: \"3625a79c-d381-4f4d-ae55-348a14977ca8\") "
Mar 09 19:00:03 crc kubenswrapper[4821]: I0309 19:00:03.069748 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56bcn\" (UniqueName: \"kubernetes.io/projected/3625a79c-d381-4f4d-ae55-348a14977ca8-kube-api-access-56bcn\") pod \"3625a79c-d381-4f4d-ae55-348a14977ca8\" (UID: \"3625a79c-d381-4f4d-ae55-348a14977ca8\") "
Mar 09 19:00:03 crc kubenswrapper[4821]: I0309 19:00:03.069847 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3625a79c-d381-4f4d-ae55-348a14977ca8-secret-volume\") pod \"3625a79c-d381-4f4d-ae55-348a14977ca8\" (UID: \"3625a79c-d381-4f4d-ae55-348a14977ca8\") "
Mar 09 19:00:03 crc kubenswrapper[4821]: I0309 19:00:03.071588 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3625a79c-d381-4f4d-ae55-348a14977ca8-config-volume" (OuterVolumeSpecName: "config-volume") pod "3625a79c-d381-4f4d-ae55-348a14977ca8" (UID: "3625a79c-d381-4f4d-ae55-348a14977ca8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 19:00:03 crc kubenswrapper[4821]: I0309 19:00:03.076469 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3625a79c-d381-4f4d-ae55-348a14977ca8-kube-api-access-56bcn" (OuterVolumeSpecName: "kube-api-access-56bcn") pod "3625a79c-d381-4f4d-ae55-348a14977ca8" (UID: "3625a79c-d381-4f4d-ae55-348a14977ca8"). InnerVolumeSpecName "kube-api-access-56bcn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:00:03 crc kubenswrapper[4821]: I0309 19:00:03.076495 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3625a79c-d381-4f4d-ae55-348a14977ca8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3625a79c-d381-4f4d-ae55-348a14977ca8" (UID: "3625a79c-d381-4f4d-ae55-348a14977ca8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:00:03 crc kubenswrapper[4821]: I0309 19:00:03.172086 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56bcn\" (UniqueName: \"kubernetes.io/projected/3625a79c-d381-4f4d-ae55-348a14977ca8-kube-api-access-56bcn\") on node \"crc\" DevicePath \"\""
Mar 09 19:00:03 crc kubenswrapper[4821]: I0309 19:00:03.172383 4821 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3625a79c-d381-4f4d-ae55-348a14977ca8-secret-volume\") on node \"crc\" DevicePath \"\""
Mar 09 19:00:03 crc kubenswrapper[4821]: I0309 19:00:03.172473 4821 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3625a79c-d381-4f4d-ae55-348a14977ca8-config-volume\") on node \"crc\" DevicePath \"\""
Mar 09 19:00:03 crc kubenswrapper[4821]: I0309 19:00:03.616245 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29551380-c2m5h" event={"ID":"3625a79c-d381-4f4d-ae55-348a14977ca8","Type":"ContainerDied","Data":"422f5e10a74e2104da468474d386784ac8d7ac7531857393578b306dc900b549"}
Mar 09 19:00:03 crc kubenswrapper[4821]: I0309 19:00:03.616284 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29551380-c2m5h"
Mar 09 19:00:03 crc kubenswrapper[4821]: I0309 19:00:03.616294 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="422f5e10a74e2104da468474d386784ac8d7ac7531857393578b306dc900b549"
Mar 09 19:00:04 crc kubenswrapper[4821]: I0309 19:00:04.095233 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29551335-b9jvf"]
Mar 09 19:00:04 crc kubenswrapper[4821]: I0309 19:00:04.105846 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29551335-b9jvf"]
Mar 09 19:00:04 crc kubenswrapper[4821]: I0309 19:00:04.627670 4821 generic.go:334] "Generic (PLEG): container finished" podID="fb767d1d-2fb3-4a67-811e-c6646b50e3b2" containerID="bc3dc371aea2c912a2dc9d2d3d391ec5cb375f0323a8ff51996da645258ab703" exitCode=0
Mar 09 19:00:04 crc kubenswrapper[4821]: I0309 19:00:04.627726 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551380-nzrzx" event={"ID":"fb767d1d-2fb3-4a67-811e-c6646b50e3b2","Type":"ContainerDied","Data":"bc3dc371aea2c912a2dc9d2d3d391ec5cb375f0323a8ff51996da645258ab703"}
Mar 09 19:00:05 crc kubenswrapper[4821]: I0309 19:00:05.565962 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c" path="/var/lib/kubelet/pods/aa4aa0e2-d2ea-4ea3-82c3-4df70ecc593c/volumes"
Mar 09 19:00:05 crc kubenswrapper[4821]: I0309 19:00:05.992583 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551380-nzrzx"
Mar 09 19:00:06 crc kubenswrapper[4821]: I0309 19:00:06.116209 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bp5s9\" (UniqueName: \"kubernetes.io/projected/fb767d1d-2fb3-4a67-811e-c6646b50e3b2-kube-api-access-bp5s9\") pod \"fb767d1d-2fb3-4a67-811e-c6646b50e3b2\" (UID: \"fb767d1d-2fb3-4a67-811e-c6646b50e3b2\") "
Mar 09 19:00:06 crc kubenswrapper[4821]: I0309 19:00:06.121733 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb767d1d-2fb3-4a67-811e-c6646b50e3b2-kube-api-access-bp5s9" (OuterVolumeSpecName: "kube-api-access-bp5s9") pod "fb767d1d-2fb3-4a67-811e-c6646b50e3b2" (UID: "fb767d1d-2fb3-4a67-811e-c6646b50e3b2"). InnerVolumeSpecName "kube-api-access-bp5s9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:00:06 crc kubenswrapper[4821]: I0309 19:00:06.218253 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bp5s9\" (UniqueName: \"kubernetes.io/projected/fb767d1d-2fb3-4a67-811e-c6646b50e3b2-kube-api-access-bp5s9\") on node \"crc\" DevicePath \"\""
Mar 09 19:00:06 crc kubenswrapper[4821]: I0309 19:00:06.661178 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551380-nzrzx"
Mar 09 19:00:06 crc kubenswrapper[4821]: I0309 19:00:06.661564 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551380-nzrzx" event={"ID":"fb767d1d-2fb3-4a67-811e-c6646b50e3b2","Type":"ContainerDied","Data":"74cbf3571eaab576e8d341481217ac2a517ed6efa94cc6f257adbbc482053d72"}
Mar 09 19:00:06 crc kubenswrapper[4821]: I0309 19:00:06.661616 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74cbf3571eaab576e8d341481217ac2a517ed6efa94cc6f257adbbc482053d72"
Mar 09 19:00:07 crc kubenswrapper[4821]: I0309 19:00:07.058575 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29551374-8r7mx"]
Mar 09 19:00:07 crc kubenswrapper[4821]: I0309 19:00:07.065115 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29551374-8r7mx"]
Mar 09 19:00:07 crc kubenswrapper[4821]: I0309 19:00:07.561254 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b738aef6-3e88-43f4-a093-a25a2062eb56" path="/var/lib/kubelet/pods/b738aef6-3e88-43f4-a093-a25a2062eb56/volumes"
Mar 09 19:00:25 crc kubenswrapper[4821]: I0309 19:00:25.698665 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m29q4"]
Mar 09 19:00:25 crc kubenswrapper[4821]: E0309 19:00:25.699745 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb767d1d-2fb3-4a67-811e-c6646b50e3b2" containerName="oc"
Mar 09 19:00:25 crc kubenswrapper[4821]: I0309 19:00:25.699765 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb767d1d-2fb3-4a67-811e-c6646b50e3b2" containerName="oc"
Mar 09 19:00:25 crc kubenswrapper[4821]: E0309 19:00:25.699789 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3625a79c-d381-4f4d-ae55-348a14977ca8" containerName="collect-profiles"
Mar 09 19:00:25 crc kubenswrapper[4821]: I0309 19:00:25.699800 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="3625a79c-d381-4f4d-ae55-348a14977ca8" containerName="collect-profiles"
Mar 09 19:00:25 crc kubenswrapper[4821]: I0309 19:00:25.700043 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="3625a79c-d381-4f4d-ae55-348a14977ca8" containerName="collect-profiles"
Mar 09 19:00:25 crc kubenswrapper[4821]: I0309 19:00:25.700085 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb767d1d-2fb3-4a67-811e-c6646b50e3b2" containerName="oc"
Mar 09 19:00:25 crc kubenswrapper[4821]: I0309 19:00:25.701953 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m29q4"
Mar 09 19:00:25 crc kubenswrapper[4821]: I0309 19:00:25.707363 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m29q4"]
Mar 09 19:00:25 crc kubenswrapper[4821]: I0309 19:00:25.777390 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qgfn\" (UniqueName: \"kubernetes.io/projected/fa659500-124d-4579-b99d-45cad1e12ef5-kube-api-access-9qgfn\") pod \"redhat-marketplace-m29q4\" (UID: \"fa659500-124d-4579-b99d-45cad1e12ef5\") " pod="openshift-marketplace/redhat-marketplace-m29q4"
Mar 09 19:00:25 crc kubenswrapper[4821]: I0309 19:00:25.777465 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa659500-124d-4579-b99d-45cad1e12ef5-utilities\") pod \"redhat-marketplace-m29q4\" (UID: \"fa659500-124d-4579-b99d-45cad1e12ef5\") " pod="openshift-marketplace/redhat-marketplace-m29q4"
Mar 09 19:00:25 crc kubenswrapper[4821]: I0309 19:00:25.777580 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa659500-124d-4579-b99d-45cad1e12ef5-catalog-content\") pod \"redhat-marketplace-m29q4\" (UID: \"fa659500-124d-4579-b99d-45cad1e12ef5\") " pod="openshift-marketplace/redhat-marketplace-m29q4"
Mar 09 19:00:25 crc kubenswrapper[4821]: I0309 19:00:25.878834 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa659500-124d-4579-b99d-45cad1e12ef5-catalog-content\") pod \"redhat-marketplace-m29q4\" (UID: \"fa659500-124d-4579-b99d-45cad1e12ef5\") " pod="openshift-marketplace/redhat-marketplace-m29q4"
Mar 09 19:00:25 crc kubenswrapper[4821]: I0309 19:00:25.878949 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qgfn\" (UniqueName: \"kubernetes.io/projected/fa659500-124d-4579-b99d-45cad1e12ef5-kube-api-access-9qgfn\") pod \"redhat-marketplace-m29q4\" (UID: \"fa659500-124d-4579-b99d-45cad1e12ef5\") " pod="openshift-marketplace/redhat-marketplace-m29q4"
Mar 09 19:00:25 crc kubenswrapper[4821]: I0309 19:00:25.878972 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa659500-124d-4579-b99d-45cad1e12ef5-utilities\") pod \"redhat-marketplace-m29q4\" (UID: \"fa659500-124d-4579-b99d-45cad1e12ef5\") " pod="openshift-marketplace/redhat-marketplace-m29q4"
Mar 09 19:00:25 crc kubenswrapper[4821]: I0309 19:00:25.879568 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa659500-124d-4579-b99d-45cad1e12ef5-utilities\") pod \"redhat-marketplace-m29q4\" (UID: \"fa659500-124d-4579-b99d-45cad1e12ef5\") " pod="openshift-marketplace/redhat-marketplace-m29q4"
Mar 09 19:00:25 crc kubenswrapper[4821]: I0309 19:00:25.879620 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa659500-124d-4579-b99d-45cad1e12ef5-catalog-content\") pod \"redhat-marketplace-m29q4\" (UID: \"fa659500-124d-4579-b99d-45cad1e12ef5\") " pod="openshift-marketplace/redhat-marketplace-m29q4"
Mar 09 19:00:25 crc kubenswrapper[4821]: I0309 19:00:25.897593 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qgfn\" (UniqueName: \"kubernetes.io/projected/fa659500-124d-4579-b99d-45cad1e12ef5-kube-api-access-9qgfn\") pod \"redhat-marketplace-m29q4\" (UID: \"fa659500-124d-4579-b99d-45cad1e12ef5\") " pod="openshift-marketplace/redhat-marketplace-m29q4"
Mar 09 19:00:26 crc kubenswrapper[4821]: I0309 19:00:26.020062 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m29q4"
Mar 09 19:00:26 crc kubenswrapper[4821]: I0309 19:00:26.483023 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m29q4"]
Mar 09 19:00:26 crc kubenswrapper[4821]: I0309 19:00:26.836066 4821 generic.go:334] "Generic (PLEG): container finished" podID="fa659500-124d-4579-b99d-45cad1e12ef5" containerID="ab08fa1ac15bd7e1f07ffff05fddb9cb267fdd9d0eb4bad887f678efa9cefbe7" exitCode=0
Mar 09 19:00:26 crc kubenswrapper[4821]: I0309 19:00:26.836266 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m29q4" event={"ID":"fa659500-124d-4579-b99d-45cad1e12ef5","Type":"ContainerDied","Data":"ab08fa1ac15bd7e1f07ffff05fddb9cb267fdd9d0eb4bad887f678efa9cefbe7"}
Mar 09 19:00:26 crc kubenswrapper[4821]: I0309 19:00:26.836561 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m29q4" event={"ID":"fa659500-124d-4579-b99d-45cad1e12ef5","Type":"ContainerStarted","Data":"d21d469c69a89194ec1970b634df1a8cc61b83232c3bf964fa83739b00b35102"}
Mar 09 19:00:28 crc kubenswrapper[4821]: I0309 19:00:28.857237 4821 generic.go:334] "Generic (PLEG): container finished" podID="fa659500-124d-4579-b99d-45cad1e12ef5" containerID="9dcef509e757584c84bf587d7384b7783ae4e7029c4b40726c011a2f33eb9dc0" exitCode=0
Mar 09 19:00:28 crc kubenswrapper[4821]: I0309 19:00:28.857404 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m29q4" event={"ID":"fa659500-124d-4579-b99d-45cad1e12ef5","Type":"ContainerDied","Data":"9dcef509e757584c84bf587d7384b7783ae4e7029c4b40726c011a2f33eb9dc0"}
Mar 09 19:00:29 crc kubenswrapper[4821]: I0309 19:00:29.879476 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m29q4" event={"ID":"fa659500-124d-4579-b99d-45cad1e12ef5","Type":"ContainerStarted","Data":"679410cc42340b3ce100b862f2df391b4b4845d7270bc884c82fd288d946e67e"}
Mar 09 19:00:29 crc kubenswrapper[4821]: I0309 19:00:29.900559 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m29q4" podStartSLOduration=2.445046065 podStartE2EDuration="4.900537206s" podCreationTimestamp="2026-03-09 19:00:25 +0000 UTC" firstStartedPulling="2026-03-09 19:00:26.838535656 +0000 UTC m=+2163.999911522" lastFinishedPulling="2026-03-09 19:00:29.294026797 +0000 UTC m=+2166.455402663" observedRunningTime="2026-03-09 19:00:29.897317779 +0000 UTC m=+2167.058709555" watchObservedRunningTime="2026-03-09 19:00:29.900537206 +0000 UTC m=+2167.061913082"
Mar 09 19:00:36 crc kubenswrapper[4821]: I0309 19:00:36.020972 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m29q4"
Mar 09 19:00:36 crc kubenswrapper[4821]: I0309 19:00:36.021426 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m29q4"
Mar 09 19:00:36 crc kubenswrapper[4821]: I0309 19:00:36.079145 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m29q4"
Mar 09 19:00:36 crc kubenswrapper[4821]: I0309 19:00:36.984035 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m29q4"
Mar 09 19:00:39 crc kubenswrapper[4821]: I0309 19:00:39.677455 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m29q4"]
Mar 09 19:00:39 crc kubenswrapper[4821]: I0309 19:00:39.678182 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m29q4" podUID="fa659500-124d-4579-b99d-45cad1e12ef5" containerName="registry-server" containerID="cri-o://679410cc42340b3ce100b862f2df391b4b4845d7270bc884c82fd288d946e67e" gracePeriod=2
Mar 09 19:00:39 crc kubenswrapper[4821]: I0309 19:00:39.976396 4821 generic.go:334] "Generic (PLEG): container finished" podID="fa659500-124d-4579-b99d-45cad1e12ef5" containerID="679410cc42340b3ce100b862f2df391b4b4845d7270bc884c82fd288d946e67e" exitCode=0
Mar 09 19:00:39 crc kubenswrapper[4821]: I0309 19:00:39.976464 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m29q4" event={"ID":"fa659500-124d-4579-b99d-45cad1e12ef5","Type":"ContainerDied","Data":"679410cc42340b3ce100b862f2df391b4b4845d7270bc884c82fd288d946e67e"}
Mar 09 19:00:40 crc kubenswrapper[4821]: I0309 19:00:40.174057 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m29q4"
Mar 09 19:00:40 crc kubenswrapper[4821]: I0309 19:00:40.314424 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa659500-124d-4579-b99d-45cad1e12ef5-utilities\") pod \"fa659500-124d-4579-b99d-45cad1e12ef5\" (UID: \"fa659500-124d-4579-b99d-45cad1e12ef5\") "
Mar 09 19:00:40 crc kubenswrapper[4821]: I0309 19:00:40.314545 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa659500-124d-4579-b99d-45cad1e12ef5-catalog-content\") pod \"fa659500-124d-4579-b99d-45cad1e12ef5\" (UID: \"fa659500-124d-4579-b99d-45cad1e12ef5\") "
Mar 09 19:00:40 crc kubenswrapper[4821]: I0309 19:00:40.314645 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qgfn\" (UniqueName: \"kubernetes.io/projected/fa659500-124d-4579-b99d-45cad1e12ef5-kube-api-access-9qgfn\") pod \"fa659500-124d-4579-b99d-45cad1e12ef5\" (UID: \"fa659500-124d-4579-b99d-45cad1e12ef5\") "
Mar 09 19:00:40 crc kubenswrapper[4821]: I0309 19:00:40.316425 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa659500-124d-4579-b99d-45cad1e12ef5-utilities" (OuterVolumeSpecName: "utilities") pod "fa659500-124d-4579-b99d-45cad1e12ef5" (UID: "fa659500-124d-4579-b99d-45cad1e12ef5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:00:40 crc kubenswrapper[4821]: I0309 19:00:40.320606 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa659500-124d-4579-b99d-45cad1e12ef5-kube-api-access-9qgfn" (OuterVolumeSpecName: "kube-api-access-9qgfn") pod "fa659500-124d-4579-b99d-45cad1e12ef5" (UID: "fa659500-124d-4579-b99d-45cad1e12ef5"). InnerVolumeSpecName "kube-api-access-9qgfn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:00:40 crc kubenswrapper[4821]: I0309 19:00:40.340369 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa659500-124d-4579-b99d-45cad1e12ef5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa659500-124d-4579-b99d-45cad1e12ef5" (UID: "fa659500-124d-4579-b99d-45cad1e12ef5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:00:40 crc kubenswrapper[4821]: I0309 19:00:40.416504 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qgfn\" (UniqueName: \"kubernetes.io/projected/fa659500-124d-4579-b99d-45cad1e12ef5-kube-api-access-9qgfn\") on node \"crc\" DevicePath \"\""
Mar 09 19:00:40 crc kubenswrapper[4821]: I0309 19:00:40.416708 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa659500-124d-4579-b99d-45cad1e12ef5-utilities\") on node \"crc\" DevicePath \"\""
Mar 09 19:00:40 crc kubenswrapper[4821]: I0309 19:00:40.416804 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa659500-124d-4579-b99d-45cad1e12ef5-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 09 19:00:40 crc kubenswrapper[4821]: I0309 19:00:40.989105 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m29q4" event={"ID":"fa659500-124d-4579-b99d-45cad1e12ef5","Type":"ContainerDied","Data":"d21d469c69a89194ec1970b634df1a8cc61b83232c3bf964fa83739b00b35102"}
Mar 09 19:00:40 crc kubenswrapper[4821]: I0309 19:00:40.989167 4821 scope.go:117] "RemoveContainer" containerID="679410cc42340b3ce100b862f2df391b4b4845d7270bc884c82fd288d946e67e"
Mar 09 19:00:40 crc kubenswrapper[4821]: I0309 19:00:40.989219 4821 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m29q4" Mar 09 19:00:41 crc kubenswrapper[4821]: I0309 19:00:41.011458 4821 scope.go:117] "RemoveContainer" containerID="9dcef509e757584c84bf587d7384b7783ae4e7029c4b40726c011a2f33eb9dc0" Mar 09 19:00:41 crc kubenswrapper[4821]: I0309 19:00:41.055661 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m29q4"] Mar 09 19:00:41 crc kubenswrapper[4821]: I0309 19:00:41.058509 4821 scope.go:117] "RemoveContainer" containerID="ab08fa1ac15bd7e1f07ffff05fddb9cb267fdd9d0eb4bad887f678efa9cefbe7" Mar 09 19:00:41 crc kubenswrapper[4821]: I0309 19:00:41.072706 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m29q4"] Mar 09 19:00:41 crc kubenswrapper[4821]: I0309 19:00:41.561276 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa659500-124d-4579-b99d-45cad1e12ef5" path="/var/lib/kubelet/pods/fa659500-124d-4579-b99d-45cad1e12ef5/volumes" Mar 09 19:00:47 crc kubenswrapper[4821]: I0309 19:00:47.539924 4821 scope.go:117] "RemoveContainer" containerID="6c1f3b41ca628899a4c32729eaf86e0fec7c29a59623147234462ad6531945f7" Mar 09 19:00:47 crc kubenswrapper[4821]: I0309 19:00:47.576648 4821 scope.go:117] "RemoveContainer" containerID="668d14e1ea7d12452c64abee05352a18d88c0923d97f595bf0e029888cb58bb6" Mar 09 19:00:59 crc kubenswrapper[4821]: I0309 19:00:59.812983 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher13c5-account-delete-wlpv5"] Mar 09 19:00:59 crc kubenswrapper[4821]: E0309 19:00:59.813803 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa659500-124d-4579-b99d-45cad1e12ef5" containerName="registry-server" Mar 09 19:00:59 crc kubenswrapper[4821]: I0309 19:00:59.813816 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa659500-124d-4579-b99d-45cad1e12ef5" containerName="registry-server" Mar 09 19:00:59 crc 
kubenswrapper[4821]: E0309 19:00:59.813827 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa659500-124d-4579-b99d-45cad1e12ef5" containerName="extract-content" Mar 09 19:00:59 crc kubenswrapper[4821]: I0309 19:00:59.813836 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa659500-124d-4579-b99d-45cad1e12ef5" containerName="extract-content" Mar 09 19:00:59 crc kubenswrapper[4821]: E0309 19:00:59.813846 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa659500-124d-4579-b99d-45cad1e12ef5" containerName="extract-utilities" Mar 09 19:00:59 crc kubenswrapper[4821]: I0309 19:00:59.813854 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa659500-124d-4579-b99d-45cad1e12ef5" containerName="extract-utilities" Mar 09 19:00:59 crc kubenswrapper[4821]: I0309 19:00:59.814038 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa659500-124d-4579-b99d-45cad1e12ef5" containerName="registry-server" Mar 09 19:00:59 crc kubenswrapper[4821]: I0309 19:00:59.814598 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher13c5-account-delete-wlpv5" Mar 09 19:00:59 crc kubenswrapper[4821]: I0309 19:00:59.826727 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher13c5-account-delete-wlpv5"] Mar 09 19:00:59 crc kubenswrapper[4821]: I0309 19:00:59.854075 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:00:59 crc kubenswrapper[4821]: I0309 19:00:59.854285 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e" containerName="watcher-decision-engine" containerID="cri-o://a64034c9d0cf665f5241dcdbeb42195db11d0c511f475c0a6ef9cd114447dd3c" gracePeriod=30 Mar 09 19:00:59 crc kubenswrapper[4821]: I0309 19:00:59.921887 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:00:59 crc kubenswrapper[4821]: I0309 19:00:59.922157 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82" containerName="watcher-kuttl-api-log" containerID="cri-o://80885cb982c8254a9fc57e3a190bc0c1a82dd6b598b9c86d12c04e894e74615b" gracePeriod=30 Mar 09 19:00:59 crc kubenswrapper[4821]: I0309 19:00:59.922306 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82" containerName="watcher-api" containerID="cri-o://1b9302b570efd6dc5095f283b7eba86587f502a40e0bb2b73a878e30cec22beb" gracePeriod=30 Mar 09 19:00:59 crc kubenswrapper[4821]: I0309 19:00:59.949207 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crg5r\" (UniqueName: 
\"kubernetes.io/projected/4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd-kube-api-access-crg5r\") pod \"watcher13c5-account-delete-wlpv5\" (UID: \"4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd\") " pod="watcher-kuttl-default/watcher13c5-account-delete-wlpv5" Mar 09 19:00:59 crc kubenswrapper[4821]: I0309 19:00:59.949275 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd-operator-scripts\") pod \"watcher13c5-account-delete-wlpv5\" (UID: \"4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd\") " pod="watcher-kuttl-default/watcher13c5-account-delete-wlpv5" Mar 09 19:00:59 crc kubenswrapper[4821]: I0309 19:00:59.969926 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:00:59 crc kubenswrapper[4821]: I0309 19:00:59.971763 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="50cca3dd-5fcd-4577-9442-2952486769ba" containerName="watcher-applier" containerID="cri-o://8faf8bdf823e63208b75bbe788979393e29cef8862727db68ef800f0e73fcdca" gracePeriod=30 Mar 09 19:01:00 crc kubenswrapper[4821]: I0309 19:01:00.050675 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd-operator-scripts\") pod \"watcher13c5-account-delete-wlpv5\" (UID: \"4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd\") " pod="watcher-kuttl-default/watcher13c5-account-delete-wlpv5" Mar 09 19:01:00 crc kubenswrapper[4821]: I0309 19:01:00.050810 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crg5r\" (UniqueName: \"kubernetes.io/projected/4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd-kube-api-access-crg5r\") pod \"watcher13c5-account-delete-wlpv5\" (UID: \"4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd\") " 
pod="watcher-kuttl-default/watcher13c5-account-delete-wlpv5" Mar 09 19:01:00 crc kubenswrapper[4821]: I0309 19:01:00.051735 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd-operator-scripts\") pod \"watcher13c5-account-delete-wlpv5\" (UID: \"4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd\") " pod="watcher-kuttl-default/watcher13c5-account-delete-wlpv5" Mar 09 19:01:00 crc kubenswrapper[4821]: I0309 19:01:00.078684 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crg5r\" (UniqueName: \"kubernetes.io/projected/4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd-kube-api-access-crg5r\") pod \"watcher13c5-account-delete-wlpv5\" (UID: \"4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd\") " pod="watcher-kuttl-default/watcher13c5-account-delete-wlpv5" Mar 09 19:01:00 crc kubenswrapper[4821]: I0309 19:01:00.131352 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-cron-29551381-c5bp4"] Mar 09 19:01:00 crc kubenswrapper[4821]: I0309 19:01:00.131686 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher13c5-account-delete-wlpv5" Mar 09 19:01:00 crc kubenswrapper[4821]: I0309 19:01:00.132630 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-cron-29551381-c5bp4" Mar 09 19:01:00 crc kubenswrapper[4821]: I0309 19:01:00.162351 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-cron-29551381-c5bp4"] Mar 09 19:01:00 crc kubenswrapper[4821]: I0309 19:01:00.227523 4821 generic.go:334] "Generic (PLEG): container finished" podID="f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82" containerID="80885cb982c8254a9fc57e3a190bc0c1a82dd6b598b9c86d12c04e894e74615b" exitCode=143 Mar 09 19:01:00 crc kubenswrapper[4821]: I0309 19:01:00.227814 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82","Type":"ContainerDied","Data":"80885cb982c8254a9fc57e3a190bc0c1a82dd6b598b9c86d12c04e894e74615b"} Mar 09 19:01:00 crc kubenswrapper[4821]: I0309 19:01:00.258264 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98d8cd55-a4bc-446d-a770-ed57e35aeccb-combined-ca-bundle\") pod \"keystone-cron-29551381-c5bp4\" (UID: \"98d8cd55-a4bc-446d-a770-ed57e35aeccb\") " pod="watcher-kuttl-default/keystone-cron-29551381-c5bp4" Mar 09 19:01:00 crc kubenswrapper[4821]: I0309 19:01:00.258308 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/98d8cd55-a4bc-446d-a770-ed57e35aeccb-fernet-keys\") pod \"keystone-cron-29551381-c5bp4\" (UID: \"98d8cd55-a4bc-446d-a770-ed57e35aeccb\") " pod="watcher-kuttl-default/keystone-cron-29551381-c5bp4" Mar 09 19:01:00 crc kubenswrapper[4821]: I0309 19:01:00.258400 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6rnp\" (UniqueName: \"kubernetes.io/projected/98d8cd55-a4bc-446d-a770-ed57e35aeccb-kube-api-access-t6rnp\") pod \"keystone-cron-29551381-c5bp4\" (UID: 
\"98d8cd55-a4bc-446d-a770-ed57e35aeccb\") " pod="watcher-kuttl-default/keystone-cron-29551381-c5bp4" Mar 09 19:01:00 crc kubenswrapper[4821]: I0309 19:01:00.258426 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98d8cd55-a4bc-446d-a770-ed57e35aeccb-config-data\") pod \"keystone-cron-29551381-c5bp4\" (UID: \"98d8cd55-a4bc-446d-a770-ed57e35aeccb\") " pod="watcher-kuttl-default/keystone-cron-29551381-c5bp4" Mar 09 19:01:00 crc kubenswrapper[4821]: I0309 19:01:00.359572 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6rnp\" (UniqueName: \"kubernetes.io/projected/98d8cd55-a4bc-446d-a770-ed57e35aeccb-kube-api-access-t6rnp\") pod \"keystone-cron-29551381-c5bp4\" (UID: \"98d8cd55-a4bc-446d-a770-ed57e35aeccb\") " pod="watcher-kuttl-default/keystone-cron-29551381-c5bp4" Mar 09 19:01:00 crc kubenswrapper[4821]: I0309 19:01:00.359621 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98d8cd55-a4bc-446d-a770-ed57e35aeccb-config-data\") pod \"keystone-cron-29551381-c5bp4\" (UID: \"98d8cd55-a4bc-446d-a770-ed57e35aeccb\") " pod="watcher-kuttl-default/keystone-cron-29551381-c5bp4" Mar 09 19:01:00 crc kubenswrapper[4821]: I0309 19:01:00.359678 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98d8cd55-a4bc-446d-a770-ed57e35aeccb-combined-ca-bundle\") pod \"keystone-cron-29551381-c5bp4\" (UID: \"98d8cd55-a4bc-446d-a770-ed57e35aeccb\") " pod="watcher-kuttl-default/keystone-cron-29551381-c5bp4" Mar 09 19:01:00 crc kubenswrapper[4821]: I0309 19:01:00.359702 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/98d8cd55-a4bc-446d-a770-ed57e35aeccb-fernet-keys\") pod 
\"keystone-cron-29551381-c5bp4\" (UID: \"98d8cd55-a4bc-446d-a770-ed57e35aeccb\") " pod="watcher-kuttl-default/keystone-cron-29551381-c5bp4" Mar 09 19:01:00 crc kubenswrapper[4821]: I0309 19:01:00.365904 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98d8cd55-a4bc-446d-a770-ed57e35aeccb-config-data\") pod \"keystone-cron-29551381-c5bp4\" (UID: \"98d8cd55-a4bc-446d-a770-ed57e35aeccb\") " pod="watcher-kuttl-default/keystone-cron-29551381-c5bp4" Mar 09 19:01:00 crc kubenswrapper[4821]: I0309 19:01:00.373108 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/98d8cd55-a4bc-446d-a770-ed57e35aeccb-fernet-keys\") pod \"keystone-cron-29551381-c5bp4\" (UID: \"98d8cd55-a4bc-446d-a770-ed57e35aeccb\") " pod="watcher-kuttl-default/keystone-cron-29551381-c5bp4" Mar 09 19:01:00 crc kubenswrapper[4821]: I0309 19:01:00.386907 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98d8cd55-a4bc-446d-a770-ed57e35aeccb-combined-ca-bundle\") pod \"keystone-cron-29551381-c5bp4\" (UID: \"98d8cd55-a4bc-446d-a770-ed57e35aeccb\") " pod="watcher-kuttl-default/keystone-cron-29551381-c5bp4" Mar 09 19:01:00 crc kubenswrapper[4821]: I0309 19:01:00.401941 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6rnp\" (UniqueName: \"kubernetes.io/projected/98d8cd55-a4bc-446d-a770-ed57e35aeccb-kube-api-access-t6rnp\") pod \"keystone-cron-29551381-c5bp4\" (UID: \"98d8cd55-a4bc-446d-a770-ed57e35aeccb\") " pod="watcher-kuttl-default/keystone-cron-29551381-c5bp4" Mar 09 19:01:00 crc kubenswrapper[4821]: I0309 19:01:00.524125 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-cron-29551381-c5bp4" Mar 09 19:01:00 crc kubenswrapper[4821]: I0309 19:01:00.742837 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher13c5-account-delete-wlpv5"] Mar 09 19:01:00 crc kubenswrapper[4821]: E0309 19:01:00.975675 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8faf8bdf823e63208b75bbe788979393e29cef8862727db68ef800f0e73fcdca" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 09 19:01:00 crc kubenswrapper[4821]: E0309 19:01:00.980278 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8faf8bdf823e63208b75bbe788979393e29cef8862727db68ef800f0e73fcdca" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 09 19:01:00 crc kubenswrapper[4821]: E0309 19:01:00.995269 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8faf8bdf823e63208b75bbe788979393e29cef8862727db68ef800f0e73fcdca" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 09 19:01:00 crc kubenswrapper[4821]: E0309 19:01:00.995394 4821 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="50cca3dd-5fcd-4577-9442-2952486769ba" containerName="watcher-applier" Mar 09 19:01:01 crc kubenswrapper[4821]: I0309 19:01:01.145902 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["watcher-kuttl-default/keystone-cron-29551381-c5bp4"] Mar 09 19:01:01 crc kubenswrapper[4821]: I0309 19:01:01.263074 4821 generic.go:334] "Generic (PLEG): container finished" podID="f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82" containerID="1b9302b570efd6dc5095f283b7eba86587f502a40e0bb2b73a878e30cec22beb" exitCode=0 Mar 09 19:01:01 crc kubenswrapper[4821]: I0309 19:01:01.263133 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82","Type":"ContainerDied","Data":"1b9302b570efd6dc5095f283b7eba86587f502a40e0bb2b73a878e30cec22beb"} Mar 09 19:01:01 crc kubenswrapper[4821]: I0309 19:01:01.263824 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-cron-29551381-c5bp4" event={"ID":"98d8cd55-a4bc-446d-a770-ed57e35aeccb","Type":"ContainerStarted","Data":"3b5f0e5e000a60a82ed65138da818eac0a4e2fabe5ad3854524e3738c131a7d4"} Mar 09 19:01:01 crc kubenswrapper[4821]: I0309 19:01:01.264733 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher13c5-account-delete-wlpv5" event={"ID":"4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd","Type":"ContainerStarted","Data":"e796073e64a847b9b0ec4f1adcd4dbbb441425210cc00dcda5da9b95520dab01"} Mar 09 19:01:01 crc kubenswrapper[4821]: I0309 19:01:01.264756 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher13c5-account-delete-wlpv5" event={"ID":"4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd","Type":"ContainerStarted","Data":"f2a8093690d20e8cca3e5e2465c5faccd7036f032d2bcb8d1fc099840ca4634f"} Mar 09 19:01:01 crc kubenswrapper[4821]: I0309 19:01:01.297702 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher13c5-account-delete-wlpv5" podStartSLOduration=2.297685621 podStartE2EDuration="2.297685621s" podCreationTimestamp="2026-03-09 19:00:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:01:01.286473267 +0000 UTC m=+2198.447849123" watchObservedRunningTime="2026-03-09 19:01:01.297685621 +0000 UTC m=+2198.459061477" Mar 09 19:01:01 crc kubenswrapper[4821]: I0309 19:01:01.431191 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:01 crc kubenswrapper[4821]: I0309 19:01:01.589298 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82-combined-ca-bundle\") pod \"f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82\" (UID: \"f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82\") " Mar 09 19:01:01 crc kubenswrapper[4821]: I0309 19:01:01.589503 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzjth\" (UniqueName: \"kubernetes.io/projected/f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82-kube-api-access-tzjth\") pod \"f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82\" (UID: \"f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82\") " Mar 09 19:01:01 crc kubenswrapper[4821]: I0309 19:01:01.589628 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82-config-data\") pod \"f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82\" (UID: \"f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82\") " Mar 09 19:01:01 crc kubenswrapper[4821]: I0309 19:01:01.589655 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82-logs\") pod \"f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82\" (UID: \"f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82\") " Mar 09 19:01:01 crc kubenswrapper[4821]: I0309 19:01:01.589680 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82-custom-prometheus-ca\") pod \"f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82\" (UID: \"f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82\") " Mar 09 19:01:01 crc kubenswrapper[4821]: I0309 19:01:01.591167 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82-logs" (OuterVolumeSpecName: "logs") pod "f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82" (UID: "f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:01:01 crc kubenswrapper[4821]: I0309 19:01:01.595356 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82-kube-api-access-tzjth" (OuterVolumeSpecName: "kube-api-access-tzjth") pod "f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82" (UID: "f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82"). InnerVolumeSpecName "kube-api-access-tzjth". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:01:01 crc kubenswrapper[4821]: I0309 19:01:01.615925 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82" (UID: "f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:01:01 crc kubenswrapper[4821]: I0309 19:01:01.616955 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82" (UID: "f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:01:01 crc kubenswrapper[4821]: I0309 19:01:01.632219 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82-config-data" (OuterVolumeSpecName: "config-data") pod "f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82" (UID: "f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:01:01 crc kubenswrapper[4821]: I0309 19:01:01.692074 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:01 crc kubenswrapper[4821]: I0309 19:01:01.692288 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82-logs\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:01 crc kubenswrapper[4821]: I0309 19:01:01.692297 4821 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:01 crc kubenswrapper[4821]: I0309 19:01:01.692309 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:01 crc kubenswrapper[4821]: I0309 19:01:01.692337 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzjth\" (UniqueName: \"kubernetes.io/projected/f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82-kube-api-access-tzjth\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:02 crc kubenswrapper[4821]: I0309 19:01:02.274202 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
event={"ID":"f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82","Type":"ContainerDied","Data":"2c4129846ee7c55b87663b1147a627a3789997404188a65b33b738ccce1104ff"} Mar 09 19:01:02 crc kubenswrapper[4821]: I0309 19:01:02.274225 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:02 crc kubenswrapper[4821]: I0309 19:01:02.274256 4821 scope.go:117] "RemoveContainer" containerID="1b9302b570efd6dc5095f283b7eba86587f502a40e0bb2b73a878e30cec22beb" Mar 09 19:01:02 crc kubenswrapper[4821]: I0309 19:01:02.277124 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-cron-29551381-c5bp4" event={"ID":"98d8cd55-a4bc-446d-a770-ed57e35aeccb","Type":"ContainerStarted","Data":"c528c568d3157c250b236ffc8a5c9ca33b177c1bb97944fe39c59d168a215a8a"} Mar 09 19:01:02 crc kubenswrapper[4821]: I0309 19:01:02.281534 4821 generic.go:334] "Generic (PLEG): container finished" podID="4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd" containerID="e796073e64a847b9b0ec4f1adcd4dbbb441425210cc00dcda5da9b95520dab01" exitCode=0 Mar 09 19:01:02 crc kubenswrapper[4821]: I0309 19:01:02.281581 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher13c5-account-delete-wlpv5" event={"ID":"4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd","Type":"ContainerDied","Data":"e796073e64a847b9b0ec4f1adcd4dbbb441425210cc00dcda5da9b95520dab01"} Mar 09 19:01:02 crc kubenswrapper[4821]: I0309 19:01:02.297290 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-cron-29551381-c5bp4" podStartSLOduration=2.297270315 podStartE2EDuration="2.297270315s" podCreationTimestamp="2026-03-09 19:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:01:02.29595458 +0000 UTC m=+2199.457330436" watchObservedRunningTime="2026-03-09 19:01:02.297270315 +0000 UTC 
m=+2199.458646181"
Mar 09 19:01:02 crc kubenswrapper[4821]: I0309 19:01:02.305626 4821 scope.go:117] "RemoveContainer" containerID="80885cb982c8254a9fc57e3a190bc0c1a82dd6b598b9c86d12c04e894e74615b"
Mar 09 19:01:02 crc kubenswrapper[4821]: I0309 19:01:02.333212 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 19:01:02 crc kubenswrapper[4821]: I0309 19:01:02.339164 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 19:01:02 crc kubenswrapper[4821]: I0309 19:01:02.747929 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:01:02 crc kubenswrapper[4821]: I0309 19:01:02.748477 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="95399cf0-2abf-4b19-9106-7f1489de365d" containerName="proxy-httpd" containerID="cri-o://e809de64d6164d1576f4701075c9609befd0949802c46c0dada9621a77b07c57" gracePeriod=30
Mar 09 19:01:02 crc kubenswrapper[4821]: I0309 19:01:02.748500 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="95399cf0-2abf-4b19-9106-7f1489de365d" containerName="ceilometer-central-agent" containerID="cri-o://5856ee8c84318b7822d4a408fdfba7e86301a35f27aa57c7f33722e9c82e2e34" gracePeriod=30
Mar 09 19:01:02 crc kubenswrapper[4821]: I0309 19:01:02.748578 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="95399cf0-2abf-4b19-9106-7f1489de365d" containerName="ceilometer-notification-agent" containerID="cri-o://ec6d25b74630de1a6a2dc5a2df4d4e222c04110cc1f5ac20bd5a1ec7e2b9f83a" gracePeriod=30
Mar 09 19:01:02 crc kubenswrapper[4821]: I0309 19:01:02.748644 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="95399cf0-2abf-4b19-9106-7f1489de365d" containerName="sg-core" containerID="cri-o://66f408ca3542f07a9b783ef8157e77604b3ab128b0d8f427567d1de8560f7821" gracePeriod=30
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.291950 4821 generic.go:334] "Generic (PLEG): container finished" podID="95399cf0-2abf-4b19-9106-7f1489de365d" containerID="e809de64d6164d1576f4701075c9609befd0949802c46c0dada9621a77b07c57" exitCode=0
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.292295 4821 generic.go:334] "Generic (PLEG): container finished" podID="95399cf0-2abf-4b19-9106-7f1489de365d" containerID="66f408ca3542f07a9b783ef8157e77604b3ab128b0d8f427567d1de8560f7821" exitCode=2
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.292309 4821 generic.go:334] "Generic (PLEG): container finished" podID="95399cf0-2abf-4b19-9106-7f1489de365d" containerID="ec6d25b74630de1a6a2dc5a2df4d4e222c04110cc1f5ac20bd5a1ec7e2b9f83a" exitCode=0
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.292385 4821 generic.go:334] "Generic (PLEG): container finished" podID="95399cf0-2abf-4b19-9106-7f1489de365d" containerID="5856ee8c84318b7822d4a408fdfba7e86301a35f27aa57c7f33722e9c82e2e34" exitCode=0
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.292133 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"95399cf0-2abf-4b19-9106-7f1489de365d","Type":"ContainerDied","Data":"e809de64d6164d1576f4701075c9609befd0949802c46c0dada9621a77b07c57"}
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.292466 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"95399cf0-2abf-4b19-9106-7f1489de365d","Type":"ContainerDied","Data":"66f408ca3542f07a9b783ef8157e77604b3ab128b0d8f427567d1de8560f7821"}
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.292497 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"95399cf0-2abf-4b19-9106-7f1489de365d","Type":"ContainerDied","Data":"ec6d25b74630de1a6a2dc5a2df4d4e222c04110cc1f5ac20bd5a1ec7e2b9f83a"}
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.292510 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"95399cf0-2abf-4b19-9106-7f1489de365d","Type":"ContainerDied","Data":"5856ee8c84318b7822d4a408fdfba7e86301a35f27aa57c7f33722e9c82e2e34"}
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.538796 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.586298 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82" path="/var/lib/kubelet/pods/f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82/volumes"
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.620770 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95399cf0-2abf-4b19-9106-7f1489de365d-config-data\") pod \"95399cf0-2abf-4b19-9106-7f1489de365d\" (UID: \"95399cf0-2abf-4b19-9106-7f1489de365d\") "
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.620811 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95399cf0-2abf-4b19-9106-7f1489de365d-sg-core-conf-yaml\") pod \"95399cf0-2abf-4b19-9106-7f1489de365d\" (UID: \"95399cf0-2abf-4b19-9106-7f1489de365d\") "
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.620848 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95399cf0-2abf-4b19-9106-7f1489de365d-combined-ca-bundle\") pod \"95399cf0-2abf-4b19-9106-7f1489de365d\" (UID: \"95399cf0-2abf-4b19-9106-7f1489de365d\") "
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.620880 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95399cf0-2abf-4b19-9106-7f1489de365d-run-httpd\") pod \"95399cf0-2abf-4b19-9106-7f1489de365d\" (UID: \"95399cf0-2abf-4b19-9106-7f1489de365d\") "
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.620912 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/95399cf0-2abf-4b19-9106-7f1489de365d-ceilometer-tls-certs\") pod \"95399cf0-2abf-4b19-9106-7f1489de365d\" (UID: \"95399cf0-2abf-4b19-9106-7f1489de365d\") "
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.620987 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wptd5\" (UniqueName: \"kubernetes.io/projected/95399cf0-2abf-4b19-9106-7f1489de365d-kube-api-access-wptd5\") pod \"95399cf0-2abf-4b19-9106-7f1489de365d\" (UID: \"95399cf0-2abf-4b19-9106-7f1489de365d\") "
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.621025 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95399cf0-2abf-4b19-9106-7f1489de365d-scripts\") pod \"95399cf0-2abf-4b19-9106-7f1489de365d\" (UID: \"95399cf0-2abf-4b19-9106-7f1489de365d\") "
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.621084 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95399cf0-2abf-4b19-9106-7f1489de365d-log-httpd\") pod \"95399cf0-2abf-4b19-9106-7f1489de365d\" (UID: \"95399cf0-2abf-4b19-9106-7f1489de365d\") "
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.622836 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95399cf0-2abf-4b19-9106-7f1489de365d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "95399cf0-2abf-4b19-9106-7f1489de365d" (UID: "95399cf0-2abf-4b19-9106-7f1489de365d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.626065 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95399cf0-2abf-4b19-9106-7f1489de365d-kube-api-access-wptd5" (OuterVolumeSpecName: "kube-api-access-wptd5") pod "95399cf0-2abf-4b19-9106-7f1489de365d" (UID: "95399cf0-2abf-4b19-9106-7f1489de365d"). InnerVolumeSpecName "kube-api-access-wptd5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.626117 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95399cf0-2abf-4b19-9106-7f1489de365d-scripts" (OuterVolumeSpecName: "scripts") pod "95399cf0-2abf-4b19-9106-7f1489de365d" (UID: "95399cf0-2abf-4b19-9106-7f1489de365d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.635530 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95399cf0-2abf-4b19-9106-7f1489de365d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "95399cf0-2abf-4b19-9106-7f1489de365d" (UID: "95399cf0-2abf-4b19-9106-7f1489de365d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.676288 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95399cf0-2abf-4b19-9106-7f1489de365d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "95399cf0-2abf-4b19-9106-7f1489de365d" (UID: "95399cf0-2abf-4b19-9106-7f1489de365d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.685768 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95399cf0-2abf-4b19-9106-7f1489de365d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "95399cf0-2abf-4b19-9106-7f1489de365d" (UID: "95399cf0-2abf-4b19-9106-7f1489de365d"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.697194 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher13c5-account-delete-wlpv5"
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.705348 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95399cf0-2abf-4b19-9106-7f1489de365d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95399cf0-2abf-4b19-9106-7f1489de365d" (UID: "95399cf0-2abf-4b19-9106-7f1489de365d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.722584 4821 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/95399cf0-2abf-4b19-9106-7f1489de365d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.722617 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95399cf0-2abf-4b19-9106-7f1489de365d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.722629 4821 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95399cf0-2abf-4b19-9106-7f1489de365d-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.722640 4821 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/95399cf0-2abf-4b19-9106-7f1489de365d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.722664 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wptd5\" (UniqueName: \"kubernetes.io/projected/95399cf0-2abf-4b19-9106-7f1489de365d-kube-api-access-wptd5\") on node \"crc\" DevicePath \"\""
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.722674 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95399cf0-2abf-4b19-9106-7f1489de365d-scripts\") on node \"crc\" DevicePath \"\""
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.722686 4821 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/95399cf0-2abf-4b19-9106-7f1489de365d-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.740017 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95399cf0-2abf-4b19-9106-7f1489de365d-config-data" (OuterVolumeSpecName: "config-data") pod "95399cf0-2abf-4b19-9106-7f1489de365d" (UID: "95399cf0-2abf-4b19-9106-7f1489de365d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.824387 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crg5r\" (UniqueName: \"kubernetes.io/projected/4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd-kube-api-access-crg5r\") pod \"4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd\" (UID: \"4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd\") "
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.824701 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd-operator-scripts\") pod \"4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd\" (UID: \"4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd\") "
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.825011 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd" (UID: "4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.825303 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95399cf0-2abf-4b19-9106-7f1489de365d-config-data\") on node \"crc\" DevicePath \"\""
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.825408 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.829508 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd-kube-api-access-crg5r" (OuterVolumeSpecName: "kube-api-access-crg5r") pod "4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd" (UID: "4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd"). InnerVolumeSpecName "kube-api-access-crg5r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:01:03 crc kubenswrapper[4821]: I0309 19:01:03.926790 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crg5r\" (UniqueName: \"kubernetes.io/projected/4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd-kube-api-access-crg5r\") on node \"crc\" DevicePath \"\""
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.299833 4821 generic.go:334] "Generic (PLEG): container finished" podID="98d8cd55-a4bc-446d-a770-ed57e35aeccb" containerID="c528c568d3157c250b236ffc8a5c9ca33b177c1bb97944fe39c59d168a215a8a" exitCode=0
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.299921 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-cron-29551381-c5bp4" event={"ID":"98d8cd55-a4bc-446d-a770-ed57e35aeccb","Type":"ContainerDied","Data":"c528c568d3157c250b236ffc8a5c9ca33b177c1bb97944fe39c59d168a215a8a"}
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.301641 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher13c5-account-delete-wlpv5"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.301636 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher13c5-account-delete-wlpv5" event={"ID":"4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd","Type":"ContainerDied","Data":"f2a8093690d20e8cca3e5e2465c5faccd7036f032d2bcb8d1fc099840ca4634f"}
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.301773 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2a8093690d20e8cca3e5e2465c5faccd7036f032d2bcb8d1fc099840ca4634f"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.303992 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"95399cf0-2abf-4b19-9106-7f1489de365d","Type":"ContainerDied","Data":"b3e41ddb375165a342391c989671ac182249a294d8386871e58317d2b37c8260"}
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.304052 4821 scope.go:117] "RemoveContainer" containerID="e809de64d6164d1576f4701075c9609befd0949802c46c0dada9621a77b07c57"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.304010 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.322830 4821 scope.go:117] "RemoveContainer" containerID="66f408ca3542f07a9b783ef8157e77604b3ab128b0d8f427567d1de8560f7821"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.351565 4821 scope.go:117] "RemoveContainer" containerID="ec6d25b74630de1a6a2dc5a2df4d4e222c04110cc1f5ac20bd5a1ec7e2b9f83a"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.388984 4821 scope.go:117] "RemoveContainer" containerID="5856ee8c84318b7822d4a408fdfba7e86301a35f27aa57c7f33722e9c82e2e34"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.398377 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.405872 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.427553 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:01:04 crc kubenswrapper[4821]: E0309 19:01:04.428373 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd" containerName="mariadb-account-delete"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.428464 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd" containerName="mariadb-account-delete"
Mar 09 19:01:04 crc kubenswrapper[4821]: E0309 19:01:04.428678 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82" containerName="watcher-kuttl-api-log"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.429032 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82" containerName="watcher-kuttl-api-log"
Mar 09 19:01:04 crc kubenswrapper[4821]: E0309 19:01:04.429113 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95399cf0-2abf-4b19-9106-7f1489de365d" containerName="proxy-httpd"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.429171 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="95399cf0-2abf-4b19-9106-7f1489de365d" containerName="proxy-httpd"
Mar 09 19:01:04 crc kubenswrapper[4821]: E0309 19:01:04.429250 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95399cf0-2abf-4b19-9106-7f1489de365d" containerName="ceilometer-central-agent"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.429302 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="95399cf0-2abf-4b19-9106-7f1489de365d" containerName="ceilometer-central-agent"
Mar 09 19:01:04 crc kubenswrapper[4821]: E0309 19:01:04.429381 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95399cf0-2abf-4b19-9106-7f1489de365d" containerName="sg-core"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.429433 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="95399cf0-2abf-4b19-9106-7f1489de365d" containerName="sg-core"
Mar 09 19:01:04 crc kubenswrapper[4821]: E0309 19:01:04.429499 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95399cf0-2abf-4b19-9106-7f1489de365d" containerName="ceilometer-notification-agent"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.429583 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="95399cf0-2abf-4b19-9106-7f1489de365d" containerName="ceilometer-notification-agent"
Mar 09 19:01:04 crc kubenswrapper[4821]: E0309 19:01:04.429656 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82" containerName="watcher-api"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.429716 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82" containerName="watcher-api"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.429936 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="95399cf0-2abf-4b19-9106-7f1489de365d" containerName="proxy-httpd"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.430000 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82" containerName="watcher-api"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.430054 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd" containerName="mariadb-account-delete"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.430113 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="95399cf0-2abf-4b19-9106-7f1489de365d" containerName="ceilometer-notification-agent"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.430170 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="95399cf0-2abf-4b19-9106-7f1489de365d" containerName="ceilometer-central-agent"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.430225 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="95399cf0-2abf-4b19-9106-7f1489de365d" containerName="sg-core"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.430278 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7cb9e0d-d75e-4cb1-b8aa-a4dd8b0edd82" containerName="watcher-kuttl-api-log"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.431861 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.432096 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.437729 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.437988 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.441747 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.536852 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44839807-0de1-41d0-9924-8046fe85f1ba-scripts\") pod \"ceilometer-0\" (UID: \"44839807-0de1-41d0-9924-8046fe85f1ba\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.536896 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44839807-0de1-41d0-9924-8046fe85f1ba-config-data\") pod \"ceilometer-0\" (UID: \"44839807-0de1-41d0-9924-8046fe85f1ba\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.536935 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44839807-0de1-41d0-9924-8046fe85f1ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"44839807-0de1-41d0-9924-8046fe85f1ba\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.537058 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j55zc\" (UniqueName: \"kubernetes.io/projected/44839807-0de1-41d0-9924-8046fe85f1ba-kube-api-access-j55zc\") pod \"ceilometer-0\" (UID: \"44839807-0de1-41d0-9924-8046fe85f1ba\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.537167 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/44839807-0de1-41d0-9924-8046fe85f1ba-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"44839807-0de1-41d0-9924-8046fe85f1ba\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.537245 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44839807-0de1-41d0-9924-8046fe85f1ba-log-httpd\") pod \"ceilometer-0\" (UID: \"44839807-0de1-41d0-9924-8046fe85f1ba\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.537302 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/44839807-0de1-41d0-9924-8046fe85f1ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"44839807-0de1-41d0-9924-8046fe85f1ba\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.537353 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44839807-0de1-41d0-9924-8046fe85f1ba-run-httpd\") pod \"ceilometer-0\" (UID: \"44839807-0de1-41d0-9924-8046fe85f1ba\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.638177 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/44839807-0de1-41d0-9924-8046fe85f1ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"44839807-0de1-41d0-9924-8046fe85f1ba\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.638216 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44839807-0de1-41d0-9924-8046fe85f1ba-run-httpd\") pod \"ceilometer-0\" (UID: \"44839807-0de1-41d0-9924-8046fe85f1ba\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.638262 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44839807-0de1-41d0-9924-8046fe85f1ba-scripts\") pod \"ceilometer-0\" (UID: \"44839807-0de1-41d0-9924-8046fe85f1ba\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.638283 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44839807-0de1-41d0-9924-8046fe85f1ba-config-data\") pod \"ceilometer-0\" (UID: \"44839807-0de1-41d0-9924-8046fe85f1ba\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.638355 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44839807-0de1-41d0-9924-8046fe85f1ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"44839807-0de1-41d0-9924-8046fe85f1ba\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.638403 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j55zc\" (UniqueName: \"kubernetes.io/projected/44839807-0de1-41d0-9924-8046fe85f1ba-kube-api-access-j55zc\") pod \"ceilometer-0\" (UID: \"44839807-0de1-41d0-9924-8046fe85f1ba\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.638441 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/44839807-0de1-41d0-9924-8046fe85f1ba-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"44839807-0de1-41d0-9924-8046fe85f1ba\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.638468 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44839807-0de1-41d0-9924-8046fe85f1ba-log-httpd\") pod \"ceilometer-0\" (UID: \"44839807-0de1-41d0-9924-8046fe85f1ba\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.638866 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44839807-0de1-41d0-9924-8046fe85f1ba-run-httpd\") pod \"ceilometer-0\" (UID: \"44839807-0de1-41d0-9924-8046fe85f1ba\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.638897 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44839807-0de1-41d0-9924-8046fe85f1ba-log-httpd\") pod \"ceilometer-0\" (UID: \"44839807-0de1-41d0-9924-8046fe85f1ba\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.643242 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/44839807-0de1-41d0-9924-8046fe85f1ba-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"44839807-0de1-41d0-9924-8046fe85f1ba\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.643430 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44839807-0de1-41d0-9924-8046fe85f1ba-config-data\") pod \"ceilometer-0\" (UID: \"44839807-0de1-41d0-9924-8046fe85f1ba\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.643506 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44839807-0de1-41d0-9924-8046fe85f1ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"44839807-0de1-41d0-9924-8046fe85f1ba\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.645930 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44839807-0de1-41d0-9924-8046fe85f1ba-scripts\") pod \"ceilometer-0\" (UID: \"44839807-0de1-41d0-9924-8046fe85f1ba\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.654072 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/44839807-0de1-41d0-9924-8046fe85f1ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"44839807-0de1-41d0-9924-8046fe85f1ba\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.657267 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j55zc\" (UniqueName: \"kubernetes.io/projected/44839807-0de1-41d0-9924-8046fe85f1ba-kube-api-access-j55zc\") pod \"ceilometer-0\" (UID: \"44839807-0de1-41d0-9924-8046fe85f1ba\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.768955 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.889946 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher13c5-account-delete-wlpv5"]
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.898766 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher13c5-account-delete-wlpv5"]
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.903704 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.984083 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-k7h6t"]
Mar 09 19:01:04 crc kubenswrapper[4821]: E0309 19:01:04.984400 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50cca3dd-5fcd-4577-9442-2952486769ba" containerName="watcher-applier"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.984412 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="50cca3dd-5fcd-4577-9442-2952486769ba" containerName="watcher-applier"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.984556 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="50cca3dd-5fcd-4577-9442-2952486769ba" containerName="watcher-applier"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.985911 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-k7h6t"
Mar 09 19:01:04 crc kubenswrapper[4821]: I0309 19:01:04.994545 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-k7h6t"]
Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.044361 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50cca3dd-5fcd-4577-9442-2952486769ba-logs\") pod \"50cca3dd-5fcd-4577-9442-2952486769ba\" (UID: \"50cca3dd-5fcd-4577-9442-2952486769ba\") "
Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.044414 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nmkc\" (UniqueName: \"kubernetes.io/projected/50cca3dd-5fcd-4577-9442-2952486769ba-kube-api-access-7nmkc\") pod \"50cca3dd-5fcd-4577-9442-2952486769ba\" (UID: \"50cca3dd-5fcd-4577-9442-2952486769ba\") "
Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.044491 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50cca3dd-5fcd-4577-9442-2952486769ba-combined-ca-bundle\") pod \"50cca3dd-5fcd-4577-9442-2952486769ba\" (UID: \"50cca3dd-5fcd-4577-9442-2952486769ba\") "
Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.044555 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50cca3dd-5fcd-4577-9442-2952486769ba-config-data\") pod \"50cca3dd-5fcd-4577-9442-2952486769ba\" (UID: \"50cca3dd-5fcd-4577-9442-2952486769ba\") "
Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.044816 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b8a4147-9d76-4a01-93ef-7f8e0652f1f0-operator-scripts\") pod \"watcher-db-create-k7h6t\" (UID: \"4b8a4147-9d76-4a01-93ef-7f8e0652f1f0\") " pod="watcher-kuttl-default/watcher-db-create-k7h6t"
Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.044877 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wprgx\" (UniqueName: \"kubernetes.io/projected/4b8a4147-9d76-4a01-93ef-7f8e0652f1f0-kube-api-access-wprgx\") pod \"watcher-db-create-k7h6t\" (UID: \"4b8a4147-9d76-4a01-93ef-7f8e0652f1f0\") " pod="watcher-kuttl-default/watcher-db-create-k7h6t"
Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.045884 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50cca3dd-5fcd-4577-9442-2952486769ba-logs" (OuterVolumeSpecName: "logs") pod "50cca3dd-5fcd-4577-9442-2952486769ba" (UID: "50cca3dd-5fcd-4577-9442-2952486769ba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.050351 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50cca3dd-5fcd-4577-9442-2952486769ba-kube-api-access-7nmkc" (OuterVolumeSpecName: "kube-api-access-7nmkc") pod "50cca3dd-5fcd-4577-9442-2952486769ba" (UID: "50cca3dd-5fcd-4577-9442-2952486769ba"). InnerVolumeSpecName "kube-api-access-7nmkc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.097775 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50cca3dd-5fcd-4577-9442-2952486769ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50cca3dd-5fcd-4577-9442-2952486769ba" (UID: "50cca3dd-5fcd-4577-9442-2952486769ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.104396 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50cca3dd-5fcd-4577-9442-2952486769ba-config-data" (OuterVolumeSpecName: "config-data") pod "50cca3dd-5fcd-4577-9442-2952486769ba" (UID: "50cca3dd-5fcd-4577-9442-2952486769ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.113574 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-b4b0-account-create-update-6w6ll"]
Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.114641 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-b4b0-account-create-update-6w6ll"
Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.121228 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret"
Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.122577 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-b4b0-account-create-update-6w6ll"]
Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.145921 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b8a4147-9d76-4a01-93ef-7f8e0652f1f0-operator-scripts\") pod \"watcher-db-create-k7h6t\" (UID: \"4b8a4147-9d76-4a01-93ef-7f8e0652f1f0\") " pod="watcher-kuttl-default/watcher-db-create-k7h6t"
Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.145997 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wprgx\" (UniqueName: \"kubernetes.io/projected/4b8a4147-9d76-4a01-93ef-7f8e0652f1f0-kube-api-access-wprgx\") pod \"watcher-db-create-k7h6t\" (UID: \"4b8a4147-9d76-4a01-93ef-7f8e0652f1f0\") "
pod="watcher-kuttl-default/watcher-db-create-k7h6t" Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.146098 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50cca3dd-5fcd-4577-9442-2952486769ba-logs\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.146109 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nmkc\" (UniqueName: \"kubernetes.io/projected/50cca3dd-5fcd-4577-9442-2952486769ba-kube-api-access-7nmkc\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.146120 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50cca3dd-5fcd-4577-9442-2952486769ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.146129 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50cca3dd-5fcd-4577-9442-2952486769ba-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.146945 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b8a4147-9d76-4a01-93ef-7f8e0652f1f0-operator-scripts\") pod \"watcher-db-create-k7h6t\" (UID: \"4b8a4147-9d76-4a01-93ef-7f8e0652f1f0\") " pod="watcher-kuttl-default/watcher-db-create-k7h6t" Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.163343 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wprgx\" (UniqueName: \"kubernetes.io/projected/4b8a4147-9d76-4a01-93ef-7f8e0652f1f0-kube-api-access-wprgx\") pod \"watcher-db-create-k7h6t\" (UID: \"4b8a4147-9d76-4a01-93ef-7f8e0652f1f0\") " pod="watcher-kuttl-default/watcher-db-create-k7h6t" Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.247038 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl99g\" (UniqueName: \"kubernetes.io/projected/bebf7583-afcc-454d-970d-72dbc3ce7ff9-kube-api-access-tl99g\") pod \"watcher-b4b0-account-create-update-6w6ll\" (UID: \"bebf7583-afcc-454d-970d-72dbc3ce7ff9\") " pod="watcher-kuttl-default/watcher-b4b0-account-create-update-6w6ll" Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.247372 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bebf7583-afcc-454d-970d-72dbc3ce7ff9-operator-scripts\") pod \"watcher-b4b0-account-create-update-6w6ll\" (UID: \"bebf7583-afcc-454d-970d-72dbc3ce7ff9\") " pod="watcher-kuttl-default/watcher-b4b0-account-create-update-6w6ll" Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.304635 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-k7h6t" Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.312480 4821 generic.go:334] "Generic (PLEG): container finished" podID="50cca3dd-5fcd-4577-9442-2952486769ba" containerID="8faf8bdf823e63208b75bbe788979393e29cef8862727db68ef800f0e73fcdca" exitCode=0 Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.312537 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"50cca3dd-5fcd-4577-9442-2952486769ba","Type":"ContainerDied","Data":"8faf8bdf823e63208b75bbe788979393e29cef8862727db68ef800f0e73fcdca"} Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.312562 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"50cca3dd-5fcd-4577-9442-2952486769ba","Type":"ContainerDied","Data":"189094ffbe9c1e67e76e59f46c5db9497e1713c31863ac48666c043dbaeecb47"} Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.312577 4821 scope.go:117] "RemoveContainer" 
containerID="8faf8bdf823e63208b75bbe788979393e29cef8862727db68ef800f0e73fcdca" Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.312671 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.350866 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bebf7583-afcc-454d-970d-72dbc3ce7ff9-operator-scripts\") pod \"watcher-b4b0-account-create-update-6w6ll\" (UID: \"bebf7583-afcc-454d-970d-72dbc3ce7ff9\") " pod="watcher-kuttl-default/watcher-b4b0-account-create-update-6w6ll" Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.350994 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl99g\" (UniqueName: \"kubernetes.io/projected/bebf7583-afcc-454d-970d-72dbc3ce7ff9-kube-api-access-tl99g\") pod \"watcher-b4b0-account-create-update-6w6ll\" (UID: \"bebf7583-afcc-454d-970d-72dbc3ce7ff9\") " pod="watcher-kuttl-default/watcher-b4b0-account-create-update-6w6ll" Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.352369 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bebf7583-afcc-454d-970d-72dbc3ce7ff9-operator-scripts\") pod \"watcher-b4b0-account-create-update-6w6ll\" (UID: \"bebf7583-afcc-454d-970d-72dbc3ce7ff9\") " pod="watcher-kuttl-default/watcher-b4b0-account-create-update-6w6ll" Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.374735 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl99g\" (UniqueName: \"kubernetes.io/projected/bebf7583-afcc-454d-970d-72dbc3ce7ff9-kube-api-access-tl99g\") pod \"watcher-b4b0-account-create-update-6w6ll\" (UID: \"bebf7583-afcc-454d-970d-72dbc3ce7ff9\") " pod="watcher-kuttl-default/watcher-b4b0-account-create-update-6w6ll" Mar 
09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.387022 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.395298 4821 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.441745 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-b4b0-account-create-update-6w6ll" Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.461071 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.467261 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.473097 4821 scope.go:117] "RemoveContainer" containerID="8faf8bdf823e63208b75bbe788979393e29cef8862727db68ef800f0e73fcdca" Mar 09 19:01:05 crc kubenswrapper[4821]: E0309 19:01:05.473659 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8faf8bdf823e63208b75bbe788979393e29cef8862727db68ef800f0e73fcdca\": container with ID starting with 8faf8bdf823e63208b75bbe788979393e29cef8862727db68ef800f0e73fcdca not found: ID does not exist" containerID="8faf8bdf823e63208b75bbe788979393e29cef8862727db68ef800f0e73fcdca" Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.473690 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8faf8bdf823e63208b75bbe788979393e29cef8862727db68ef800f0e73fcdca"} err="failed to get container status \"8faf8bdf823e63208b75bbe788979393e29cef8862727db68ef800f0e73fcdca\": rpc error: code = NotFound desc = could not find container \"8faf8bdf823e63208b75bbe788979393e29cef8862727db68ef800f0e73fcdca\": container with ID 
starting with 8faf8bdf823e63208b75bbe788979393e29cef8862727db68ef800f0e73fcdca not found: ID does not exist" Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.580477 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd" path="/var/lib/kubelet/pods/4be7d3a9-d9b8-432b-9aa1-b17d5d9ec9bd/volumes" Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.585496 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50cca3dd-5fcd-4577-9442-2952486769ba" path="/var/lib/kubelet/pods/50cca3dd-5fcd-4577-9442-2952486769ba/volumes" Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.586004 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95399cf0-2abf-4b19-9106-7f1489de365d" path="/var/lib/kubelet/pods/95399cf0-2abf-4b19-9106-7f1489de365d/volumes" Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.666019 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-cron-29551381-c5bp4" Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.760738 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98d8cd55-a4bc-446d-a770-ed57e35aeccb-config-data\") pod \"98d8cd55-a4bc-446d-a770-ed57e35aeccb\" (UID: \"98d8cd55-a4bc-446d-a770-ed57e35aeccb\") " Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.760802 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/98d8cd55-a4bc-446d-a770-ed57e35aeccb-fernet-keys\") pod \"98d8cd55-a4bc-446d-a770-ed57e35aeccb\" (UID: \"98d8cd55-a4bc-446d-a770-ed57e35aeccb\") " Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.760894 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/98d8cd55-a4bc-446d-a770-ed57e35aeccb-combined-ca-bundle\") pod \"98d8cd55-a4bc-446d-a770-ed57e35aeccb\" (UID: \"98d8cd55-a4bc-446d-a770-ed57e35aeccb\") " Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.760949 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6rnp\" (UniqueName: \"kubernetes.io/projected/98d8cd55-a4bc-446d-a770-ed57e35aeccb-kube-api-access-t6rnp\") pod \"98d8cd55-a4bc-446d-a770-ed57e35aeccb\" (UID: \"98d8cd55-a4bc-446d-a770-ed57e35aeccb\") " Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.764892 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98d8cd55-a4bc-446d-a770-ed57e35aeccb-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "98d8cd55-a4bc-446d-a770-ed57e35aeccb" (UID: "98d8cd55-a4bc-446d-a770-ed57e35aeccb"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.765214 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98d8cd55-a4bc-446d-a770-ed57e35aeccb-kube-api-access-t6rnp" (OuterVolumeSpecName: "kube-api-access-t6rnp") pod "98d8cd55-a4bc-446d-a770-ed57e35aeccb" (UID: "98d8cd55-a4bc-446d-a770-ed57e35aeccb"). InnerVolumeSpecName "kube-api-access-t6rnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.784707 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98d8cd55-a4bc-446d-a770-ed57e35aeccb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98d8cd55-a4bc-446d-a770-ed57e35aeccb" (UID: "98d8cd55-a4bc-446d-a770-ed57e35aeccb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.801958 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98d8cd55-a4bc-446d-a770-ed57e35aeccb-config-data" (OuterVolumeSpecName: "config-data") pod "98d8cd55-a4bc-446d-a770-ed57e35aeccb" (UID: "98d8cd55-a4bc-446d-a770-ed57e35aeccb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.865967 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-k7h6t"] Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.866374 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98d8cd55-a4bc-446d-a770-ed57e35aeccb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.866426 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6rnp\" (UniqueName: \"kubernetes.io/projected/98d8cd55-a4bc-446d-a770-ed57e35aeccb-kube-api-access-t6rnp\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.866457 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98d8cd55-a4bc-446d-a770-ed57e35aeccb-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:05 crc kubenswrapper[4821]: I0309 19:01:05.866483 4821 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/98d8cd55-a4bc-446d-a770-ed57e35aeccb-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:05 crc kubenswrapper[4821]: W0309 19:01:05.867565 4821 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b8a4147_9d76_4a01_93ef_7f8e0652f1f0.slice/crio-31314f7aa812cc321a9527cfd7497c95b2098a090e8ee86fdcc5f71a85ab9183 WatchSource:0}: Error finding container 31314f7aa812cc321a9527cfd7497c95b2098a090e8ee86fdcc5f71a85ab9183: Status 404 returned error can't find the container with id 31314f7aa812cc321a9527cfd7497c95b2098a090e8ee86fdcc5f71a85ab9183 Mar 09 19:01:06 crc kubenswrapper[4821]: I0309 19:01:06.039414 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-b4b0-account-create-update-6w6ll"] Mar 09 19:01:06 crc kubenswrapper[4821]: W0309 19:01:06.042599 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbebf7583_afcc_454d_970d_72dbc3ce7ff9.slice/crio-7b9a40d9a9946937afadb88f14a0f61e5cb5c58a4f27324b2373969abcf71c93 WatchSource:0}: Error finding container 7b9a40d9a9946937afadb88f14a0f61e5cb5c58a4f27324b2373969abcf71c93: Status 404 returned error can't find the container with id 7b9a40d9a9946937afadb88f14a0f61e5cb5c58a4f27324b2373969abcf71c93 Mar 09 19:01:06 crc kubenswrapper[4821]: E0309 19:01:06.142701 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a64034c9d0cf665f5241dcdbeb42195db11d0c511f475c0a6ef9cd114447dd3c" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Mar 09 19:01:06 crc kubenswrapper[4821]: E0309 19:01:06.144564 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a64034c9d0cf665f5241dcdbeb42195db11d0c511f475c0a6ef9cd114447dd3c" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Mar 09 19:01:06 crc kubenswrapper[4821]: 
E0309 19:01:06.147589 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a64034c9d0cf665f5241dcdbeb42195db11d0c511f475c0a6ef9cd114447dd3c" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Mar 09 19:01:06 crc kubenswrapper[4821]: E0309 19:01:06.147666 4821 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e" containerName="watcher-decision-engine" Mar 09 19:01:06 crc kubenswrapper[4821]: I0309 19:01:06.338207 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-b4b0-account-create-update-6w6ll" event={"ID":"bebf7583-afcc-454d-970d-72dbc3ce7ff9","Type":"ContainerStarted","Data":"4044bb81d16ef5a360424fede998c379bf65781aa58d0ff260bde715169f3ee5"} Mar 09 19:01:06 crc kubenswrapper[4821]: I0309 19:01:06.338513 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-b4b0-account-create-update-6w6ll" event={"ID":"bebf7583-afcc-454d-970d-72dbc3ce7ff9","Type":"ContainerStarted","Data":"7b9a40d9a9946937afadb88f14a0f61e5cb5c58a4f27324b2373969abcf71c93"} Mar 09 19:01:06 crc kubenswrapper[4821]: I0309 19:01:06.342031 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"44839807-0de1-41d0-9924-8046fe85f1ba","Type":"ContainerStarted","Data":"e4b51fdcb85b3b26494440d1565769f6f52af812095b97419d6d4b3079c19264"} Mar 09 19:01:06 crc kubenswrapper[4821]: I0309 19:01:06.342056 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"44839807-0de1-41d0-9924-8046fe85f1ba","Type":"ContainerStarted","Data":"b24b9b4c141f50c6ee89851e02665117755424b26f559a2eac3cf3f820071d77"} Mar 09 19:01:06 crc kubenswrapper[4821]: I0309 19:01:06.344120 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-cron-29551381-c5bp4" event={"ID":"98d8cd55-a4bc-446d-a770-ed57e35aeccb","Type":"ContainerDied","Data":"3b5f0e5e000a60a82ed65138da818eac0a4e2fabe5ad3854524e3738c131a7d4"} Mar 09 19:01:06 crc kubenswrapper[4821]: I0309 19:01:06.344141 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b5f0e5e000a60a82ed65138da818eac0a4e2fabe5ad3854524e3738c131a7d4" Mar 09 19:01:06 crc kubenswrapper[4821]: I0309 19:01:06.344199 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-cron-29551381-c5bp4" Mar 09 19:01:06 crc kubenswrapper[4821]: I0309 19:01:06.345600 4821 generic.go:334] "Generic (PLEG): container finished" podID="4b8a4147-9d76-4a01-93ef-7f8e0652f1f0" containerID="2835e603a77c00580d0373cb1e5d2a441cc4a52673a5d27fd6869cbd9bf7be70" exitCode=0 Mar 09 19:01:06 crc kubenswrapper[4821]: I0309 19:01:06.345626 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-k7h6t" event={"ID":"4b8a4147-9d76-4a01-93ef-7f8e0652f1f0","Type":"ContainerDied","Data":"2835e603a77c00580d0373cb1e5d2a441cc4a52673a5d27fd6869cbd9bf7be70"} Mar 09 19:01:06 crc kubenswrapper[4821]: I0309 19:01:06.345641 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-k7h6t" event={"ID":"4b8a4147-9d76-4a01-93ef-7f8e0652f1f0","Type":"ContainerStarted","Data":"31314f7aa812cc321a9527cfd7497c95b2098a090e8ee86fdcc5f71a85ab9183"} Mar 09 19:01:06 crc kubenswrapper[4821]: I0309 19:01:06.357918 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-b4b0-account-create-update-6w6ll" 
podStartSLOduration=1.357897935 podStartE2EDuration="1.357897935s" podCreationTimestamp="2026-03-09 19:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:01:06.352101628 +0000 UTC m=+2203.513477484" watchObservedRunningTime="2026-03-09 19:01:06.357897935 +0000 UTC m=+2203.519273801" Mar 09 19:01:07 crc kubenswrapper[4821]: I0309 19:01:07.384503 4821 generic.go:334] "Generic (PLEG): container finished" podID="bebf7583-afcc-454d-970d-72dbc3ce7ff9" containerID="4044bb81d16ef5a360424fede998c379bf65781aa58d0ff260bde715169f3ee5" exitCode=0 Mar 09 19:01:07 crc kubenswrapper[4821]: I0309 19:01:07.384577 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-b4b0-account-create-update-6w6ll" event={"ID":"bebf7583-afcc-454d-970d-72dbc3ce7ff9","Type":"ContainerDied","Data":"4044bb81d16ef5a360424fede998c379bf65781aa58d0ff260bde715169f3ee5"} Mar 09 19:01:07 crc kubenswrapper[4821]: I0309 19:01:07.388290 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"44839807-0de1-41d0-9924-8046fe85f1ba","Type":"ContainerStarted","Data":"f444349d3d64b2b9f38ed6889adb6764967aaec6b53e37e39d8a1f097ac32ee5"} Mar 09 19:01:07 crc kubenswrapper[4821]: I0309 19:01:07.874580 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-k7h6t" Mar 09 19:01:07 crc kubenswrapper[4821]: I0309 19:01:07.901751 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b8a4147-9d76-4a01-93ef-7f8e0652f1f0-operator-scripts\") pod \"4b8a4147-9d76-4a01-93ef-7f8e0652f1f0\" (UID: \"4b8a4147-9d76-4a01-93ef-7f8e0652f1f0\") " Mar 09 19:01:07 crc kubenswrapper[4821]: I0309 19:01:07.902369 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b8a4147-9d76-4a01-93ef-7f8e0652f1f0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4b8a4147-9d76-4a01-93ef-7f8e0652f1f0" (UID: "4b8a4147-9d76-4a01-93ef-7f8e0652f1f0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 19:01:07 crc kubenswrapper[4821]: I0309 19:01:07.902543 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wprgx\" (UniqueName: \"kubernetes.io/projected/4b8a4147-9d76-4a01-93ef-7f8e0652f1f0-kube-api-access-wprgx\") pod \"4b8a4147-9d76-4a01-93ef-7f8e0652f1f0\" (UID: \"4b8a4147-9d76-4a01-93ef-7f8e0652f1f0\") " Mar 09 19:01:07 crc kubenswrapper[4821]: I0309 19:01:07.903476 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b8a4147-9d76-4a01-93ef-7f8e0652f1f0-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:07 crc kubenswrapper[4821]: I0309 19:01:07.907792 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b8a4147-9d76-4a01-93ef-7f8e0652f1f0-kube-api-access-wprgx" (OuterVolumeSpecName: "kube-api-access-wprgx") pod "4b8a4147-9d76-4a01-93ef-7f8e0652f1f0" (UID: "4b8a4147-9d76-4a01-93ef-7f8e0652f1f0"). InnerVolumeSpecName "kube-api-access-wprgx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:01:08 crc kubenswrapper[4821]: I0309 19:01:08.005422 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wprgx\" (UniqueName: \"kubernetes.io/projected/4b8a4147-9d76-4a01-93ef-7f8e0652f1f0-kube-api-access-wprgx\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:08 crc kubenswrapper[4821]: I0309 19:01:08.396107 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-k7h6t" Mar 09 19:01:08 crc kubenswrapper[4821]: I0309 19:01:08.396110 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-k7h6t" event={"ID":"4b8a4147-9d76-4a01-93ef-7f8e0652f1f0","Type":"ContainerDied","Data":"31314f7aa812cc321a9527cfd7497c95b2098a090e8ee86fdcc5f71a85ab9183"} Mar 09 19:01:08 crc kubenswrapper[4821]: I0309 19:01:08.396284 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31314f7aa812cc321a9527cfd7497c95b2098a090e8ee86fdcc5f71a85ab9183" Mar 09 19:01:08 crc kubenswrapper[4821]: I0309 19:01:08.398486 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"44839807-0de1-41d0-9924-8046fe85f1ba","Type":"ContainerStarted","Data":"8c462cd2e5fac7e9671e24a12f984daa1724efcf4321754db6c2dac9fc97dcb3"} Mar 09 19:01:08 crc kubenswrapper[4821]: I0309 19:01:08.678505 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-b4b0-account-create-update-6w6ll" Mar 09 19:01:08 crc kubenswrapper[4821]: I0309 19:01:08.719289 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tl99g\" (UniqueName: \"kubernetes.io/projected/bebf7583-afcc-454d-970d-72dbc3ce7ff9-kube-api-access-tl99g\") pod \"bebf7583-afcc-454d-970d-72dbc3ce7ff9\" (UID: \"bebf7583-afcc-454d-970d-72dbc3ce7ff9\") " Mar 09 19:01:08 crc kubenswrapper[4821]: I0309 19:01:08.719478 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bebf7583-afcc-454d-970d-72dbc3ce7ff9-operator-scripts\") pod \"bebf7583-afcc-454d-970d-72dbc3ce7ff9\" (UID: \"bebf7583-afcc-454d-970d-72dbc3ce7ff9\") " Mar 09 19:01:08 crc kubenswrapper[4821]: I0309 19:01:08.719957 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bebf7583-afcc-454d-970d-72dbc3ce7ff9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bebf7583-afcc-454d-970d-72dbc3ce7ff9" (UID: "bebf7583-afcc-454d-970d-72dbc3ce7ff9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 19:01:08 crc kubenswrapper[4821]: I0309 19:01:08.723775 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bebf7583-afcc-454d-970d-72dbc3ce7ff9-kube-api-access-tl99g" (OuterVolumeSpecName: "kube-api-access-tl99g") pod "bebf7583-afcc-454d-970d-72dbc3ce7ff9" (UID: "bebf7583-afcc-454d-970d-72dbc3ce7ff9"). InnerVolumeSpecName "kube-api-access-tl99g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:01:08 crc kubenswrapper[4821]: I0309 19:01:08.822092 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tl99g\" (UniqueName: \"kubernetes.io/projected/bebf7583-afcc-454d-970d-72dbc3ce7ff9-kube-api-access-tl99g\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:08 crc kubenswrapper[4821]: I0309 19:01:08.822158 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bebf7583-afcc-454d-970d-72dbc3ce7ff9-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:09 crc kubenswrapper[4821]: I0309 19:01:09.407965 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-b4b0-account-create-update-6w6ll" event={"ID":"bebf7583-afcc-454d-970d-72dbc3ce7ff9","Type":"ContainerDied","Data":"7b9a40d9a9946937afadb88f14a0f61e5cb5c58a4f27324b2373969abcf71c93"} Mar 09 19:01:09 crc kubenswrapper[4821]: I0309 19:01:09.408029 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b9a40d9a9946937afadb88f14a0f61e5cb5c58a4f27324b2373969abcf71c93" Mar 09 19:01:09 crc kubenswrapper[4821]: I0309 19:01:09.408103 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-b4b0-account-create-update-6w6ll" Mar 09 19:01:10 crc kubenswrapper[4821]: I0309 19:01:10.416879 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"44839807-0de1-41d0-9924-8046fe85f1ba","Type":"ContainerStarted","Data":"14bc531671015a69e6d336242b059bf6d5003a6e486e20210fdd43f0d9234118"} Mar 09 19:01:10 crc kubenswrapper[4821]: I0309 19:01:10.417497 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:01:10 crc kubenswrapper[4821]: I0309 19:01:10.443480 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.9129014 podStartE2EDuration="6.443458211s" podCreationTimestamp="2026-03-09 19:01:04 +0000 UTC" firstStartedPulling="2026-03-09 19:01:05.395015376 +0000 UTC m=+2202.556391232" lastFinishedPulling="2026-03-09 19:01:09.925572177 +0000 UTC m=+2207.086948043" observedRunningTime="2026-03-09 19:01:10.441033805 +0000 UTC m=+2207.602409681" watchObservedRunningTime="2026-03-09 19:01:10.443458211 +0000 UTC m=+2207.604834077" Mar 09 19:01:10 crc kubenswrapper[4821]: I0309 19:01:10.527343 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bfmx7"] Mar 09 19:01:10 crc kubenswrapper[4821]: E0309 19:01:10.527640 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b8a4147-9d76-4a01-93ef-7f8e0652f1f0" containerName="mariadb-database-create" Mar 09 19:01:10 crc kubenswrapper[4821]: I0309 19:01:10.527654 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b8a4147-9d76-4a01-93ef-7f8e0652f1f0" containerName="mariadb-database-create" Mar 09 19:01:10 crc kubenswrapper[4821]: E0309 19:01:10.527666 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98d8cd55-a4bc-446d-a770-ed57e35aeccb" containerName="keystone-cron" Mar 09 
19:01:10 crc kubenswrapper[4821]: I0309 19:01:10.527673 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="98d8cd55-a4bc-446d-a770-ed57e35aeccb" containerName="keystone-cron" Mar 09 19:01:10 crc kubenswrapper[4821]: E0309 19:01:10.527689 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bebf7583-afcc-454d-970d-72dbc3ce7ff9" containerName="mariadb-account-create-update" Mar 09 19:01:10 crc kubenswrapper[4821]: I0309 19:01:10.527695 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="bebf7583-afcc-454d-970d-72dbc3ce7ff9" containerName="mariadb-account-create-update" Mar 09 19:01:10 crc kubenswrapper[4821]: I0309 19:01:10.527836 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="bebf7583-afcc-454d-970d-72dbc3ce7ff9" containerName="mariadb-account-create-update" Mar 09 19:01:10 crc kubenswrapper[4821]: I0309 19:01:10.527849 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b8a4147-9d76-4a01-93ef-7f8e0652f1f0" containerName="mariadb-database-create" Mar 09 19:01:10 crc kubenswrapper[4821]: I0309 19:01:10.527859 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="98d8cd55-a4bc-446d-a770-ed57e35aeccb" containerName="keystone-cron" Mar 09 19:01:10 crc kubenswrapper[4821]: I0309 19:01:10.528360 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfmx7" Mar 09 19:01:10 crc kubenswrapper[4821]: I0309 19:01:10.530258 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-pgq5g" Mar 09 19:01:10 crc kubenswrapper[4821]: I0309 19:01:10.530621 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Mar 09 19:01:10 crc kubenswrapper[4821]: I0309 19:01:10.541882 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bfmx7"] Mar 09 19:01:10 crc kubenswrapper[4821]: I0309 19:01:10.549740 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84cf524-375a-4f76-98af-e8df84af5bce-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-bfmx7\" (UID: \"d84cf524-375a-4f76-98af-e8df84af5bce\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfmx7" Mar 09 19:01:10 crc kubenswrapper[4821]: I0309 19:01:10.549947 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d84cf524-375a-4f76-98af-e8df84af5bce-config-data\") pod \"watcher-kuttl-db-sync-bfmx7\" (UID: \"d84cf524-375a-4f76-98af-e8df84af5bce\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfmx7" Mar 09 19:01:10 crc kubenswrapper[4821]: I0309 19:01:10.550045 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d84cf524-375a-4f76-98af-e8df84af5bce-db-sync-config-data\") pod \"watcher-kuttl-db-sync-bfmx7\" (UID: \"d84cf524-375a-4f76-98af-e8df84af5bce\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfmx7" Mar 09 19:01:10 crc kubenswrapper[4821]: I0309 19:01:10.550124 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpvxk\" (UniqueName: \"kubernetes.io/projected/d84cf524-375a-4f76-98af-e8df84af5bce-kube-api-access-zpvxk\") pod \"watcher-kuttl-db-sync-bfmx7\" (UID: \"d84cf524-375a-4f76-98af-e8df84af5bce\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfmx7" Mar 09 19:01:10 crc kubenswrapper[4821]: I0309 19:01:10.652407 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84cf524-375a-4f76-98af-e8df84af5bce-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-bfmx7\" (UID: \"d84cf524-375a-4f76-98af-e8df84af5bce\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfmx7" Mar 09 19:01:10 crc kubenswrapper[4821]: I0309 19:01:10.652459 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d84cf524-375a-4f76-98af-e8df84af5bce-config-data\") pod \"watcher-kuttl-db-sync-bfmx7\" (UID: \"d84cf524-375a-4f76-98af-e8df84af5bce\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfmx7" Mar 09 19:01:10 crc kubenswrapper[4821]: I0309 19:01:10.652588 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d84cf524-375a-4f76-98af-e8df84af5bce-db-sync-config-data\") pod \"watcher-kuttl-db-sync-bfmx7\" (UID: \"d84cf524-375a-4f76-98af-e8df84af5bce\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfmx7" Mar 09 19:01:10 crc kubenswrapper[4821]: I0309 19:01:10.653428 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpvxk\" (UniqueName: \"kubernetes.io/projected/d84cf524-375a-4f76-98af-e8df84af5bce-kube-api-access-zpvxk\") pod \"watcher-kuttl-db-sync-bfmx7\" (UID: \"d84cf524-375a-4f76-98af-e8df84af5bce\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfmx7" Mar 09 19:01:10 crc kubenswrapper[4821]: I0309 
19:01:10.658054 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d84cf524-375a-4f76-98af-e8df84af5bce-db-sync-config-data\") pod \"watcher-kuttl-db-sync-bfmx7\" (UID: \"d84cf524-375a-4f76-98af-e8df84af5bce\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfmx7" Mar 09 19:01:10 crc kubenswrapper[4821]: I0309 19:01:10.658378 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d84cf524-375a-4f76-98af-e8df84af5bce-config-data\") pod \"watcher-kuttl-db-sync-bfmx7\" (UID: \"d84cf524-375a-4f76-98af-e8df84af5bce\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfmx7" Mar 09 19:01:10 crc kubenswrapper[4821]: I0309 19:01:10.658968 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84cf524-375a-4f76-98af-e8df84af5bce-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-bfmx7\" (UID: \"d84cf524-375a-4f76-98af-e8df84af5bce\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfmx7" Mar 09 19:01:10 crc kubenswrapper[4821]: I0309 19:01:10.686651 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpvxk\" (UniqueName: \"kubernetes.io/projected/d84cf524-375a-4f76-98af-e8df84af5bce-kube-api-access-zpvxk\") pod \"watcher-kuttl-db-sync-bfmx7\" (UID: \"d84cf524-375a-4f76-98af-e8df84af5bce\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfmx7" Mar 09 19:01:10 crc kubenswrapper[4821]: I0309 19:01:10.850115 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfmx7" Mar 09 19:01:11 crc kubenswrapper[4821]: I0309 19:01:11.199240 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bfmx7"] Mar 09 19:01:11 crc kubenswrapper[4821]: W0309 19:01:11.212292 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd84cf524_375a_4f76_98af_e8df84af5bce.slice/crio-aab067782d70fe140377eaafe7f931f72de3358fbe71eeadc53c619e3bac41fe WatchSource:0}: Error finding container aab067782d70fe140377eaafe7f931f72de3358fbe71eeadc53c619e3bac41fe: Status 404 returned error can't find the container with id aab067782d70fe140377eaafe7f931f72de3358fbe71eeadc53c619e3bac41fe Mar 09 19:01:11 crc kubenswrapper[4821]: I0309 19:01:11.443025 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfmx7" event={"ID":"d84cf524-375a-4f76-98af-e8df84af5bce","Type":"ContainerStarted","Data":"2c007f1b37e1a4b2a647f73c675579904e0c29c7b18540ff7834e371e3714b55"} Mar 09 19:01:11 crc kubenswrapper[4821]: I0309 19:01:11.443405 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfmx7" event={"ID":"d84cf524-375a-4f76-98af-e8df84af5bce","Type":"ContainerStarted","Data":"aab067782d70fe140377eaafe7f931f72de3358fbe71eeadc53c619e3bac41fe"} Mar 09 19:01:11 crc kubenswrapper[4821]: I0309 19:01:11.461819 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfmx7" podStartSLOduration=1.4617966949999999 podStartE2EDuration="1.461796695s" podCreationTimestamp="2026-03-09 19:01:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:01:11.455781542 +0000 UTC m=+2208.617157398" watchObservedRunningTime="2026-03-09 
19:01:11.461796695 +0000 UTC m=+2208.623172551" Mar 09 19:01:14 crc kubenswrapper[4821]: I0309 19:01:14.463687 4821 generic.go:334] "Generic (PLEG): container finished" podID="d84cf524-375a-4f76-98af-e8df84af5bce" containerID="2c007f1b37e1a4b2a647f73c675579904e0c29c7b18540ff7834e371e3714b55" exitCode=0 Mar 09 19:01:14 crc kubenswrapper[4821]: I0309 19:01:14.463776 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfmx7" event={"ID":"d84cf524-375a-4f76-98af-e8df84af5bce","Type":"ContainerDied","Data":"2c007f1b37e1a4b2a647f73c675579904e0c29c7b18540ff7834e371e3714b55"} Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.475423 4821 generic.go:334] "Generic (PLEG): container finished" podID="18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e" containerID="a64034c9d0cf665f5241dcdbeb42195db11d0c511f475c0a6ef9cd114447dd3c" exitCode=0 Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.475672 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e","Type":"ContainerDied","Data":"a64034c9d0cf665f5241dcdbeb42195db11d0c511f475c0a6ef9cd114447dd3c"} Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.602851 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.659492 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84qdd\" (UniqueName: \"kubernetes.io/projected/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e-kube-api-access-84qdd\") pod \"18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e\" (UID: \"18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e\") " Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.659555 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e-combined-ca-bundle\") pod \"18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e\" (UID: \"18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e\") " Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.659664 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e-logs\") pod \"18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e\" (UID: \"18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e\") " Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.659703 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e-custom-prometheus-ca\") pod \"18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e\" (UID: \"18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e\") " Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.659743 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e-config-data\") pod \"18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e\" (UID: \"18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e\") " Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.672609 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e-logs" (OuterVolumeSpecName: "logs") pod "18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e" (UID: "18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.676259 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e-kube-api-access-84qdd" (OuterVolumeSpecName: "kube-api-access-84qdd") pod "18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e" (UID: "18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e"). InnerVolumeSpecName "kube-api-access-84qdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.687076 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e" (UID: "18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.706025 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e" (UID: "18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.761702 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e-config-data" (OuterVolumeSpecName: "config-data") pod "18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e" (UID: "18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.761902 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e-config-data\") pod \"18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e\" (UID: \"18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e\") " Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.762209 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84qdd\" (UniqueName: \"kubernetes.io/projected/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e-kube-api-access-84qdd\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.762228 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.762239 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e-logs\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.762250 4821 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:15 crc kubenswrapper[4821]: W0309 19:01:15.762358 4821 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e/volumes/kubernetes.io~secret/config-data Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.762371 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e-config-data" (OuterVolumeSpecName: 
"config-data") pod "18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e" (UID: "18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.770442 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfmx7" Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.862680 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d84cf524-375a-4f76-98af-e8df84af5bce-db-sync-config-data\") pod \"d84cf524-375a-4f76-98af-e8df84af5bce\" (UID: \"d84cf524-375a-4f76-98af-e8df84af5bce\") " Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.862785 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84cf524-375a-4f76-98af-e8df84af5bce-combined-ca-bundle\") pod \"d84cf524-375a-4f76-98af-e8df84af5bce\" (UID: \"d84cf524-375a-4f76-98af-e8df84af5bce\") " Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.862810 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpvxk\" (UniqueName: \"kubernetes.io/projected/d84cf524-375a-4f76-98af-e8df84af5bce-kube-api-access-zpvxk\") pod \"d84cf524-375a-4f76-98af-e8df84af5bce\" (UID: \"d84cf524-375a-4f76-98af-e8df84af5bce\") " Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.862860 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d84cf524-375a-4f76-98af-e8df84af5bce-config-data\") pod \"d84cf524-375a-4f76-98af-e8df84af5bce\" (UID: \"d84cf524-375a-4f76-98af-e8df84af5bce\") " Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.863218 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.869413 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d84cf524-375a-4f76-98af-e8df84af5bce-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d84cf524-375a-4f76-98af-e8df84af5bce" (UID: "d84cf524-375a-4f76-98af-e8df84af5bce"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.871164 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d84cf524-375a-4f76-98af-e8df84af5bce-kube-api-access-zpvxk" (OuterVolumeSpecName: "kube-api-access-zpvxk") pod "d84cf524-375a-4f76-98af-e8df84af5bce" (UID: "d84cf524-375a-4f76-98af-e8df84af5bce"). InnerVolumeSpecName "kube-api-access-zpvxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.892151 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d84cf524-375a-4f76-98af-e8df84af5bce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d84cf524-375a-4f76-98af-e8df84af5bce" (UID: "d84cf524-375a-4f76-98af-e8df84af5bce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.900856 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d84cf524-375a-4f76-98af-e8df84af5bce-config-data" (OuterVolumeSpecName: "config-data") pod "d84cf524-375a-4f76-98af-e8df84af5bce" (UID: "d84cf524-375a-4f76-98af-e8df84af5bce"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.965522 4821 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d84cf524-375a-4f76-98af-e8df84af5bce-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.965569 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84cf524-375a-4f76-98af-e8df84af5bce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.965582 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpvxk\" (UniqueName: \"kubernetes.io/projected/d84cf524-375a-4f76-98af-e8df84af5bce-kube-api-access-zpvxk\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:15 crc kubenswrapper[4821]: I0309 19:01:15.965597 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d84cf524-375a-4f76-98af-e8df84af5bce-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.488730 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e","Type":"ContainerDied","Data":"0811823c5b643d4b423178192b26d6fd94b356e500611edf24485e58949c1e96"} Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.488797 4821 scope.go:117] "RemoveContainer" containerID="a64034c9d0cf665f5241dcdbeb42195db11d0c511f475c0a6ef9cd114447dd3c" Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.488826 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.497664 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfmx7" event={"ID":"d84cf524-375a-4f76-98af-e8df84af5bce","Type":"ContainerDied","Data":"aab067782d70fe140377eaafe7f931f72de3358fbe71eeadc53c619e3bac41fe"} Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.497704 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aab067782d70fe140377eaafe7f931f72de3358fbe71eeadc53c619e3bac41fe" Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.498080 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfmx7" Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.537471 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.557233 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.740663 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:01:16 crc kubenswrapper[4821]: E0309 19:01:16.741160 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e" containerName="watcher-decision-engine" Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.741187 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e" containerName="watcher-decision-engine" Mar 09 19:01:16 crc kubenswrapper[4821]: E0309 19:01:16.741241 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d84cf524-375a-4f76-98af-e8df84af5bce" containerName="watcher-kuttl-db-sync" Mar 09 19:01:16 crc 
kubenswrapper[4821]: I0309 19:01:16.741254 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="d84cf524-375a-4f76-98af-e8df84af5bce" containerName="watcher-kuttl-db-sync" Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.741589 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="d84cf524-375a-4f76-98af-e8df84af5bce" containerName="watcher-kuttl-db-sync" Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.741621 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e" containerName="watcher-decision-engine" Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.743082 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.746565 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-pgq5g" Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.746717 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.753587 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.759855 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.760786 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.799429 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.817554 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.890448 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.892143 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.900726 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d127e3-eae7-40d5-a478-56998172856d-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"d2d127e3-eae7-40d5-a478-56998172856d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.900775 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5-logs\") pod \"watcher-kuttl-api-0\" (UID: \"b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.900725 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.900831 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.900861 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2d127e3-eae7-40d5-a478-56998172856d-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"d2d127e3-eae7-40d5-a478-56998172856d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.900890 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.900908 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4dq4\" (UniqueName: \"kubernetes.io/projected/b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5-kube-api-access-n4dq4\") pod \"watcher-kuttl-api-0\" (UID: \"b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.900922 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2d127e3-eae7-40d5-a478-56998172856d-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"d2d127e3-eae7-40d5-a478-56998172856d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.900947 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-8rt65\" (UniqueName: \"kubernetes.io/projected/d2d127e3-eae7-40d5-a478-56998172856d-kube-api-access-8rt65\") pod \"watcher-kuttl-applier-0\" (UID: \"d2d127e3-eae7-40d5-a478-56998172856d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.901011 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:16 crc kubenswrapper[4821]: I0309 19:01:16.901361 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.002694 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.002754 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql54d\" (UniqueName: \"kubernetes.io/projected/b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f-kube-api-access-ql54d\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.002791 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rt65\" (UniqueName: 
\"kubernetes.io/projected/d2d127e3-eae7-40d5-a478-56998172856d-kube-api-access-8rt65\") pod \"watcher-kuttl-applier-0\" (UID: \"d2d127e3-eae7-40d5-a478-56998172856d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.002846 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.002874 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d127e3-eae7-40d5-a478-56998172856d-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"d2d127e3-eae7-40d5-a478-56998172856d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.002904 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.002934 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5-logs\") pod \"watcher-kuttl-api-0\" (UID: \"b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.002994 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.003054 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.003088 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.003206 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2d127e3-eae7-40d5-a478-56998172856d-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"d2d127e3-eae7-40d5-a478-56998172856d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.003347 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.003390 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4dq4\" (UniqueName: 
\"kubernetes.io/projected/b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5-kube-api-access-n4dq4\") pod \"watcher-kuttl-api-0\" (UID: \"b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.003414 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2d127e3-eae7-40d5-a478-56998172856d-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"d2d127e3-eae7-40d5-a478-56998172856d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.003493 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5-logs\") pod \"watcher-kuttl-api-0\" (UID: \"b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.003558 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2d127e3-eae7-40d5-a478-56998172856d-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"d2d127e3-eae7-40d5-a478-56998172856d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.007028 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.007876 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: 
\"b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.011867 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2d127e3-eae7-40d5-a478-56998172856d-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"d2d127e3-eae7-40d5-a478-56998172856d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.015105 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d127e3-eae7-40d5-a478-56998172856d-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"d2d127e3-eae7-40d5-a478-56998172856d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.023398 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4dq4\" (UniqueName: \"kubernetes.io/projected/b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5-kube-api-access-n4dq4\") pod \"watcher-kuttl-api-0\" (UID: \"b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.023433 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.023913 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rt65\" (UniqueName: \"kubernetes.io/projected/d2d127e3-eae7-40d5-a478-56998172856d-kube-api-access-8rt65\") pod \"watcher-kuttl-applier-0\" (UID: \"d2d127e3-eae7-40d5-a478-56998172856d\") " 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.104968 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.105069 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.105127 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.105152 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql54d\" (UniqueName: \"kubernetes.io/projected/b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f-kube-api-access-ql54d\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.105255 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: 
\"b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.105496 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.108268 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.109049 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.109281 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.110361 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.116800 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.138796 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql54d\" (UniqueName: \"kubernetes.io/projected/b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f-kube-api-access-ql54d\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.217658 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.564391 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e" path="/var/lib/kubelet/pods/18c92d7c-9dd7-43eb-98c2-15f30ac6bc7e/volumes" Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.618974 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.680567 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:01:17 crc kubenswrapper[4821]: I0309 19:01:17.691750 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:01:18 crc kubenswrapper[4821]: I0309 19:01:18.525745 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"d2d127e3-eae7-40d5-a478-56998172856d","Type":"ContainerStarted","Data":"714371f9e9265a4fc2c699f74397398fa56660489645b5c9671461fe4e56c3a0"} Mar 09 19:01:18 crc kubenswrapper[4821]: I0309 19:01:18.526070 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
event={"ID":"d2d127e3-eae7-40d5-a478-56998172856d","Type":"ContainerStarted","Data":"0c43eaeebf62c9536dba9c79eb235ab9885b654685d86fb395de3c97fda0e661"} Mar 09 19:01:18 crc kubenswrapper[4821]: I0309 19:01:18.531626 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5","Type":"ContainerStarted","Data":"515ca6d0e9378d2d1e19ee9d0b77bccd8816b128e337364da698a6bb14ad9f1a"} Mar 09 19:01:18 crc kubenswrapper[4821]: I0309 19:01:18.531738 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5","Type":"ContainerStarted","Data":"cd7441ee82a19cfe25b072937e1600b604dd8f11169a5e3efb8865670af148e0"} Mar 09 19:01:18 crc kubenswrapper[4821]: I0309 19:01:18.531773 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5","Type":"ContainerStarted","Data":"7aabf441a81a1d4db5c933e20a8aae0111373d69fcc81dfe112ebcce0a4db7e6"} Mar 09 19:01:18 crc kubenswrapper[4821]: I0309 19:01:18.531859 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:18 crc kubenswrapper[4821]: I0309 19:01:18.536783 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f","Type":"ContainerStarted","Data":"fb71eaaadc8586b0ebaaedb224e8a987ab78d938324cfb563a3c6f0a17790411"} Mar 09 19:01:18 crc kubenswrapper[4821]: I0309 19:01:18.536820 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f","Type":"ContainerStarted","Data":"5c8191f93acb0af6fd0897ceea93aaf91debbcba971c85b36a0131b86d2f7982"} Mar 09 19:01:18 crc kubenswrapper[4821]: I0309 
19:01:18.546776 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.546759933 podStartE2EDuration="2.546759933s" podCreationTimestamp="2026-03-09 19:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:01:18.545674274 +0000 UTC m=+2215.707050130" watchObservedRunningTime="2026-03-09 19:01:18.546759933 +0000 UTC m=+2215.708135789" Mar 09 19:01:18 crc kubenswrapper[4821]: I0309 19:01:18.567234 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.5672163980000002 podStartE2EDuration="2.567216398s" podCreationTimestamp="2026-03-09 19:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:01:18.56179092 +0000 UTC m=+2215.723166776" watchObservedRunningTime="2026-03-09 19:01:18.567216398 +0000 UTC m=+2215.728592264" Mar 09 19:01:18 crc kubenswrapper[4821]: I0309 19:01:18.586929 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.5869121919999998 podStartE2EDuration="2.586912192s" podCreationTimestamp="2026-03-09 19:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:01:18.581920386 +0000 UTC m=+2215.743296232" watchObservedRunningTime="2026-03-09 19:01:18.586912192 +0000 UTC m=+2215.748288048" Mar 09 19:01:20 crc kubenswrapper[4821]: I0309 19:01:20.943643 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:22 crc kubenswrapper[4821]: I0309 19:01:22.109407 4821 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:22 crc kubenswrapper[4821]: I0309 19:01:22.117867 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:27 crc kubenswrapper[4821]: I0309 19:01:27.109128 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:27 crc kubenswrapper[4821]: I0309 19:01:27.117051 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:27 crc kubenswrapper[4821]: I0309 19:01:27.120567 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:27 crc kubenswrapper[4821]: I0309 19:01:27.147148 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:27 crc kubenswrapper[4821]: I0309 19:01:27.218573 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:27 crc kubenswrapper[4821]: I0309 19:01:27.245214 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:27 crc kubenswrapper[4821]: I0309 19:01:27.628711 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:27 crc kubenswrapper[4821]: I0309 19:01:27.633380 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:27 crc kubenswrapper[4821]: I0309 19:01:27.654492 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:27 crc 
kubenswrapper[4821]: I0309 19:01:27.682262 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:29 crc kubenswrapper[4821]: I0309 19:01:29.633760 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:01:29 crc kubenswrapper[4821]: I0309 19:01:29.635204 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="44839807-0de1-41d0-9924-8046fe85f1ba" containerName="ceilometer-central-agent" containerID="cri-o://e4b51fdcb85b3b26494440d1565769f6f52af812095b97419d6d4b3079c19264" gracePeriod=30 Mar 09 19:01:29 crc kubenswrapper[4821]: I0309 19:01:29.635288 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="44839807-0de1-41d0-9924-8046fe85f1ba" containerName="proxy-httpd" containerID="cri-o://14bc531671015a69e6d336242b059bf6d5003a6e486e20210fdd43f0d9234118" gracePeriod=30 Mar 09 19:01:29 crc kubenswrapper[4821]: I0309 19:01:29.635288 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="44839807-0de1-41d0-9924-8046fe85f1ba" containerName="ceilometer-notification-agent" containerID="cri-o://f444349d3d64b2b9f38ed6889adb6764967aaec6b53e37e39d8a1f097ac32ee5" gracePeriod=30 Mar 09 19:01:29 crc kubenswrapper[4821]: I0309 19:01:29.635270 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="44839807-0de1-41d0-9924-8046fe85f1ba" containerName="sg-core" containerID="cri-o://8c462cd2e5fac7e9671e24a12f984daa1724efcf4321754db6c2dac9fc97dcb3" gracePeriod=30 Mar 09 19:01:29 crc kubenswrapper[4821]: I0309 19:01:29.644336 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="44839807-0de1-41d0-9924-8046fe85f1ba" 
containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.162:3000/\": read tcp 10.217.0.2:57988->10.217.0.162:3000: read: connection reset by peer" Mar 09 19:01:29 crc kubenswrapper[4821]: I0309 19:01:29.964523 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bfmx7"] Mar 09 19:01:29 crc kubenswrapper[4821]: I0309 19:01:29.971581 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bfmx7"] Mar 09 19:01:30 crc kubenswrapper[4821]: I0309 19:01:30.029376 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcherb4b0-account-delete-hvmwg"] Mar 09 19:01:30 crc kubenswrapper[4821]: I0309 19:01:30.032009 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherb4b0-account-delete-hvmwg" Mar 09 19:01:30 crc kubenswrapper[4821]: I0309 19:01:30.035496 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:01:30 crc kubenswrapper[4821]: I0309 19:01:30.049617 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherb4b0-account-delete-hvmwg"] Mar 09 19:01:30 crc kubenswrapper[4821]: I0309 19:01:30.095947 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:01:30 crc kubenswrapper[4821]: I0309 19:01:30.096157 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="d2d127e3-eae7-40d5-a478-56998172856d" containerName="watcher-applier" containerID="cri-o://714371f9e9265a4fc2c699f74397398fa56660489645b5c9671461fe4e56c3a0" gracePeriod=30 Mar 09 19:01:30 crc kubenswrapper[4821]: I0309 19:01:30.136962 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:01:30 crc 
kubenswrapper[4821]: I0309 19:01:30.137181 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5" containerName="watcher-kuttl-api-log" containerID="cri-o://cd7441ee82a19cfe25b072937e1600b604dd8f11169a5e3efb8865670af148e0" gracePeriod=30 Mar 09 19:01:30 crc kubenswrapper[4821]: I0309 19:01:30.137655 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5" containerName="watcher-api" containerID="cri-o://515ca6d0e9378d2d1e19ee9d0b77bccd8816b128e337364da698a6bb14ad9f1a" gracePeriod=30 Mar 09 19:01:30 crc kubenswrapper[4821]: I0309 19:01:30.179703 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4ae3f52-cc09-4f71-8acf-664c5c9171d2-operator-scripts\") pod \"watcherb4b0-account-delete-hvmwg\" (UID: \"c4ae3f52-cc09-4f71-8acf-664c5c9171d2\") " pod="watcher-kuttl-default/watcherb4b0-account-delete-hvmwg" Mar 09 19:01:30 crc kubenswrapper[4821]: I0309 19:01:30.179762 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87d4q\" (UniqueName: \"kubernetes.io/projected/c4ae3f52-cc09-4f71-8acf-664c5c9171d2-kube-api-access-87d4q\") pod \"watcherb4b0-account-delete-hvmwg\" (UID: \"c4ae3f52-cc09-4f71-8acf-664c5c9171d2\") " pod="watcher-kuttl-default/watcherb4b0-account-delete-hvmwg" Mar 09 19:01:30 crc kubenswrapper[4821]: I0309 19:01:30.280750 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4ae3f52-cc09-4f71-8acf-664c5c9171d2-operator-scripts\") pod \"watcherb4b0-account-delete-hvmwg\" (UID: \"c4ae3f52-cc09-4f71-8acf-664c5c9171d2\") " 
pod="watcher-kuttl-default/watcherb4b0-account-delete-hvmwg" Mar 09 19:01:30 crc kubenswrapper[4821]: I0309 19:01:30.280800 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87d4q\" (UniqueName: \"kubernetes.io/projected/c4ae3f52-cc09-4f71-8acf-664c5c9171d2-kube-api-access-87d4q\") pod \"watcherb4b0-account-delete-hvmwg\" (UID: \"c4ae3f52-cc09-4f71-8acf-664c5c9171d2\") " pod="watcher-kuttl-default/watcherb4b0-account-delete-hvmwg" Mar 09 19:01:30 crc kubenswrapper[4821]: I0309 19:01:30.281739 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4ae3f52-cc09-4f71-8acf-664c5c9171d2-operator-scripts\") pod \"watcherb4b0-account-delete-hvmwg\" (UID: \"c4ae3f52-cc09-4f71-8acf-664c5c9171d2\") " pod="watcher-kuttl-default/watcherb4b0-account-delete-hvmwg" Mar 09 19:01:30 crc kubenswrapper[4821]: I0309 19:01:30.301934 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87d4q\" (UniqueName: \"kubernetes.io/projected/c4ae3f52-cc09-4f71-8acf-664c5c9171d2-kube-api-access-87d4q\") pod \"watcherb4b0-account-delete-hvmwg\" (UID: \"c4ae3f52-cc09-4f71-8acf-664c5c9171d2\") " pod="watcher-kuttl-default/watcherb4b0-account-delete-hvmwg" Mar 09 19:01:30 crc kubenswrapper[4821]: I0309 19:01:30.368876 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherb4b0-account-delete-hvmwg" Mar 09 19:01:30 crc kubenswrapper[4821]: I0309 19:01:30.699688 4821 generic.go:334] "Generic (PLEG): container finished" podID="b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5" containerID="cd7441ee82a19cfe25b072937e1600b604dd8f11169a5e3efb8865670af148e0" exitCode=143 Mar 09 19:01:30 crc kubenswrapper[4821]: I0309 19:01:30.699792 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5","Type":"ContainerDied","Data":"cd7441ee82a19cfe25b072937e1600b604dd8f11169a5e3efb8865670af148e0"} Mar 09 19:01:30 crc kubenswrapper[4821]: I0309 19:01:30.728729 4821 generic.go:334] "Generic (PLEG): container finished" podID="44839807-0de1-41d0-9924-8046fe85f1ba" containerID="14bc531671015a69e6d336242b059bf6d5003a6e486e20210fdd43f0d9234118" exitCode=0 Mar 09 19:01:30 crc kubenswrapper[4821]: I0309 19:01:30.728775 4821 generic.go:334] "Generic (PLEG): container finished" podID="44839807-0de1-41d0-9924-8046fe85f1ba" containerID="8c462cd2e5fac7e9671e24a12f984daa1724efcf4321754db6c2dac9fc97dcb3" exitCode=2 Mar 09 19:01:30 crc kubenswrapper[4821]: I0309 19:01:30.728783 4821 generic.go:334] "Generic (PLEG): container finished" podID="44839807-0de1-41d0-9924-8046fe85f1ba" containerID="e4b51fdcb85b3b26494440d1565769f6f52af812095b97419d6d4b3079c19264" exitCode=0 Mar 09 19:01:30 crc kubenswrapper[4821]: I0309 19:01:30.729018 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f" containerName="watcher-decision-engine" containerID="cri-o://fb71eaaadc8586b0ebaaedb224e8a987ab78d938324cfb563a3c6f0a17790411" gracePeriod=30 Mar 09 19:01:30 crc kubenswrapper[4821]: I0309 19:01:30.729345 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"44839807-0de1-41d0-9924-8046fe85f1ba","Type":"ContainerDied","Data":"14bc531671015a69e6d336242b059bf6d5003a6e486e20210fdd43f0d9234118"} Mar 09 19:01:30 crc kubenswrapper[4821]: I0309 19:01:30.729377 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"44839807-0de1-41d0-9924-8046fe85f1ba","Type":"ContainerDied","Data":"8c462cd2e5fac7e9671e24a12f984daa1724efcf4321754db6c2dac9fc97dcb3"} Mar 09 19:01:30 crc kubenswrapper[4821]: I0309 19:01:30.729391 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"44839807-0de1-41d0-9924-8046fe85f1ba","Type":"ContainerDied","Data":"e4b51fdcb85b3b26494440d1565769f6f52af812095b97419d6d4b3079c19264"} Mar 09 19:01:30 crc kubenswrapper[4821]: I0309 19:01:30.929093 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherb4b0-account-delete-hvmwg"] Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.551496 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.563571 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d84cf524-375a-4f76-98af-e8df84af5bce" path="/var/lib/kubelet/pods/d84cf524-375a-4f76-98af-e8df84af5bce/volumes" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.567656 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.707871 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44839807-0de1-41d0-9924-8046fe85f1ba-log-httpd\") pod \"44839807-0de1-41d0-9924-8046fe85f1ba\" (UID: \"44839807-0de1-41d0-9924-8046fe85f1ba\") " Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.708024 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44839807-0de1-41d0-9924-8046fe85f1ba-run-httpd\") pod \"44839807-0de1-41d0-9924-8046fe85f1ba\" (UID: \"44839807-0de1-41d0-9924-8046fe85f1ba\") " Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.708061 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j55zc\" (UniqueName: \"kubernetes.io/projected/44839807-0de1-41d0-9924-8046fe85f1ba-kube-api-access-j55zc\") pod \"44839807-0de1-41d0-9924-8046fe85f1ba\" (UID: \"44839807-0de1-41d0-9924-8046fe85f1ba\") " Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.708092 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44839807-0de1-41d0-9924-8046fe85f1ba-config-data\") pod \"44839807-0de1-41d0-9924-8046fe85f1ba\" (UID: \"44839807-0de1-41d0-9924-8046fe85f1ba\") " Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.708127 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5-logs\") pod \"b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5\" (UID: \"b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5\") " Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.708157 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5-combined-ca-bundle\") pod \"b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5\" (UID: \"b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5\") " Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.708177 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44839807-0de1-41d0-9924-8046fe85f1ba-combined-ca-bundle\") pod \"44839807-0de1-41d0-9924-8046fe85f1ba\" (UID: \"44839807-0de1-41d0-9924-8046fe85f1ba\") " Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.708220 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44839807-0de1-41d0-9924-8046fe85f1ba-scripts\") pod \"44839807-0de1-41d0-9924-8046fe85f1ba\" (UID: \"44839807-0de1-41d0-9924-8046fe85f1ba\") " Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.708245 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5-config-data\") pod \"b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5\" (UID: \"b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5\") " Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.708266 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5-custom-prometheus-ca\") pod \"b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5\" (UID: \"b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5\") " Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.708299 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/44839807-0de1-41d0-9924-8046fe85f1ba-ceilometer-tls-certs\") pod \"44839807-0de1-41d0-9924-8046fe85f1ba\" (UID: \"44839807-0de1-41d0-9924-8046fe85f1ba\") " Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 
19:01:31.708339 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4dq4\" (UniqueName: \"kubernetes.io/projected/b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5-kube-api-access-n4dq4\") pod \"b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5\" (UID: \"b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5\") " Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.708375 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/44839807-0de1-41d0-9924-8046fe85f1ba-sg-core-conf-yaml\") pod \"44839807-0de1-41d0-9924-8046fe85f1ba\" (UID: \"44839807-0de1-41d0-9924-8046fe85f1ba\") " Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.708370 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44839807-0de1-41d0-9924-8046fe85f1ba-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "44839807-0de1-41d0-9924-8046fe85f1ba" (UID: "44839807-0de1-41d0-9924-8046fe85f1ba"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.708619 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44839807-0de1-41d0-9924-8046fe85f1ba-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "44839807-0de1-41d0-9924-8046fe85f1ba" (UID: "44839807-0de1-41d0-9924-8046fe85f1ba"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.708724 4821 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44839807-0de1-41d0-9924-8046fe85f1ba-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.708737 4821 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44839807-0de1-41d0-9924-8046fe85f1ba-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.708909 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5-logs" (OuterVolumeSpecName: "logs") pod "b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5" (UID: "b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.713525 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44839807-0de1-41d0-9924-8046fe85f1ba-kube-api-access-j55zc" (OuterVolumeSpecName: "kube-api-access-j55zc") pod "44839807-0de1-41d0-9924-8046fe85f1ba" (UID: "44839807-0de1-41d0-9924-8046fe85f1ba"). InnerVolumeSpecName "kube-api-access-j55zc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.713690 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44839807-0de1-41d0-9924-8046fe85f1ba-scripts" (OuterVolumeSpecName: "scripts") pod "44839807-0de1-41d0-9924-8046fe85f1ba" (UID: "44839807-0de1-41d0-9924-8046fe85f1ba"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.715464 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5-kube-api-access-n4dq4" (OuterVolumeSpecName: "kube-api-access-n4dq4") pod "b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5" (UID: "b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5"). InnerVolumeSpecName "kube-api-access-n4dq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.736607 4821 generic.go:334] "Generic (PLEG): container finished" podID="c4ae3f52-cc09-4f71-8acf-664c5c9171d2" containerID="ac6eb78880d48e7b7279e7cf48b50a405c8dd3d505267ccda45204365c9f3d51" exitCode=0 Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.736663 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherb4b0-account-delete-hvmwg" event={"ID":"c4ae3f52-cc09-4f71-8acf-664c5c9171d2","Type":"ContainerDied","Data":"ac6eb78880d48e7b7279e7cf48b50a405c8dd3d505267ccda45204365c9f3d51"} Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.736687 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherb4b0-account-delete-hvmwg" event={"ID":"c4ae3f52-cc09-4f71-8acf-664c5c9171d2","Type":"ContainerStarted","Data":"d88c9fcc5d3c5017f910f8bc2888858c5c08eb8b306cde2b527d520140ff0ece"} Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.738924 4821 generic.go:334] "Generic (PLEG): container finished" podID="44839807-0de1-41d0-9924-8046fe85f1ba" containerID="f444349d3d64b2b9f38ed6889adb6764967aaec6b53e37e39d8a1f097ac32ee5" exitCode=0 Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.738975 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"44839807-0de1-41d0-9924-8046fe85f1ba","Type":"ContainerDied","Data":"f444349d3d64b2b9f38ed6889adb6764967aaec6b53e37e39d8a1f097ac32ee5"} Mar 
09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.739000 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"44839807-0de1-41d0-9924-8046fe85f1ba","Type":"ContainerDied","Data":"b24b9b4c141f50c6ee89851e02665117755424b26f559a2eac3cf3f820071d77"} Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.739015 4821 scope.go:117] "RemoveContainer" containerID="14bc531671015a69e6d336242b059bf6d5003a6e486e20210fdd43f0d9234118" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.739130 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.741595 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5" (UID: "b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.753717 4821 generic.go:334] "Generic (PLEG): container finished" podID="b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5" containerID="515ca6d0e9378d2d1e19ee9d0b77bccd8816b128e337364da698a6bb14ad9f1a" exitCode=0 Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.753750 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5","Type":"ContainerDied","Data":"515ca6d0e9378d2d1e19ee9d0b77bccd8816b128e337364da698a6bb14ad9f1a"} Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.753769 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5","Type":"ContainerDied","Data":"7aabf441a81a1d4db5c933e20a8aae0111373d69fcc81dfe112ebcce0a4db7e6"} Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.753825 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.780595 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44839807-0de1-41d0-9924-8046fe85f1ba-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "44839807-0de1-41d0-9924-8046fe85f1ba" (UID: "44839807-0de1-41d0-9924-8046fe85f1ba"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.781754 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5" (UID: "b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.782583 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5-config-data" (OuterVolumeSpecName: "config-data") pod "b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5" (UID: "b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.787896 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44839807-0de1-41d0-9924-8046fe85f1ba-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "44839807-0de1-41d0-9924-8046fe85f1ba" (UID: "44839807-0de1-41d0-9924-8046fe85f1ba"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.806600 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44839807-0de1-41d0-9924-8046fe85f1ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "44839807-0de1-41d0-9924-8046fe85f1ba" (UID: "44839807-0de1-41d0-9924-8046fe85f1ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.808755 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44839807-0de1-41d0-9924-8046fe85f1ba-config-data" (OuterVolumeSpecName: "config-data") pod "44839807-0de1-41d0-9924-8046fe85f1ba" (UID: "44839807-0de1-41d0-9924-8046fe85f1ba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.810257 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4dq4\" (UniqueName: \"kubernetes.io/projected/b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5-kube-api-access-n4dq4\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.810298 4821 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/44839807-0de1-41d0-9924-8046fe85f1ba-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.810311 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j55zc\" (UniqueName: \"kubernetes.io/projected/44839807-0de1-41d0-9924-8046fe85f1ba-kube-api-access-j55zc\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.810350 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44839807-0de1-41d0-9924-8046fe85f1ba-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.810360 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5-logs\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.810369 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.810377 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44839807-0de1-41d0-9924-8046fe85f1ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 
19:01:31.810385 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44839807-0de1-41d0-9924-8046fe85f1ba-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.810395 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.810422 4821 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.810432 4821 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/44839807-0de1-41d0-9924-8046fe85f1ba-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.869821 4821 scope.go:117] "RemoveContainer" containerID="8c462cd2e5fac7e9671e24a12f984daa1724efcf4321754db6c2dac9fc97dcb3" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.886879 4821 scope.go:117] "RemoveContainer" containerID="f444349d3d64b2b9f38ed6889adb6764967aaec6b53e37e39d8a1f097ac32ee5" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.903400 4821 scope.go:117] "RemoveContainer" containerID="e4b51fdcb85b3b26494440d1565769f6f52af812095b97419d6d4b3079c19264" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.939628 4821 scope.go:117] "RemoveContainer" containerID="14bc531671015a69e6d336242b059bf6d5003a6e486e20210fdd43f0d9234118" Mar 09 19:01:31 crc kubenswrapper[4821]: E0309 19:01:31.940051 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14bc531671015a69e6d336242b059bf6d5003a6e486e20210fdd43f0d9234118\": 
container with ID starting with 14bc531671015a69e6d336242b059bf6d5003a6e486e20210fdd43f0d9234118 not found: ID does not exist" containerID="14bc531671015a69e6d336242b059bf6d5003a6e486e20210fdd43f0d9234118" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.940100 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14bc531671015a69e6d336242b059bf6d5003a6e486e20210fdd43f0d9234118"} err="failed to get container status \"14bc531671015a69e6d336242b059bf6d5003a6e486e20210fdd43f0d9234118\": rpc error: code = NotFound desc = could not find container \"14bc531671015a69e6d336242b059bf6d5003a6e486e20210fdd43f0d9234118\": container with ID starting with 14bc531671015a69e6d336242b059bf6d5003a6e486e20210fdd43f0d9234118 not found: ID does not exist" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.940124 4821 scope.go:117] "RemoveContainer" containerID="8c462cd2e5fac7e9671e24a12f984daa1724efcf4321754db6c2dac9fc97dcb3" Mar 09 19:01:31 crc kubenswrapper[4821]: E0309 19:01:31.940438 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c462cd2e5fac7e9671e24a12f984daa1724efcf4321754db6c2dac9fc97dcb3\": container with ID starting with 8c462cd2e5fac7e9671e24a12f984daa1724efcf4321754db6c2dac9fc97dcb3 not found: ID does not exist" containerID="8c462cd2e5fac7e9671e24a12f984daa1724efcf4321754db6c2dac9fc97dcb3" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.940480 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c462cd2e5fac7e9671e24a12f984daa1724efcf4321754db6c2dac9fc97dcb3"} err="failed to get container status \"8c462cd2e5fac7e9671e24a12f984daa1724efcf4321754db6c2dac9fc97dcb3\": rpc error: code = NotFound desc = could not find container \"8c462cd2e5fac7e9671e24a12f984daa1724efcf4321754db6c2dac9fc97dcb3\": container with ID starting with 
8c462cd2e5fac7e9671e24a12f984daa1724efcf4321754db6c2dac9fc97dcb3 not found: ID does not exist" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.940509 4821 scope.go:117] "RemoveContainer" containerID="f444349d3d64b2b9f38ed6889adb6764967aaec6b53e37e39d8a1f097ac32ee5" Mar 09 19:01:31 crc kubenswrapper[4821]: E0309 19:01:31.940818 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f444349d3d64b2b9f38ed6889adb6764967aaec6b53e37e39d8a1f097ac32ee5\": container with ID starting with f444349d3d64b2b9f38ed6889adb6764967aaec6b53e37e39d8a1f097ac32ee5 not found: ID does not exist" containerID="f444349d3d64b2b9f38ed6889adb6764967aaec6b53e37e39d8a1f097ac32ee5" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.940847 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f444349d3d64b2b9f38ed6889adb6764967aaec6b53e37e39d8a1f097ac32ee5"} err="failed to get container status \"f444349d3d64b2b9f38ed6889adb6764967aaec6b53e37e39d8a1f097ac32ee5\": rpc error: code = NotFound desc = could not find container \"f444349d3d64b2b9f38ed6889adb6764967aaec6b53e37e39d8a1f097ac32ee5\": container with ID starting with f444349d3d64b2b9f38ed6889adb6764967aaec6b53e37e39d8a1f097ac32ee5 not found: ID does not exist" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.940860 4821 scope.go:117] "RemoveContainer" containerID="e4b51fdcb85b3b26494440d1565769f6f52af812095b97419d6d4b3079c19264" Mar 09 19:01:31 crc kubenswrapper[4821]: E0309 19:01:31.941150 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4b51fdcb85b3b26494440d1565769f6f52af812095b97419d6d4b3079c19264\": container with ID starting with e4b51fdcb85b3b26494440d1565769f6f52af812095b97419d6d4b3079c19264 not found: ID does not exist" containerID="e4b51fdcb85b3b26494440d1565769f6f52af812095b97419d6d4b3079c19264" Mar 09 19:01:31 crc 
kubenswrapper[4821]: I0309 19:01:31.941179 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4b51fdcb85b3b26494440d1565769f6f52af812095b97419d6d4b3079c19264"} err="failed to get container status \"e4b51fdcb85b3b26494440d1565769f6f52af812095b97419d6d4b3079c19264\": rpc error: code = NotFound desc = could not find container \"e4b51fdcb85b3b26494440d1565769f6f52af812095b97419d6d4b3079c19264\": container with ID starting with e4b51fdcb85b3b26494440d1565769f6f52af812095b97419d6d4b3079c19264 not found: ID does not exist" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.941197 4821 scope.go:117] "RemoveContainer" containerID="515ca6d0e9378d2d1e19ee9d0b77bccd8816b128e337364da698a6bb14ad9f1a" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.958039 4821 scope.go:117] "RemoveContainer" containerID="cd7441ee82a19cfe25b072937e1600b604dd8f11169a5e3efb8865670af148e0" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.976502 4821 scope.go:117] "RemoveContainer" containerID="515ca6d0e9378d2d1e19ee9d0b77bccd8816b128e337364da698a6bb14ad9f1a" Mar 09 19:01:31 crc kubenswrapper[4821]: E0309 19:01:31.976927 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"515ca6d0e9378d2d1e19ee9d0b77bccd8816b128e337364da698a6bb14ad9f1a\": container with ID starting with 515ca6d0e9378d2d1e19ee9d0b77bccd8816b128e337364da698a6bb14ad9f1a not found: ID does not exist" containerID="515ca6d0e9378d2d1e19ee9d0b77bccd8816b128e337364da698a6bb14ad9f1a" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.976961 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"515ca6d0e9378d2d1e19ee9d0b77bccd8816b128e337364da698a6bb14ad9f1a"} err="failed to get container status \"515ca6d0e9378d2d1e19ee9d0b77bccd8816b128e337364da698a6bb14ad9f1a\": rpc error: code = NotFound desc = could not find container 
\"515ca6d0e9378d2d1e19ee9d0b77bccd8816b128e337364da698a6bb14ad9f1a\": container with ID starting with 515ca6d0e9378d2d1e19ee9d0b77bccd8816b128e337364da698a6bb14ad9f1a not found: ID does not exist" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.976985 4821 scope.go:117] "RemoveContainer" containerID="cd7441ee82a19cfe25b072937e1600b604dd8f11169a5e3efb8865670af148e0" Mar 09 19:01:31 crc kubenswrapper[4821]: E0309 19:01:31.977510 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd7441ee82a19cfe25b072937e1600b604dd8f11169a5e3efb8865670af148e0\": container with ID starting with cd7441ee82a19cfe25b072937e1600b604dd8f11169a5e3efb8865670af148e0 not found: ID does not exist" containerID="cd7441ee82a19cfe25b072937e1600b604dd8f11169a5e3efb8865670af148e0" Mar 09 19:01:31 crc kubenswrapper[4821]: I0309 19:01:31.977532 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd7441ee82a19cfe25b072937e1600b604dd8f11169a5e3efb8865670af148e0"} err="failed to get container status \"cd7441ee82a19cfe25b072937e1600b604dd8f11169a5e3efb8865670af148e0\": rpc error: code = NotFound desc = could not find container \"cd7441ee82a19cfe25b072937e1600b604dd8f11169a5e3efb8865670af148e0\": container with ID starting with cd7441ee82a19cfe25b072937e1600b604dd8f11169a5e3efb8865670af148e0 not found: ID does not exist" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.078203 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.097197 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.109083 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:01:32 crc kubenswrapper[4821]: E0309 19:01:32.109482 4821 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="44839807-0de1-41d0-9924-8046fe85f1ba" containerName="ceilometer-central-agent" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.109500 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="44839807-0de1-41d0-9924-8046fe85f1ba" containerName="ceilometer-central-agent" Mar 09 19:01:32 crc kubenswrapper[4821]: E0309 19:01:32.109517 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5" containerName="watcher-kuttl-api-log" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.109525 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5" containerName="watcher-kuttl-api-log" Mar 09 19:01:32 crc kubenswrapper[4821]: E0309 19:01:32.109542 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44839807-0de1-41d0-9924-8046fe85f1ba" containerName="proxy-httpd" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.109549 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="44839807-0de1-41d0-9924-8046fe85f1ba" containerName="proxy-httpd" Mar 09 19:01:32 crc kubenswrapper[4821]: E0309 19:01:32.109561 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44839807-0de1-41d0-9924-8046fe85f1ba" containerName="ceilometer-notification-agent" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.109567 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="44839807-0de1-41d0-9924-8046fe85f1ba" containerName="ceilometer-notification-agent" Mar 09 19:01:32 crc kubenswrapper[4821]: E0309 19:01:32.109579 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44839807-0de1-41d0-9924-8046fe85f1ba" containerName="sg-core" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.109585 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="44839807-0de1-41d0-9924-8046fe85f1ba" containerName="sg-core" Mar 09 19:01:32 crc kubenswrapper[4821]: E0309 19:01:32.109603 4821 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5" containerName="watcher-api" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.109609 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5" containerName="watcher-api" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.109776 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="44839807-0de1-41d0-9924-8046fe85f1ba" containerName="ceilometer-notification-agent" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.109793 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5" containerName="watcher-api" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.109802 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="44839807-0de1-41d0-9924-8046fe85f1ba" containerName="sg-core" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.109814 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="44839807-0de1-41d0-9924-8046fe85f1ba" containerName="ceilometer-central-agent" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.109826 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="44839807-0de1-41d0-9924-8046fe85f1ba" containerName="proxy-httpd" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.109837 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5" containerName="watcher-kuttl-api-log" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.111431 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.115888 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.120795 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.121111 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.121228 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.121946 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:01:32 crc kubenswrapper[4821]: E0309 19:01:32.128133 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="714371f9e9265a4fc2c699f74397398fa56660489645b5c9671461fe4e56c3a0" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 09 19:01:32 crc kubenswrapper[4821]: E0309 19:01:32.131257 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="714371f9e9265a4fc2c699f74397398fa56660489645b5c9671461fe4e56c3a0" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.131718 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:01:32 crc kubenswrapper[4821]: E0309 19:01:32.133763 4821 log.go:32] "ExecSync cmd from runtime service 
failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="714371f9e9265a4fc2c699f74397398fa56660489645b5c9671461fe4e56c3a0" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 09 19:01:32 crc kubenswrapper[4821]: E0309 19:01:32.133846 4821 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="d2d127e3-eae7-40d5-a478-56998172856d" containerName="watcher-applier" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.216446 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c5476f2-009e-4270-9579-1de380ae27bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8c5476f2-009e-4270-9579-1de380ae27bd\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.216517 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wrqj\" (UniqueName: \"kubernetes.io/projected/8c5476f2-009e-4270-9579-1de380ae27bd-kube-api-access-5wrqj\") pod \"ceilometer-0\" (UID: \"8c5476f2-009e-4270-9579-1de380ae27bd\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.216549 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c5476f2-009e-4270-9579-1de380ae27bd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8c5476f2-009e-4270-9579-1de380ae27bd\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.216603 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c5476f2-009e-4270-9579-1de380ae27bd-config-data\") pod \"ceilometer-0\" (UID: \"8c5476f2-009e-4270-9579-1de380ae27bd\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.216630 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c5476f2-009e-4270-9579-1de380ae27bd-run-httpd\") pod \"ceilometer-0\" (UID: \"8c5476f2-009e-4270-9579-1de380ae27bd\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.216648 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c5476f2-009e-4270-9579-1de380ae27bd-scripts\") pod \"ceilometer-0\" (UID: \"8c5476f2-009e-4270-9579-1de380ae27bd\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.216675 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c5476f2-009e-4270-9579-1de380ae27bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8c5476f2-009e-4270-9579-1de380ae27bd\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.216705 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c5476f2-009e-4270-9579-1de380ae27bd-log-httpd\") pod \"ceilometer-0\" (UID: \"8c5476f2-009e-4270-9579-1de380ae27bd\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.318206 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/8c5476f2-009e-4270-9579-1de380ae27bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8c5476f2-009e-4270-9579-1de380ae27bd\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.318259 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wrqj\" (UniqueName: \"kubernetes.io/projected/8c5476f2-009e-4270-9579-1de380ae27bd-kube-api-access-5wrqj\") pod \"ceilometer-0\" (UID: \"8c5476f2-009e-4270-9579-1de380ae27bd\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.318281 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c5476f2-009e-4270-9579-1de380ae27bd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8c5476f2-009e-4270-9579-1de380ae27bd\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.318980 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c5476f2-009e-4270-9579-1de380ae27bd-config-data\") pod \"ceilometer-0\" (UID: \"8c5476f2-009e-4270-9579-1de380ae27bd\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.319010 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c5476f2-009e-4270-9579-1de380ae27bd-run-httpd\") pod \"ceilometer-0\" (UID: \"8c5476f2-009e-4270-9579-1de380ae27bd\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.319301 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c5476f2-009e-4270-9579-1de380ae27bd-scripts\") pod \"ceilometer-0\" (UID: \"8c5476f2-009e-4270-9579-1de380ae27bd\") " 
pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.319352 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c5476f2-009e-4270-9579-1de380ae27bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8c5476f2-009e-4270-9579-1de380ae27bd\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.319371 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c5476f2-009e-4270-9579-1de380ae27bd-log-httpd\") pod \"ceilometer-0\" (UID: \"8c5476f2-009e-4270-9579-1de380ae27bd\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.319548 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c5476f2-009e-4270-9579-1de380ae27bd-run-httpd\") pod \"ceilometer-0\" (UID: \"8c5476f2-009e-4270-9579-1de380ae27bd\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.319782 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c5476f2-009e-4270-9579-1de380ae27bd-log-httpd\") pod \"ceilometer-0\" (UID: \"8c5476f2-009e-4270-9579-1de380ae27bd\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.322977 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c5476f2-009e-4270-9579-1de380ae27bd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8c5476f2-009e-4270-9579-1de380ae27bd\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.324476 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c5476f2-009e-4270-9579-1de380ae27bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8c5476f2-009e-4270-9579-1de380ae27bd\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.332000 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c5476f2-009e-4270-9579-1de380ae27bd-config-data\") pod \"ceilometer-0\" (UID: \"8c5476f2-009e-4270-9579-1de380ae27bd\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.338257 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wrqj\" (UniqueName: \"kubernetes.io/projected/8c5476f2-009e-4270-9579-1de380ae27bd-kube-api-access-5wrqj\") pod \"ceilometer-0\" (UID: \"8c5476f2-009e-4270-9579-1de380ae27bd\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.340181 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c5476f2-009e-4270-9579-1de380ae27bd-scripts\") pod \"ceilometer-0\" (UID: \"8c5476f2-009e-4270-9579-1de380ae27bd\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.342251 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c5476f2-009e-4270-9579-1de380ae27bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8c5476f2-009e-4270-9579-1de380ae27bd\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.432916 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.779796 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:01:32 crc kubenswrapper[4821]: I0309 19:01:32.920409 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:01:32 crc kubenswrapper[4821]: W0309 19:01:32.936256 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c5476f2_009e_4270_9579_1de380ae27bd.slice/crio-fcacd45e2f5d8c9b41ea64b7b777acad0f381444dee18bb19aab999cd3bdfc3c WatchSource:0}: Error finding container fcacd45e2f5d8c9b41ea64b7b777acad0f381444dee18bb19aab999cd3bdfc3c: Status 404 returned error can't find the container with id fcacd45e2f5d8c9b41ea64b7b777acad0f381444dee18bb19aab999cd3bdfc3c Mar 09 19:01:33 crc kubenswrapper[4821]: I0309 19:01:33.109073 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherb4b0-account-delete-hvmwg" Mar 09 19:01:33 crc kubenswrapper[4821]: I0309 19:01:33.264397 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87d4q\" (UniqueName: \"kubernetes.io/projected/c4ae3f52-cc09-4f71-8acf-664c5c9171d2-kube-api-access-87d4q\") pod \"c4ae3f52-cc09-4f71-8acf-664c5c9171d2\" (UID: \"c4ae3f52-cc09-4f71-8acf-664c5c9171d2\") " Mar 09 19:01:33 crc kubenswrapper[4821]: I0309 19:01:33.264473 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4ae3f52-cc09-4f71-8acf-664c5c9171d2-operator-scripts\") pod \"c4ae3f52-cc09-4f71-8acf-664c5c9171d2\" (UID: \"c4ae3f52-cc09-4f71-8acf-664c5c9171d2\") " Mar 09 19:01:33 crc kubenswrapper[4821]: I0309 19:01:33.265203 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4ae3f52-cc09-4f71-8acf-664c5c9171d2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c4ae3f52-cc09-4f71-8acf-664c5c9171d2" (UID: "c4ae3f52-cc09-4f71-8acf-664c5c9171d2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 19:01:33 crc kubenswrapper[4821]: I0309 19:01:33.271125 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4ae3f52-cc09-4f71-8acf-664c5c9171d2-kube-api-access-87d4q" (OuterVolumeSpecName: "kube-api-access-87d4q") pod "c4ae3f52-cc09-4f71-8acf-664c5c9171d2" (UID: "c4ae3f52-cc09-4f71-8acf-664c5c9171d2"). InnerVolumeSpecName "kube-api-access-87d4q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:01:33 crc kubenswrapper[4821]: I0309 19:01:33.366402 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87d4q\" (UniqueName: \"kubernetes.io/projected/c4ae3f52-cc09-4f71-8acf-664c5c9171d2-kube-api-access-87d4q\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:33 crc kubenswrapper[4821]: I0309 19:01:33.366432 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4ae3f52-cc09-4f71-8acf-664c5c9171d2-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:33 crc kubenswrapper[4821]: I0309 19:01:33.567236 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44839807-0de1-41d0-9924-8046fe85f1ba" path="/var/lib/kubelet/pods/44839807-0de1-41d0-9924-8046fe85f1ba/volumes" Mar 09 19:01:33 crc kubenswrapper[4821]: I0309 19:01:33.568200 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5" path="/var/lib/kubelet/pods/b2718bd5-8edd-4cd4-b796-7c6c0f34a1d5/volumes" Mar 09 19:01:33 crc kubenswrapper[4821]: I0309 19:01:33.796157 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherb4b0-account-delete-hvmwg" event={"ID":"c4ae3f52-cc09-4f71-8acf-664c5c9171d2","Type":"ContainerDied","Data":"d88c9fcc5d3c5017f910f8bc2888858c5c08eb8b306cde2b527d520140ff0ece"} Mar 09 19:01:33 crc kubenswrapper[4821]: I0309 19:01:33.796188 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d88c9fcc5d3c5017f910f8bc2888858c5c08eb8b306cde2b527d520140ff0ece" Mar 09 19:01:33 crc kubenswrapper[4821]: I0309 19:01:33.796241 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherb4b0-account-delete-hvmwg" Mar 09 19:01:33 crc kubenswrapper[4821]: I0309 19:01:33.797685 4821 generic.go:334] "Generic (PLEG): container finished" podID="d2d127e3-eae7-40d5-a478-56998172856d" containerID="714371f9e9265a4fc2c699f74397398fa56660489645b5c9671461fe4e56c3a0" exitCode=0 Mar 09 19:01:33 crc kubenswrapper[4821]: I0309 19:01:33.797725 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"d2d127e3-eae7-40d5-a478-56998172856d","Type":"ContainerDied","Data":"714371f9e9265a4fc2c699f74397398fa56660489645b5c9671461fe4e56c3a0"} Mar 09 19:01:33 crc kubenswrapper[4821]: I0309 19:01:33.800520 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8c5476f2-009e-4270-9579-1de380ae27bd","Type":"ContainerStarted","Data":"a4673eb3e335ffe2f276cb913ead91bb9eda689466e633c5fb0e10a193d0f3d5"} Mar 09 19:01:33 crc kubenswrapper[4821]: I0309 19:01:33.800547 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8c5476f2-009e-4270-9579-1de380ae27bd","Type":"ContainerStarted","Data":"fcacd45e2f5d8c9b41ea64b7b777acad0f381444dee18bb19aab999cd3bdfc3c"} Mar 09 19:01:33 crc kubenswrapper[4821]: I0309 19:01:33.982356 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:34 crc kubenswrapper[4821]: I0309 19:01:34.078914 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2d127e3-eae7-40d5-a478-56998172856d-logs\") pod \"d2d127e3-eae7-40d5-a478-56998172856d\" (UID: \"d2d127e3-eae7-40d5-a478-56998172856d\") " Mar 09 19:01:34 crc kubenswrapper[4821]: I0309 19:01:34.079178 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2d127e3-eae7-40d5-a478-56998172856d-config-data\") pod \"d2d127e3-eae7-40d5-a478-56998172856d\" (UID: \"d2d127e3-eae7-40d5-a478-56998172856d\") " Mar 09 19:01:34 crc kubenswrapper[4821]: I0309 19:01:34.079278 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rt65\" (UniqueName: \"kubernetes.io/projected/d2d127e3-eae7-40d5-a478-56998172856d-kube-api-access-8rt65\") pod \"d2d127e3-eae7-40d5-a478-56998172856d\" (UID: \"d2d127e3-eae7-40d5-a478-56998172856d\") " Mar 09 19:01:34 crc kubenswrapper[4821]: I0309 19:01:34.079327 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d127e3-eae7-40d5-a478-56998172856d-combined-ca-bundle\") pod \"d2d127e3-eae7-40d5-a478-56998172856d\" (UID: \"d2d127e3-eae7-40d5-a478-56998172856d\") " Mar 09 19:01:34 crc kubenswrapper[4821]: I0309 19:01:34.099110 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2d127e3-eae7-40d5-a478-56998172856d-logs" (OuterVolumeSpecName: "logs") pod "d2d127e3-eae7-40d5-a478-56998172856d" (UID: "d2d127e3-eae7-40d5-a478-56998172856d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:01:34 crc kubenswrapper[4821]: I0309 19:01:34.118686 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2d127e3-eae7-40d5-a478-56998172856d-kube-api-access-8rt65" (OuterVolumeSpecName: "kube-api-access-8rt65") pod "d2d127e3-eae7-40d5-a478-56998172856d" (UID: "d2d127e3-eae7-40d5-a478-56998172856d"). InnerVolumeSpecName "kube-api-access-8rt65". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:01:34 crc kubenswrapper[4821]: I0309 19:01:34.168534 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2d127e3-eae7-40d5-a478-56998172856d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2d127e3-eae7-40d5-a478-56998172856d" (UID: "d2d127e3-eae7-40d5-a478-56998172856d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:01:34 crc kubenswrapper[4821]: I0309 19:01:34.183503 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2d127e3-eae7-40d5-a478-56998172856d-logs\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:34 crc kubenswrapper[4821]: I0309 19:01:34.183542 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rt65\" (UniqueName: \"kubernetes.io/projected/d2d127e3-eae7-40d5-a478-56998172856d-kube-api-access-8rt65\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:34 crc kubenswrapper[4821]: I0309 19:01:34.183557 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2d127e3-eae7-40d5-a478-56998172856d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:34 crc kubenswrapper[4821]: I0309 19:01:34.201500 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2d127e3-eae7-40d5-a478-56998172856d-config-data" 
(OuterVolumeSpecName: "config-data") pod "d2d127e3-eae7-40d5-a478-56998172856d" (UID: "d2d127e3-eae7-40d5-a478-56998172856d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:01:34 crc kubenswrapper[4821]: I0309 19:01:34.284631 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2d127e3-eae7-40d5-a478-56998172856d-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:34 crc kubenswrapper[4821]: I0309 19:01:34.810621 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8c5476f2-009e-4270-9579-1de380ae27bd","Type":"ContainerStarted","Data":"319f1f6d5f20b970a4c234e82900fe3ec7dedd3e3f5e5460cffe032ce8d67877"} Mar 09 19:01:34 crc kubenswrapper[4821]: I0309 19:01:34.812763 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"d2d127e3-eae7-40d5-a478-56998172856d","Type":"ContainerDied","Data":"0c43eaeebf62c9536dba9c79eb235ab9885b654685d86fb395de3c97fda0e661"} Mar 09 19:01:34 crc kubenswrapper[4821]: I0309 19:01:34.812807 4821 scope.go:117] "RemoveContainer" containerID="714371f9e9265a4fc2c699f74397398fa56660489645b5c9671461fe4e56c3a0" Mar 09 19:01:34 crc kubenswrapper[4821]: I0309 19:01:34.812857 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:34 crc kubenswrapper[4821]: I0309 19:01:34.987787 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:01:34 crc kubenswrapper[4821]: I0309 19:01:34.999471 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.051425 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-k7h6t"] Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.058601 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-b4b0-account-create-update-6w6ll"] Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.068187 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcherb4b0-account-delete-hvmwg"] Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.076394 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-k7h6t"] Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.086386 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-b4b0-account-create-update-6w6ll"] Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.091109 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcherb4b0-account-delete-hvmwg"] Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.154274 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.299852 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f-config-data\") pod \"b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f\" (UID: \"b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f\") " Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.299981 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f-logs\") pod \"b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f\" (UID: \"b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f\") " Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.300017 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f-custom-prometheus-ca\") pod \"b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f\" (UID: \"b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f\") " Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.300072 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f-combined-ca-bundle\") pod \"b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f\" (UID: \"b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f\") " Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.300116 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql54d\" (UniqueName: \"kubernetes.io/projected/b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f-kube-api-access-ql54d\") pod \"b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f\" (UID: \"b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f\") " Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.300372 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f-logs" (OuterVolumeSpecName: "logs") pod "b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f" (UID: "b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.300541 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f-logs\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.307231 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f-kube-api-access-ql54d" (OuterVolumeSpecName: "kube-api-access-ql54d") pod "b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f" (UID: "b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f"). InnerVolumeSpecName "kube-api-access-ql54d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.324911 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f" (UID: "b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.331206 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f" (UID: "b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.355816 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f-config-data" (OuterVolumeSpecName: "config-data") pod "b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f" (UID: "b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.401898 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ql54d\" (UniqueName: \"kubernetes.io/projected/b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f-kube-api-access-ql54d\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.401938 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.401955 4821 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.401969 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.561598 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b8a4147-9d76-4a01-93ef-7f8e0652f1f0" path="/var/lib/kubelet/pods/4b8a4147-9d76-4a01-93ef-7f8e0652f1f0/volumes" Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.562617 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bebf7583-afcc-454d-970d-72dbc3ce7ff9" 
path="/var/lib/kubelet/pods/bebf7583-afcc-454d-970d-72dbc3ce7ff9/volumes" Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.563595 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4ae3f52-cc09-4f71-8acf-664c5c9171d2" path="/var/lib/kubelet/pods/c4ae3f52-cc09-4f71-8acf-664c5c9171d2/volumes" Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.565566 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2d127e3-eae7-40d5-a478-56998172856d" path="/var/lib/kubelet/pods/d2d127e3-eae7-40d5-a478-56998172856d/volumes" Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.846172 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8c5476f2-009e-4270-9579-1de380ae27bd","Type":"ContainerStarted","Data":"8544e8b7a7cf05510fecdee751551ed7da644a8057a235c15c3664029c6d0d5f"} Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.851040 4821 generic.go:334] "Generic (PLEG): container finished" podID="b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f" containerID="fb71eaaadc8586b0ebaaedb224e8a987ab78d938324cfb563a3c6f0a17790411" exitCode=0 Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.851098 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f","Type":"ContainerDied","Data":"fb71eaaadc8586b0ebaaedb224e8a987ab78d938324cfb563a3c6f0a17790411"} Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.851123 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f","Type":"ContainerDied","Data":"5c8191f93acb0af6fd0897ceea93aaf91debbcba971c85b36a0131b86d2f7982"} Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.851139 4821 scope.go:117] "RemoveContainer" containerID="fb71eaaadc8586b0ebaaedb224e8a987ab78d938324cfb563a3c6f0a17790411" Mar 09 19:01:35 crc 
kubenswrapper[4821]: I0309 19:01:35.851261 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.890317 4821 scope.go:117] "RemoveContainer" containerID="fb71eaaadc8586b0ebaaedb224e8a987ab78d938324cfb563a3c6f0a17790411" Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.891055 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:01:35 crc kubenswrapper[4821]: E0309 19:01:35.894743 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb71eaaadc8586b0ebaaedb224e8a987ab78d938324cfb563a3c6f0a17790411\": container with ID starting with fb71eaaadc8586b0ebaaedb224e8a987ab78d938324cfb563a3c6f0a17790411 not found: ID does not exist" containerID="fb71eaaadc8586b0ebaaedb224e8a987ab78d938324cfb563a3c6f0a17790411" Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.894790 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb71eaaadc8586b0ebaaedb224e8a987ab78d938324cfb563a3c6f0a17790411"} err="failed to get container status \"fb71eaaadc8586b0ebaaedb224e8a987ab78d938324cfb563a3c6f0a17790411\": rpc error: code = NotFound desc = could not find container \"fb71eaaadc8586b0ebaaedb224e8a987ab78d938324cfb563a3c6f0a17790411\": container with ID starting with fb71eaaadc8586b0ebaaedb224e8a987ab78d938324cfb563a3c6f0a17790411 not found: ID does not exist" Mar 09 19:01:35 crc kubenswrapper[4821]: I0309 19:01:35.897916 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:01:36 crc kubenswrapper[4821]: I0309 19:01:36.298653 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-vvt6t"] Mar 09 19:01:36 crc kubenswrapper[4821]: E0309 
19:01:36.299123 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4ae3f52-cc09-4f71-8acf-664c5c9171d2" containerName="mariadb-account-delete" Mar 09 19:01:36 crc kubenswrapper[4821]: I0309 19:01:36.299144 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4ae3f52-cc09-4f71-8acf-664c5c9171d2" containerName="mariadb-account-delete" Mar 09 19:01:36 crc kubenswrapper[4821]: E0309 19:01:36.299161 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f" containerName="watcher-decision-engine" Mar 09 19:01:36 crc kubenswrapper[4821]: I0309 19:01:36.299169 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f" containerName="watcher-decision-engine" Mar 09 19:01:36 crc kubenswrapper[4821]: E0309 19:01:36.299186 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d127e3-eae7-40d5-a478-56998172856d" containerName="watcher-applier" Mar 09 19:01:36 crc kubenswrapper[4821]: I0309 19:01:36.299194 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d127e3-eae7-40d5-a478-56998172856d" containerName="watcher-applier" Mar 09 19:01:36 crc kubenswrapper[4821]: I0309 19:01:36.299388 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2d127e3-eae7-40d5-a478-56998172856d" containerName="watcher-applier" Mar 09 19:01:36 crc kubenswrapper[4821]: I0309 19:01:36.299412 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f" containerName="watcher-decision-engine" Mar 09 19:01:36 crc kubenswrapper[4821]: I0309 19:01:36.299424 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4ae3f52-cc09-4f71-8acf-664c5c9171d2" containerName="mariadb-account-delete" Mar 09 19:01:36 crc kubenswrapper[4821]: I0309 19:01:36.300143 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-vvt6t" Mar 09 19:01:36 crc kubenswrapper[4821]: I0309 19:01:36.307508 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-fac6-account-create-update-gmsz7"] Mar 09 19:01:36 crc kubenswrapper[4821]: I0309 19:01:36.308773 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-fac6-account-create-update-gmsz7" Mar 09 19:01:36 crc kubenswrapper[4821]: I0309 19:01:36.312438 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Mar 09 19:01:36 crc kubenswrapper[4821]: I0309 19:01:36.315570 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-fac6-account-create-update-gmsz7"] Mar 09 19:01:36 crc kubenswrapper[4821]: I0309 19:01:36.357456 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-vvt6t"] Mar 09 19:01:36 crc kubenswrapper[4821]: I0309 19:01:36.421050 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrlpx\" (UniqueName: \"kubernetes.io/projected/c5406c9a-7ea7-491a-b625-af6eaffeeaac-kube-api-access-xrlpx\") pod \"watcher-fac6-account-create-update-gmsz7\" (UID: \"c5406c9a-7ea7-491a-b625-af6eaffeeaac\") " pod="watcher-kuttl-default/watcher-fac6-account-create-update-gmsz7" Mar 09 19:01:36 crc kubenswrapper[4821]: I0309 19:01:36.421395 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5406c9a-7ea7-491a-b625-af6eaffeeaac-operator-scripts\") pod \"watcher-fac6-account-create-update-gmsz7\" (UID: \"c5406c9a-7ea7-491a-b625-af6eaffeeaac\") " pod="watcher-kuttl-default/watcher-fac6-account-create-update-gmsz7" Mar 09 19:01:36 crc kubenswrapper[4821]: I0309 19:01:36.421446 4821 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88ea4a34-8ad2-4c0e-a139-bca978c3da6a-operator-scripts\") pod \"watcher-db-create-vvt6t\" (UID: \"88ea4a34-8ad2-4c0e-a139-bca978c3da6a\") " pod="watcher-kuttl-default/watcher-db-create-vvt6t" Mar 09 19:01:36 crc kubenswrapper[4821]: I0309 19:01:36.421493 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dz72\" (UniqueName: \"kubernetes.io/projected/88ea4a34-8ad2-4c0e-a139-bca978c3da6a-kube-api-access-4dz72\") pod \"watcher-db-create-vvt6t\" (UID: \"88ea4a34-8ad2-4c0e-a139-bca978c3da6a\") " pod="watcher-kuttl-default/watcher-db-create-vvt6t" Mar 09 19:01:36 crc kubenswrapper[4821]: I0309 19:01:36.523072 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrlpx\" (UniqueName: \"kubernetes.io/projected/c5406c9a-7ea7-491a-b625-af6eaffeeaac-kube-api-access-xrlpx\") pod \"watcher-fac6-account-create-update-gmsz7\" (UID: \"c5406c9a-7ea7-491a-b625-af6eaffeeaac\") " pod="watcher-kuttl-default/watcher-fac6-account-create-update-gmsz7" Mar 09 19:01:36 crc kubenswrapper[4821]: I0309 19:01:36.523133 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5406c9a-7ea7-491a-b625-af6eaffeeaac-operator-scripts\") pod \"watcher-fac6-account-create-update-gmsz7\" (UID: \"c5406c9a-7ea7-491a-b625-af6eaffeeaac\") " pod="watcher-kuttl-default/watcher-fac6-account-create-update-gmsz7" Mar 09 19:01:36 crc kubenswrapper[4821]: I0309 19:01:36.523171 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88ea4a34-8ad2-4c0e-a139-bca978c3da6a-operator-scripts\") pod \"watcher-db-create-vvt6t\" (UID: \"88ea4a34-8ad2-4c0e-a139-bca978c3da6a\") " 
pod="watcher-kuttl-default/watcher-db-create-vvt6t" Mar 09 19:01:36 crc kubenswrapper[4821]: I0309 19:01:36.523215 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dz72\" (UniqueName: \"kubernetes.io/projected/88ea4a34-8ad2-4c0e-a139-bca978c3da6a-kube-api-access-4dz72\") pod \"watcher-db-create-vvt6t\" (UID: \"88ea4a34-8ad2-4c0e-a139-bca978c3da6a\") " pod="watcher-kuttl-default/watcher-db-create-vvt6t" Mar 09 19:01:36 crc kubenswrapper[4821]: I0309 19:01:36.523945 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88ea4a34-8ad2-4c0e-a139-bca978c3da6a-operator-scripts\") pod \"watcher-db-create-vvt6t\" (UID: \"88ea4a34-8ad2-4c0e-a139-bca978c3da6a\") " pod="watcher-kuttl-default/watcher-db-create-vvt6t" Mar 09 19:01:36 crc kubenswrapper[4821]: I0309 19:01:36.524381 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5406c9a-7ea7-491a-b625-af6eaffeeaac-operator-scripts\") pod \"watcher-fac6-account-create-update-gmsz7\" (UID: \"c5406c9a-7ea7-491a-b625-af6eaffeeaac\") " pod="watcher-kuttl-default/watcher-fac6-account-create-update-gmsz7" Mar 09 19:01:36 crc kubenswrapper[4821]: I0309 19:01:36.542442 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrlpx\" (UniqueName: \"kubernetes.io/projected/c5406c9a-7ea7-491a-b625-af6eaffeeaac-kube-api-access-xrlpx\") pod \"watcher-fac6-account-create-update-gmsz7\" (UID: \"c5406c9a-7ea7-491a-b625-af6eaffeeaac\") " pod="watcher-kuttl-default/watcher-fac6-account-create-update-gmsz7" Mar 09 19:01:36 crc kubenswrapper[4821]: I0309 19:01:36.546975 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dz72\" (UniqueName: \"kubernetes.io/projected/88ea4a34-8ad2-4c0e-a139-bca978c3da6a-kube-api-access-4dz72\") pod \"watcher-db-create-vvt6t\" (UID: 
\"88ea4a34-8ad2-4c0e-a139-bca978c3da6a\") " pod="watcher-kuttl-default/watcher-db-create-vvt6t" Mar 09 19:01:36 crc kubenswrapper[4821]: I0309 19:01:36.656818 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-vvt6t" Mar 09 19:01:36 crc kubenswrapper[4821]: I0309 19:01:36.665289 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-fac6-account-create-update-gmsz7" Mar 09 19:01:37 crc kubenswrapper[4821]: I0309 19:01:37.213870 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-fac6-account-create-update-gmsz7"] Mar 09 19:01:37 crc kubenswrapper[4821]: I0309 19:01:37.298890 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-vvt6t"] Mar 09 19:01:37 crc kubenswrapper[4821]: I0309 19:01:37.562211 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f" path="/var/lib/kubelet/pods/b1da02b3-bdf6-4f14-9ab5-d74bcb5c299f/volumes" Mar 09 19:01:37 crc kubenswrapper[4821]: I0309 19:01:37.881949 4821 generic.go:334] "Generic (PLEG): container finished" podID="88ea4a34-8ad2-4c0e-a139-bca978c3da6a" containerID="b82153cb286c7704a8d11c3ac47938c7b821756d4e36949d20e9b1bc9862c504" exitCode=0 Mar 09 19:01:37 crc kubenswrapper[4821]: I0309 19:01:37.882020 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-vvt6t" event={"ID":"88ea4a34-8ad2-4c0e-a139-bca978c3da6a","Type":"ContainerDied","Data":"b82153cb286c7704a8d11c3ac47938c7b821756d4e36949d20e9b1bc9862c504"} Mar 09 19:01:37 crc kubenswrapper[4821]: I0309 19:01:37.882051 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-vvt6t" 
event={"ID":"88ea4a34-8ad2-4c0e-a139-bca978c3da6a","Type":"ContainerStarted","Data":"f8fa6e012dd4d203b91cae29467988a862cd6894c8463a5cdd79a145e91ba944"} Mar 09 19:01:37 crc kubenswrapper[4821]: I0309 19:01:37.884341 4821 generic.go:334] "Generic (PLEG): container finished" podID="c5406c9a-7ea7-491a-b625-af6eaffeeaac" containerID="3e70b0c86876adc028a5880426f20f07a247a044079125b2036b3cc0dc880e10" exitCode=0 Mar 09 19:01:37 crc kubenswrapper[4821]: I0309 19:01:37.884389 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-fac6-account-create-update-gmsz7" event={"ID":"c5406c9a-7ea7-491a-b625-af6eaffeeaac","Type":"ContainerDied","Data":"3e70b0c86876adc028a5880426f20f07a247a044079125b2036b3cc0dc880e10"} Mar 09 19:01:37 crc kubenswrapper[4821]: I0309 19:01:37.884412 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-fac6-account-create-update-gmsz7" event={"ID":"c5406c9a-7ea7-491a-b625-af6eaffeeaac","Type":"ContainerStarted","Data":"5548acdfc4ea4821680b4425c638f8b5753e4bfa87bda9062171f8247d918848"} Mar 09 19:01:37 crc kubenswrapper[4821]: I0309 19:01:37.887926 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8c5476f2-009e-4270-9579-1de380ae27bd","Type":"ContainerStarted","Data":"d4e2d99cef295e0461c1a72b1513c89e960f4df8c282346c78880b27751cfda0"} Mar 09 19:01:37 crc kubenswrapper[4821]: I0309 19:01:37.888043 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:01:37 crc kubenswrapper[4821]: I0309 19:01:37.888038 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="8c5476f2-009e-4270-9579-1de380ae27bd" containerName="ceilometer-central-agent" containerID="cri-o://a4673eb3e335ffe2f276cb913ead91bb9eda689466e633c5fb0e10a193d0f3d5" gracePeriod=30 Mar 09 19:01:37 crc kubenswrapper[4821]: I0309 19:01:37.888083 
4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="8c5476f2-009e-4270-9579-1de380ae27bd" containerName="sg-core" containerID="cri-o://8544e8b7a7cf05510fecdee751551ed7da644a8057a235c15c3664029c6d0d5f" gracePeriod=30 Mar 09 19:01:37 crc kubenswrapper[4821]: I0309 19:01:37.888115 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="8c5476f2-009e-4270-9579-1de380ae27bd" containerName="ceilometer-notification-agent" containerID="cri-o://319f1f6d5f20b970a4c234e82900fe3ec7dedd3e3f5e5460cffe032ce8d67877" gracePeriod=30 Mar 09 19:01:37 crc kubenswrapper[4821]: I0309 19:01:37.888067 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="8c5476f2-009e-4270-9579-1de380ae27bd" containerName="proxy-httpd" containerID="cri-o://d4e2d99cef295e0461c1a72b1513c89e960f4df8c282346c78880b27751cfda0" gracePeriod=30 Mar 09 19:01:37 crc kubenswrapper[4821]: I0309 19:01:37.940155 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.73012005 podStartE2EDuration="5.940138683s" podCreationTimestamp="2026-03-09 19:01:32 +0000 UTC" firstStartedPulling="2026-03-09 19:01:32.939436553 +0000 UTC m=+2230.100812409" lastFinishedPulling="2026-03-09 19:01:37.149455186 +0000 UTC m=+2234.310831042" observedRunningTime="2026-03-09 19:01:37.937914652 +0000 UTC m=+2235.099290508" watchObservedRunningTime="2026-03-09 19:01:37.940138683 +0000 UTC m=+2235.101514539" Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.703916 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.865282 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c5476f2-009e-4270-9579-1de380ae27bd-combined-ca-bundle\") pod \"8c5476f2-009e-4270-9579-1de380ae27bd\" (UID: \"8c5476f2-009e-4270-9579-1de380ae27bd\") " Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.865362 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c5476f2-009e-4270-9579-1de380ae27bd-run-httpd\") pod \"8c5476f2-009e-4270-9579-1de380ae27bd\" (UID: \"8c5476f2-009e-4270-9579-1de380ae27bd\") " Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.865391 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c5476f2-009e-4270-9579-1de380ae27bd-log-httpd\") pod \"8c5476f2-009e-4270-9579-1de380ae27bd\" (UID: \"8c5476f2-009e-4270-9579-1de380ae27bd\") " Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.865413 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c5476f2-009e-4270-9579-1de380ae27bd-scripts\") pod \"8c5476f2-009e-4270-9579-1de380ae27bd\" (UID: \"8c5476f2-009e-4270-9579-1de380ae27bd\") " Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.865469 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c5476f2-009e-4270-9579-1de380ae27bd-ceilometer-tls-certs\") pod \"8c5476f2-009e-4270-9579-1de380ae27bd\" (UID: \"8c5476f2-009e-4270-9579-1de380ae27bd\") " Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.865518 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wrqj\" (UniqueName: 
\"kubernetes.io/projected/8c5476f2-009e-4270-9579-1de380ae27bd-kube-api-access-5wrqj\") pod \"8c5476f2-009e-4270-9579-1de380ae27bd\" (UID: \"8c5476f2-009e-4270-9579-1de380ae27bd\") " Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.865651 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c5476f2-009e-4270-9579-1de380ae27bd-config-data\") pod \"8c5476f2-009e-4270-9579-1de380ae27bd\" (UID: \"8c5476f2-009e-4270-9579-1de380ae27bd\") " Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.865702 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c5476f2-009e-4270-9579-1de380ae27bd-sg-core-conf-yaml\") pod \"8c5476f2-009e-4270-9579-1de380ae27bd\" (UID: \"8c5476f2-009e-4270-9579-1de380ae27bd\") " Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.866348 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c5476f2-009e-4270-9579-1de380ae27bd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8c5476f2-009e-4270-9579-1de380ae27bd" (UID: "8c5476f2-009e-4270-9579-1de380ae27bd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.866764 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c5476f2-009e-4270-9579-1de380ae27bd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8c5476f2-009e-4270-9579-1de380ae27bd" (UID: "8c5476f2-009e-4270-9579-1de380ae27bd"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.875830 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c5476f2-009e-4270-9579-1de380ae27bd-scripts" (OuterVolumeSpecName: "scripts") pod "8c5476f2-009e-4270-9579-1de380ae27bd" (UID: "8c5476f2-009e-4270-9579-1de380ae27bd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.877603 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c5476f2-009e-4270-9579-1de380ae27bd-kube-api-access-5wrqj" (OuterVolumeSpecName: "kube-api-access-5wrqj") pod "8c5476f2-009e-4270-9579-1de380ae27bd" (UID: "8c5476f2-009e-4270-9579-1de380ae27bd"). InnerVolumeSpecName "kube-api-access-5wrqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.905332 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c5476f2-009e-4270-9579-1de380ae27bd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8c5476f2-009e-4270-9579-1de380ae27bd" (UID: "8c5476f2-009e-4270-9579-1de380ae27bd"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.916159 4821 generic.go:334] "Generic (PLEG): container finished" podID="8c5476f2-009e-4270-9579-1de380ae27bd" containerID="d4e2d99cef295e0461c1a72b1513c89e960f4df8c282346c78880b27751cfda0" exitCode=0 Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.916239 4821 generic.go:334] "Generic (PLEG): container finished" podID="8c5476f2-009e-4270-9579-1de380ae27bd" containerID="8544e8b7a7cf05510fecdee751551ed7da644a8057a235c15c3664029c6d0d5f" exitCode=2 Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.916251 4821 generic.go:334] "Generic (PLEG): container finished" podID="8c5476f2-009e-4270-9579-1de380ae27bd" containerID="319f1f6d5f20b970a4c234e82900fe3ec7dedd3e3f5e5460cffe032ce8d67877" exitCode=0 Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.916262 4821 generic.go:334] "Generic (PLEG): container finished" podID="8c5476f2-009e-4270-9579-1de380ae27bd" containerID="a4673eb3e335ffe2f276cb913ead91bb9eda689466e633c5fb0e10a193d0f3d5" exitCode=0 Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.916514 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.916516 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8c5476f2-009e-4270-9579-1de380ae27bd","Type":"ContainerDied","Data":"d4e2d99cef295e0461c1a72b1513c89e960f4df8c282346c78880b27751cfda0"} Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.916602 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8c5476f2-009e-4270-9579-1de380ae27bd","Type":"ContainerDied","Data":"8544e8b7a7cf05510fecdee751551ed7da644a8057a235c15c3664029c6d0d5f"} Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.916618 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8c5476f2-009e-4270-9579-1de380ae27bd","Type":"ContainerDied","Data":"319f1f6d5f20b970a4c234e82900fe3ec7dedd3e3f5e5460cffe032ce8d67877"} Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.916630 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8c5476f2-009e-4270-9579-1de380ae27bd","Type":"ContainerDied","Data":"a4673eb3e335ffe2f276cb913ead91bb9eda689466e633c5fb0e10a193d0f3d5"} Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.916641 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8c5476f2-009e-4270-9579-1de380ae27bd","Type":"ContainerDied","Data":"fcacd45e2f5d8c9b41ea64b7b777acad0f381444dee18bb19aab999cd3bdfc3c"} Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.916655 4821 scope.go:117] "RemoveContainer" containerID="d4e2d99cef295e0461c1a72b1513c89e960f4df8c282346c78880b27751cfda0" Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.925693 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/8c5476f2-009e-4270-9579-1de380ae27bd-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "8c5476f2-009e-4270-9579-1de380ae27bd" (UID: "8c5476f2-009e-4270-9579-1de380ae27bd"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.957704 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c5476f2-009e-4270-9579-1de380ae27bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c5476f2-009e-4270-9579-1de380ae27bd" (UID: "8c5476f2-009e-4270-9579-1de380ae27bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.958616 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c5476f2-009e-4270-9579-1de380ae27bd-config-data" (OuterVolumeSpecName: "config-data") pod "8c5476f2-009e-4270-9579-1de380ae27bd" (UID: "8c5476f2-009e-4270-9579-1de380ae27bd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.967802 4821 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c5476f2-009e-4270-9579-1de380ae27bd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.967838 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c5476f2-009e-4270-9579-1de380ae27bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.967851 4821 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c5476f2-009e-4270-9579-1de380ae27bd-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.967862 4821 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c5476f2-009e-4270-9579-1de380ae27bd-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.967872 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c5476f2-009e-4270-9579-1de380ae27bd-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.967883 4821 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c5476f2-009e-4270-9579-1de380ae27bd-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.967893 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wrqj\" (UniqueName: \"kubernetes.io/projected/8c5476f2-009e-4270-9579-1de380ae27bd-kube-api-access-5wrqj\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:38 crc kubenswrapper[4821]: I0309 19:01:38.967905 4821 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c5476f2-009e-4270-9579-1de380ae27bd-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.011519 4821 scope.go:117] "RemoveContainer" containerID="8544e8b7a7cf05510fecdee751551ed7da644a8057a235c15c3664029c6d0d5f" Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.040080 4821 scope.go:117] "RemoveContainer" containerID="319f1f6d5f20b970a4c234e82900fe3ec7dedd3e3f5e5460cffe032ce8d67877" Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.068124 4821 scope.go:117] "RemoveContainer" containerID="a4673eb3e335ffe2f276cb913ead91bb9eda689466e633c5fb0e10a193d0f3d5" Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.091735 4821 scope.go:117] "RemoveContainer" containerID="d4e2d99cef295e0461c1a72b1513c89e960f4df8c282346c78880b27751cfda0" Mar 09 19:01:39 crc kubenswrapper[4821]: E0309 19:01:39.092209 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4e2d99cef295e0461c1a72b1513c89e960f4df8c282346c78880b27751cfda0\": container with ID starting with d4e2d99cef295e0461c1a72b1513c89e960f4df8c282346c78880b27751cfda0 not found: ID does not exist" containerID="d4e2d99cef295e0461c1a72b1513c89e960f4df8c282346c78880b27751cfda0" Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.092244 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4e2d99cef295e0461c1a72b1513c89e960f4df8c282346c78880b27751cfda0"} err="failed to get container status \"d4e2d99cef295e0461c1a72b1513c89e960f4df8c282346c78880b27751cfda0\": rpc error: code = NotFound desc = could not find container \"d4e2d99cef295e0461c1a72b1513c89e960f4df8c282346c78880b27751cfda0\": container with ID starting with d4e2d99cef295e0461c1a72b1513c89e960f4df8c282346c78880b27751cfda0 not found: ID does not exist" Mar 09 19:01:39 crc 
kubenswrapper[4821]: I0309 19:01:39.092271 4821 scope.go:117] "RemoveContainer" containerID="8544e8b7a7cf05510fecdee751551ed7da644a8057a235c15c3664029c6d0d5f" Mar 09 19:01:39 crc kubenswrapper[4821]: E0309 19:01:39.092746 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8544e8b7a7cf05510fecdee751551ed7da644a8057a235c15c3664029c6d0d5f\": container with ID starting with 8544e8b7a7cf05510fecdee751551ed7da644a8057a235c15c3664029c6d0d5f not found: ID does not exist" containerID="8544e8b7a7cf05510fecdee751551ed7da644a8057a235c15c3664029c6d0d5f" Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.092788 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8544e8b7a7cf05510fecdee751551ed7da644a8057a235c15c3664029c6d0d5f"} err="failed to get container status \"8544e8b7a7cf05510fecdee751551ed7da644a8057a235c15c3664029c6d0d5f\": rpc error: code = NotFound desc = could not find container \"8544e8b7a7cf05510fecdee751551ed7da644a8057a235c15c3664029c6d0d5f\": container with ID starting with 8544e8b7a7cf05510fecdee751551ed7da644a8057a235c15c3664029c6d0d5f not found: ID does not exist" Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.092816 4821 scope.go:117] "RemoveContainer" containerID="319f1f6d5f20b970a4c234e82900fe3ec7dedd3e3f5e5460cffe032ce8d67877" Mar 09 19:01:39 crc kubenswrapper[4821]: E0309 19:01:39.093119 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"319f1f6d5f20b970a4c234e82900fe3ec7dedd3e3f5e5460cffe032ce8d67877\": container with ID starting with 319f1f6d5f20b970a4c234e82900fe3ec7dedd3e3f5e5460cffe032ce8d67877 not found: ID does not exist" containerID="319f1f6d5f20b970a4c234e82900fe3ec7dedd3e3f5e5460cffe032ce8d67877" Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.093280 4821 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"319f1f6d5f20b970a4c234e82900fe3ec7dedd3e3f5e5460cffe032ce8d67877"} err="failed to get container status \"319f1f6d5f20b970a4c234e82900fe3ec7dedd3e3f5e5460cffe032ce8d67877\": rpc error: code = NotFound desc = could not find container \"319f1f6d5f20b970a4c234e82900fe3ec7dedd3e3f5e5460cffe032ce8d67877\": container with ID starting with 319f1f6d5f20b970a4c234e82900fe3ec7dedd3e3f5e5460cffe032ce8d67877 not found: ID does not exist" Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.093301 4821 scope.go:117] "RemoveContainer" containerID="a4673eb3e335ffe2f276cb913ead91bb9eda689466e633c5fb0e10a193d0f3d5" Mar 09 19:01:39 crc kubenswrapper[4821]: E0309 19:01:39.093709 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4673eb3e335ffe2f276cb913ead91bb9eda689466e633c5fb0e10a193d0f3d5\": container with ID starting with a4673eb3e335ffe2f276cb913ead91bb9eda689466e633c5fb0e10a193d0f3d5 not found: ID does not exist" containerID="a4673eb3e335ffe2f276cb913ead91bb9eda689466e633c5fb0e10a193d0f3d5" Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.093735 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4673eb3e335ffe2f276cb913ead91bb9eda689466e633c5fb0e10a193d0f3d5"} err="failed to get container status \"a4673eb3e335ffe2f276cb913ead91bb9eda689466e633c5fb0e10a193d0f3d5\": rpc error: code = NotFound desc = could not find container \"a4673eb3e335ffe2f276cb913ead91bb9eda689466e633c5fb0e10a193d0f3d5\": container with ID starting with a4673eb3e335ffe2f276cb913ead91bb9eda689466e633c5fb0e10a193d0f3d5 not found: ID does not exist" Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.093751 4821 scope.go:117] "RemoveContainer" containerID="d4e2d99cef295e0461c1a72b1513c89e960f4df8c282346c78880b27751cfda0" Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.093947 4821 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"d4e2d99cef295e0461c1a72b1513c89e960f4df8c282346c78880b27751cfda0"} err="failed to get container status \"d4e2d99cef295e0461c1a72b1513c89e960f4df8c282346c78880b27751cfda0\": rpc error: code = NotFound desc = could not find container \"d4e2d99cef295e0461c1a72b1513c89e960f4df8c282346c78880b27751cfda0\": container with ID starting with d4e2d99cef295e0461c1a72b1513c89e960f4df8c282346c78880b27751cfda0 not found: ID does not exist" Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.093975 4821 scope.go:117] "RemoveContainer" containerID="8544e8b7a7cf05510fecdee751551ed7da644a8057a235c15c3664029c6d0d5f" Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.094143 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8544e8b7a7cf05510fecdee751551ed7da644a8057a235c15c3664029c6d0d5f"} err="failed to get container status \"8544e8b7a7cf05510fecdee751551ed7da644a8057a235c15c3664029c6d0d5f\": rpc error: code = NotFound desc = could not find container \"8544e8b7a7cf05510fecdee751551ed7da644a8057a235c15c3664029c6d0d5f\": container with ID starting with 8544e8b7a7cf05510fecdee751551ed7da644a8057a235c15c3664029c6d0d5f not found: ID does not exist" Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.094163 4821 scope.go:117] "RemoveContainer" containerID="319f1f6d5f20b970a4c234e82900fe3ec7dedd3e3f5e5460cffe032ce8d67877" Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.094421 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"319f1f6d5f20b970a4c234e82900fe3ec7dedd3e3f5e5460cffe032ce8d67877"} err="failed to get container status \"319f1f6d5f20b970a4c234e82900fe3ec7dedd3e3f5e5460cffe032ce8d67877\": rpc error: code = NotFound desc = could not find container \"319f1f6d5f20b970a4c234e82900fe3ec7dedd3e3f5e5460cffe032ce8d67877\": container with ID starting with 319f1f6d5f20b970a4c234e82900fe3ec7dedd3e3f5e5460cffe032ce8d67877 not 
found: ID does not exist" Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.094454 4821 scope.go:117] "RemoveContainer" containerID="a4673eb3e335ffe2f276cb913ead91bb9eda689466e633c5fb0e10a193d0f3d5" Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.094729 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4673eb3e335ffe2f276cb913ead91bb9eda689466e633c5fb0e10a193d0f3d5"} err="failed to get container status \"a4673eb3e335ffe2f276cb913ead91bb9eda689466e633c5fb0e10a193d0f3d5\": rpc error: code = NotFound desc = could not find container \"a4673eb3e335ffe2f276cb913ead91bb9eda689466e633c5fb0e10a193d0f3d5\": container with ID starting with a4673eb3e335ffe2f276cb913ead91bb9eda689466e633c5fb0e10a193d0f3d5 not found: ID does not exist" Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.094748 4821 scope.go:117] "RemoveContainer" containerID="d4e2d99cef295e0461c1a72b1513c89e960f4df8c282346c78880b27751cfda0" Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.095079 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4e2d99cef295e0461c1a72b1513c89e960f4df8c282346c78880b27751cfda0"} err="failed to get container status \"d4e2d99cef295e0461c1a72b1513c89e960f4df8c282346c78880b27751cfda0\": rpc error: code = NotFound desc = could not find container \"d4e2d99cef295e0461c1a72b1513c89e960f4df8c282346c78880b27751cfda0\": container with ID starting with d4e2d99cef295e0461c1a72b1513c89e960f4df8c282346c78880b27751cfda0 not found: ID does not exist" Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.095100 4821 scope.go:117] "RemoveContainer" containerID="8544e8b7a7cf05510fecdee751551ed7da644a8057a235c15c3664029c6d0d5f" Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.095399 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8544e8b7a7cf05510fecdee751551ed7da644a8057a235c15c3664029c6d0d5f"} err="failed to get 
container status \"8544e8b7a7cf05510fecdee751551ed7da644a8057a235c15c3664029c6d0d5f\": rpc error: code = NotFound desc = could not find container \"8544e8b7a7cf05510fecdee751551ed7da644a8057a235c15c3664029c6d0d5f\": container with ID starting with 8544e8b7a7cf05510fecdee751551ed7da644a8057a235c15c3664029c6d0d5f not found: ID does not exist" Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.095426 4821 scope.go:117] "RemoveContainer" containerID="319f1f6d5f20b970a4c234e82900fe3ec7dedd3e3f5e5460cffe032ce8d67877" Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.095601 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"319f1f6d5f20b970a4c234e82900fe3ec7dedd3e3f5e5460cffe032ce8d67877"} err="failed to get container status \"319f1f6d5f20b970a4c234e82900fe3ec7dedd3e3f5e5460cffe032ce8d67877\": rpc error: code = NotFound desc = could not find container \"319f1f6d5f20b970a4c234e82900fe3ec7dedd3e3f5e5460cffe032ce8d67877\": container with ID starting with 319f1f6d5f20b970a4c234e82900fe3ec7dedd3e3f5e5460cffe032ce8d67877 not found: ID does not exist" Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.095624 4821 scope.go:117] "RemoveContainer" containerID="a4673eb3e335ffe2f276cb913ead91bb9eda689466e633c5fb0e10a193d0f3d5" Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.095932 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4673eb3e335ffe2f276cb913ead91bb9eda689466e633c5fb0e10a193d0f3d5"} err="failed to get container status \"a4673eb3e335ffe2f276cb913ead91bb9eda689466e633c5fb0e10a193d0f3d5\": rpc error: code = NotFound desc = could not find container \"a4673eb3e335ffe2f276cb913ead91bb9eda689466e633c5fb0e10a193d0f3d5\": container with ID starting with a4673eb3e335ffe2f276cb913ead91bb9eda689466e633c5fb0e10a193d0f3d5 not found: ID does not exist" Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.095956 4821 scope.go:117] "RemoveContainer" 
containerID="d4e2d99cef295e0461c1a72b1513c89e960f4df8c282346c78880b27751cfda0"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.096309 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4e2d99cef295e0461c1a72b1513c89e960f4df8c282346c78880b27751cfda0"} err="failed to get container status \"d4e2d99cef295e0461c1a72b1513c89e960f4df8c282346c78880b27751cfda0\": rpc error: code = NotFound desc = could not find container \"d4e2d99cef295e0461c1a72b1513c89e960f4df8c282346c78880b27751cfda0\": container with ID starting with d4e2d99cef295e0461c1a72b1513c89e960f4df8c282346c78880b27751cfda0 not found: ID does not exist"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.096352 4821 scope.go:117] "RemoveContainer" containerID="8544e8b7a7cf05510fecdee751551ed7da644a8057a235c15c3664029c6d0d5f"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.096612 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8544e8b7a7cf05510fecdee751551ed7da644a8057a235c15c3664029c6d0d5f"} err="failed to get container status \"8544e8b7a7cf05510fecdee751551ed7da644a8057a235c15c3664029c6d0d5f\": rpc error: code = NotFound desc = could not find container \"8544e8b7a7cf05510fecdee751551ed7da644a8057a235c15c3664029c6d0d5f\": container with ID starting with 8544e8b7a7cf05510fecdee751551ed7da644a8057a235c15c3664029c6d0d5f not found: ID does not exist"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.096629 4821 scope.go:117] "RemoveContainer" containerID="319f1f6d5f20b970a4c234e82900fe3ec7dedd3e3f5e5460cffe032ce8d67877"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.096851 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"319f1f6d5f20b970a4c234e82900fe3ec7dedd3e3f5e5460cffe032ce8d67877"} err="failed to get container status \"319f1f6d5f20b970a4c234e82900fe3ec7dedd3e3f5e5460cffe032ce8d67877\": rpc error: code = NotFound desc = could not find container \"319f1f6d5f20b970a4c234e82900fe3ec7dedd3e3f5e5460cffe032ce8d67877\": container with ID starting with 319f1f6d5f20b970a4c234e82900fe3ec7dedd3e3f5e5460cffe032ce8d67877 not found: ID does not exist"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.096871 4821 scope.go:117] "RemoveContainer" containerID="a4673eb3e335ffe2f276cb913ead91bb9eda689466e633c5fb0e10a193d0f3d5"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.097218 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4673eb3e335ffe2f276cb913ead91bb9eda689466e633c5fb0e10a193d0f3d5"} err="failed to get container status \"a4673eb3e335ffe2f276cb913ead91bb9eda689466e633c5fb0e10a193d0f3d5\": rpc error: code = NotFound desc = could not find container \"a4673eb3e335ffe2f276cb913ead91bb9eda689466e633c5fb0e10a193d0f3d5\": container with ID starting with a4673eb3e335ffe2f276cb913ead91bb9eda689466e633c5fb0e10a193d0f3d5 not found: ID does not exist"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.265479 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.279021 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.283095 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-vvt6t"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.307465 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:01:39 crc kubenswrapper[4821]: E0309 19:01:39.307933 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c5476f2-009e-4270-9579-1de380ae27bd" containerName="ceilometer-notification-agent"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.308015 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c5476f2-009e-4270-9579-1de380ae27bd" containerName="ceilometer-notification-agent"
Mar 09 19:01:39 crc kubenswrapper[4821]: E0309 19:01:39.308037 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c5476f2-009e-4270-9579-1de380ae27bd" containerName="ceilometer-central-agent"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.308074 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c5476f2-009e-4270-9579-1de380ae27bd" containerName="ceilometer-central-agent"
Mar 09 19:01:39 crc kubenswrapper[4821]: E0309 19:01:39.308089 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c5476f2-009e-4270-9579-1de380ae27bd" containerName="proxy-httpd"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.308098 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c5476f2-009e-4270-9579-1de380ae27bd" containerName="proxy-httpd"
Mar 09 19:01:39 crc kubenswrapper[4821]: E0309 19:01:39.308110 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88ea4a34-8ad2-4c0e-a139-bca978c3da6a" containerName="mariadb-database-create"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.308119 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="88ea4a34-8ad2-4c0e-a139-bca978c3da6a" containerName="mariadb-database-create"
Mar 09 19:01:39 crc kubenswrapper[4821]: E0309 19:01:39.308135 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c5476f2-009e-4270-9579-1de380ae27bd" containerName="sg-core"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.308142 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c5476f2-009e-4270-9579-1de380ae27bd" containerName="sg-core"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.308453 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c5476f2-009e-4270-9579-1de380ae27bd" containerName="ceilometer-notification-agent"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.308473 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c5476f2-009e-4270-9579-1de380ae27bd" containerName="sg-core"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.308486 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c5476f2-009e-4270-9579-1de380ae27bd" containerName="ceilometer-central-agent"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.308495 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c5476f2-009e-4270-9579-1de380ae27bd" containerName="proxy-httpd"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.308514 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="88ea4a34-8ad2-4c0e-a139-bca978c3da6a" containerName="mariadb-database-create"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.310297 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.316159 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.319674 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.319845 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.320150 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.328788 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-fac6-account-create-update-gmsz7"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.374631 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dz72\" (UniqueName: \"kubernetes.io/projected/88ea4a34-8ad2-4c0e-a139-bca978c3da6a-kube-api-access-4dz72\") pod \"88ea4a34-8ad2-4c0e-a139-bca978c3da6a\" (UID: \"88ea4a34-8ad2-4c0e-a139-bca978c3da6a\") "
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.374809 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88ea4a34-8ad2-4c0e-a139-bca978c3da6a-operator-scripts\") pod \"88ea4a34-8ad2-4c0e-a139-bca978c3da6a\" (UID: \"88ea4a34-8ad2-4c0e-a139-bca978c3da6a\") "
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.375549 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88ea4a34-8ad2-4c0e-a139-bca978c3da6a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "88ea4a34-8ad2-4c0e-a139-bca978c3da6a" (UID: "88ea4a34-8ad2-4c0e-a139-bca978c3da6a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.378104 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88ea4a34-8ad2-4c0e-a139-bca978c3da6a-kube-api-access-4dz72" (OuterVolumeSpecName: "kube-api-access-4dz72") pod "88ea4a34-8ad2-4c0e-a139-bca978c3da6a" (UID: "88ea4a34-8ad2-4c0e-a139-bca978c3da6a"). InnerVolumeSpecName "kube-api-access-4dz72". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.476751 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5406c9a-7ea7-491a-b625-af6eaffeeaac-operator-scripts\") pod \"c5406c9a-7ea7-491a-b625-af6eaffeeaac\" (UID: \"c5406c9a-7ea7-491a-b625-af6eaffeeaac\") "
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.476910 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrlpx\" (UniqueName: \"kubernetes.io/projected/c5406c9a-7ea7-491a-b625-af6eaffeeaac-kube-api-access-xrlpx\") pod \"c5406c9a-7ea7-491a-b625-af6eaffeeaac\" (UID: \"c5406c9a-7ea7-491a-b625-af6eaffeeaac\") "
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.477114 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-scripts\") pod \"ceilometer-0\" (UID: \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.477157 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-log-httpd\") pod \"ceilometer-0\" (UID: \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.477172 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-config-data\") pod \"ceilometer-0\" (UID: \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.477186 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-run-httpd\") pod \"ceilometer-0\" (UID: \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.477196 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5406c9a-7ea7-491a-b625-af6eaffeeaac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c5406c9a-7ea7-491a-b625-af6eaffeeaac" (UID: "c5406c9a-7ea7-491a-b625-af6eaffeeaac"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.477220 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.477389 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.477422 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.477473 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4khg9\" (UniqueName: \"kubernetes.io/projected/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-kube-api-access-4khg9\") pod \"ceilometer-0\" (UID: \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.477614 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dz72\" (UniqueName: \"kubernetes.io/projected/88ea4a34-8ad2-4c0e-a139-bca978c3da6a-kube-api-access-4dz72\") on node \"crc\" DevicePath \"\""
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.477652 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88ea4a34-8ad2-4c0e-a139-bca978c3da6a-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.477663 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5406c9a-7ea7-491a-b625-af6eaffeeaac-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.480441 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5406c9a-7ea7-491a-b625-af6eaffeeaac-kube-api-access-xrlpx" (OuterVolumeSpecName: "kube-api-access-xrlpx") pod "c5406c9a-7ea7-491a-b625-af6eaffeeaac" (UID: "c5406c9a-7ea7-491a-b625-af6eaffeeaac"). InnerVolumeSpecName "kube-api-access-xrlpx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.564359 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c5476f2-009e-4270-9579-1de380ae27bd" path="/var/lib/kubelet/pods/8c5476f2-009e-4270-9579-1de380ae27bd/volumes"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.578986 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-scripts\") pod \"ceilometer-0\" (UID: \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.579066 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-log-httpd\") pod \"ceilometer-0\" (UID: \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.579086 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-config-data\") pod \"ceilometer-0\" (UID: \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.579105 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-run-httpd\") pod \"ceilometer-0\" (UID: \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.579153 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.579218 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.579252 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.579289 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4khg9\" (UniqueName: \"kubernetes.io/projected/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-kube-api-access-4khg9\") pod \"ceilometer-0\" (UID: \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.579370 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrlpx\" (UniqueName: \"kubernetes.io/projected/c5406c9a-7ea7-491a-b625-af6eaffeeaac-kube-api-access-xrlpx\") on node \"crc\" DevicePath \"\""
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.579717 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-log-httpd\") pod \"ceilometer-0\" (UID: \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.580602 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-run-httpd\") pod \"ceilometer-0\" (UID: \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.583656 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-scripts\") pod \"ceilometer-0\" (UID: \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.585796 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-config-data\") pod \"ceilometer-0\" (UID: \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.586510 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.587858 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.597173 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.604772 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4khg9\" (UniqueName: \"kubernetes.io/projected/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-kube-api-access-4khg9\") pod \"ceilometer-0\" (UID: \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.651194 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.928582 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-vvt6t" event={"ID":"88ea4a34-8ad2-4c0e-a139-bca978c3da6a","Type":"ContainerDied","Data":"f8fa6e012dd4d203b91cae29467988a862cd6894c8463a5cdd79a145e91ba944"}
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.928814 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8fa6e012dd4d203b91cae29467988a862cd6894c8463a5cdd79a145e91ba944"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.928623 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-vvt6t"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.931264 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-fac6-account-create-update-gmsz7"
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.931250 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-fac6-account-create-update-gmsz7" event={"ID":"c5406c9a-7ea7-491a-b625-af6eaffeeaac","Type":"ContainerDied","Data":"5548acdfc4ea4821680b4425c638f8b5753e4bfa87bda9062171f8247d918848"}
Mar 09 19:01:39 crc kubenswrapper[4821]: I0309 19:01:39.931376 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5548acdfc4ea4821680b4425c638f8b5753e4bfa87bda9062171f8247d918848"
Mar 09 19:01:40 crc kubenswrapper[4821]: I0309 19:01:40.149627 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:01:40 crc kubenswrapper[4821]: I0309 19:01:40.948479 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ecd4d911-b650-4de9-b3c0-97d430fc3ab3","Type":"ContainerStarted","Data":"a90da69b48ed61779178febd09e1602cf186b8cc764689db66d24294d7cddb76"}
Mar 09 19:01:40 crc kubenswrapper[4821]: I0309 19:01:40.948847 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ecd4d911-b650-4de9-b3c0-97d430fc3ab3","Type":"ContainerStarted","Data":"0a6d008f7d1cf38b2e8e32e5a7186e8dc6bfb7fce23ff07e18c72c0abf6e15c9"}
Mar 09 19:01:41 crc kubenswrapper[4821]: I0309 19:01:41.660198 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-522pz"]
Mar 09 19:01:41 crc kubenswrapper[4821]: E0309 19:01:41.660762 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5406c9a-7ea7-491a-b625-af6eaffeeaac" containerName="mariadb-account-create-update"
Mar 09 19:01:41 crc kubenswrapper[4821]: I0309 19:01:41.660779 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5406c9a-7ea7-491a-b625-af6eaffeeaac" containerName="mariadb-account-create-update"
Mar 09 19:01:41 crc kubenswrapper[4821]: I0309 19:01:41.660928 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5406c9a-7ea7-491a-b625-af6eaffeeaac" containerName="mariadb-account-create-update"
Mar 09 19:01:41 crc kubenswrapper[4821]: I0309 19:01:41.661500 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-522pz"
Mar 09 19:01:41 crc kubenswrapper[4821]: I0309 19:01:41.664206 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data"
Mar 09 19:01:41 crc kubenswrapper[4821]: I0309 19:01:41.664542 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-rhkbv"
Mar 09 19:01:41 crc kubenswrapper[4821]: I0309 19:01:41.708614 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-522pz"]
Mar 09 19:01:41 crc kubenswrapper[4821]: I0309 19:01:41.809587 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e29c14d1-d5d2-413a-bae1-b117c9858d96-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-522pz\" (UID: \"e29c14d1-d5d2-413a-bae1-b117c9858d96\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-522pz"
Mar 09 19:01:41 crc kubenswrapper[4821]: I0309 19:01:41.809638 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65dqk\" (UniqueName: \"kubernetes.io/projected/e29c14d1-d5d2-413a-bae1-b117c9858d96-kube-api-access-65dqk\") pod \"watcher-kuttl-db-sync-522pz\" (UID: \"e29c14d1-d5d2-413a-bae1-b117c9858d96\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-522pz"
Mar 09 19:01:41 crc kubenswrapper[4821]: I0309 19:01:41.809696 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e29c14d1-d5d2-413a-bae1-b117c9858d96-db-sync-config-data\") pod \"watcher-kuttl-db-sync-522pz\" (UID: \"e29c14d1-d5d2-413a-bae1-b117c9858d96\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-522pz"
Mar 09 19:01:41 crc kubenswrapper[4821]: I0309 19:01:41.809737 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e29c14d1-d5d2-413a-bae1-b117c9858d96-config-data\") pod \"watcher-kuttl-db-sync-522pz\" (UID: \"e29c14d1-d5d2-413a-bae1-b117c9858d96\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-522pz"
Mar 09 19:01:41 crc kubenswrapper[4821]: I0309 19:01:41.911150 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e29c14d1-d5d2-413a-bae1-b117c9858d96-config-data\") pod \"watcher-kuttl-db-sync-522pz\" (UID: \"e29c14d1-d5d2-413a-bae1-b117c9858d96\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-522pz"
Mar 09 19:01:41 crc kubenswrapper[4821]: I0309 19:01:41.911240 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e29c14d1-d5d2-413a-bae1-b117c9858d96-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-522pz\" (UID: \"e29c14d1-d5d2-413a-bae1-b117c9858d96\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-522pz"
Mar 09 19:01:41 crc kubenswrapper[4821]: I0309 19:01:41.911291 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65dqk\" (UniqueName: \"kubernetes.io/projected/e29c14d1-d5d2-413a-bae1-b117c9858d96-kube-api-access-65dqk\") pod \"watcher-kuttl-db-sync-522pz\" (UID: \"e29c14d1-d5d2-413a-bae1-b117c9858d96\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-522pz"
Mar 09 19:01:41 crc kubenswrapper[4821]: I0309 19:01:41.914613 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e29c14d1-d5d2-413a-bae1-b117c9858d96-db-sync-config-data\") pod \"watcher-kuttl-db-sync-522pz\" (UID: \"e29c14d1-d5d2-413a-bae1-b117c9858d96\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-522pz"
Mar 09 19:01:41 crc kubenswrapper[4821]: I0309 19:01:41.928622 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e29c14d1-d5d2-413a-bae1-b117c9858d96-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-522pz\" (UID: \"e29c14d1-d5d2-413a-bae1-b117c9858d96\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-522pz"
Mar 09 19:01:41 crc kubenswrapper[4821]: I0309 19:01:41.929913 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e29c14d1-d5d2-413a-bae1-b117c9858d96-db-sync-config-data\") pod \"watcher-kuttl-db-sync-522pz\" (UID: \"e29c14d1-d5d2-413a-bae1-b117c9858d96\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-522pz"
Mar 09 19:01:41 crc kubenswrapper[4821]: I0309 19:01:41.931180 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65dqk\" (UniqueName: \"kubernetes.io/projected/e29c14d1-d5d2-413a-bae1-b117c9858d96-kube-api-access-65dqk\") pod \"watcher-kuttl-db-sync-522pz\" (UID: \"e29c14d1-d5d2-413a-bae1-b117c9858d96\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-522pz"
Mar 09 19:01:41 crc kubenswrapper[4821]: I0309 19:01:41.958915 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e29c14d1-d5d2-413a-bae1-b117c9858d96-config-data\") pod \"watcher-kuttl-db-sync-522pz\" (UID: \"e29c14d1-d5d2-413a-bae1-b117c9858d96\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-522pz"
Mar 09 19:01:41 crc kubenswrapper[4821]: I0309 19:01:41.968916 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ecd4d911-b650-4de9-b3c0-97d430fc3ab3","Type":"ContainerStarted","Data":"48a04adeb6e61ec609bd3bc852d7530c6cbf27cde85c4330e4b01d0ecd9ac8e8"}
Mar 09 19:01:41 crc kubenswrapper[4821]: I0309 19:01:41.976523 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-522pz"
Mar 09 19:01:42 crc kubenswrapper[4821]: W0309 19:01:42.478625 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode29c14d1_d5d2_413a_bae1_b117c9858d96.slice/crio-10b3b6287547fbb3331932b38784aefe97e53d988e4af2239269ae67452db08d WatchSource:0}: Error finding container 10b3b6287547fbb3331932b38784aefe97e53d988e4af2239269ae67452db08d: Status 404 returned error can't find the container with id 10b3b6287547fbb3331932b38784aefe97e53d988e4af2239269ae67452db08d
Mar 09 19:01:42 crc kubenswrapper[4821]: I0309 19:01:42.487678 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-522pz"]
Mar 09 19:01:42 crc kubenswrapper[4821]: I0309 19:01:42.985194 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-522pz" event={"ID":"e29c14d1-d5d2-413a-bae1-b117c9858d96","Type":"ContainerStarted","Data":"702cf48c91f5529fc8ace20857452dfb5eacdd69ceeee8afb1819df3c14c953c"}
Mar 09 19:01:42 crc kubenswrapper[4821]: I0309 19:01:42.985608 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-522pz" event={"ID":"e29c14d1-d5d2-413a-bae1-b117c9858d96","Type":"ContainerStarted","Data":"10b3b6287547fbb3331932b38784aefe97e53d988e4af2239269ae67452db08d"}
Mar 09 19:01:42 crc kubenswrapper[4821]: I0309 19:01:42.987729 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ecd4d911-b650-4de9-b3c0-97d430fc3ab3","Type":"ContainerStarted","Data":"f2c8dcc9f3c9313bfc8325d5704b1ce8641230155b3cc23684507d2ea04502f2"}
Mar 09 19:01:43 crc kubenswrapper[4821]: I0309 19:01:43.005270 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-522pz" podStartSLOduration=2.0052507999999998 podStartE2EDuration="2.0052508s" podCreationTimestamp="2026-03-09 19:01:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:01:42.998243379 +0000 UTC m=+2240.159619265" watchObservedRunningTime="2026-03-09 19:01:43.0052508 +0000 UTC m=+2240.166626656"
Mar 09 19:01:45 crc kubenswrapper[4821]: I0309 19:01:45.004706 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ecd4d911-b650-4de9-b3c0-97d430fc3ab3","Type":"ContainerStarted","Data":"4acea51b58126f993615330dc6f1063502c8596ca6cc10b851943c566684bc33"}
Mar 09 19:01:45 crc kubenswrapper[4821]: I0309 19:01:45.005059 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:01:45 crc kubenswrapper[4821]: I0309 19:01:45.006720 4821 generic.go:334] "Generic (PLEG): container finished" podID="e29c14d1-d5d2-413a-bae1-b117c9858d96" containerID="702cf48c91f5529fc8ace20857452dfb5eacdd69ceeee8afb1819df3c14c953c" exitCode=0
Mar 09 19:01:45 crc kubenswrapper[4821]: I0309 19:01:45.006779 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-522pz" event={"ID":"e29c14d1-d5d2-413a-bae1-b117c9858d96","Type":"ContainerDied","Data":"702cf48c91f5529fc8ace20857452dfb5eacdd69ceeee8afb1819df3c14c953c"}
Mar 09 19:01:45 crc kubenswrapper[4821]: I0309 19:01:45.035661 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.357780584 podStartE2EDuration="6.035641557s" podCreationTimestamp="2026-03-09 19:01:39 +0000 UTC" firstStartedPulling="2026-03-09 19:01:40.150451582 +0000 UTC m=+2237.311827438" lastFinishedPulling="2026-03-09 19:01:43.828312545 +0000 UTC m=+2240.989688411" observedRunningTime="2026-03-09 19:01:45.030030215 +0000 UTC m=+2242.191406071" watchObservedRunningTime="2026-03-09 19:01:45.035641557 +0000 UTC m=+2242.197017423"
Mar 09 19:01:46 crc kubenswrapper[4821]: I0309 19:01:46.391262 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-522pz"
Mar 09 19:01:46 crc kubenswrapper[4821]: I0309 19:01:46.430313 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e29c14d1-d5d2-413a-bae1-b117c9858d96-config-data\") pod \"e29c14d1-d5d2-413a-bae1-b117c9858d96\" (UID: \"e29c14d1-d5d2-413a-bae1-b117c9858d96\") "
Mar 09 19:01:46 crc kubenswrapper[4821]: I0309 19:01:46.430403 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65dqk\" (UniqueName: \"kubernetes.io/projected/e29c14d1-d5d2-413a-bae1-b117c9858d96-kube-api-access-65dqk\") pod \"e29c14d1-d5d2-413a-bae1-b117c9858d96\" (UID: \"e29c14d1-d5d2-413a-bae1-b117c9858d96\") "
Mar 09 19:01:46 crc kubenswrapper[4821]: I0309 19:01:46.430443 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e29c14d1-d5d2-413a-bae1-b117c9858d96-combined-ca-bundle\") pod \"e29c14d1-d5d2-413a-bae1-b117c9858d96\" (UID: \"e29c14d1-d5d2-413a-bae1-b117c9858d96\") "
Mar 09 19:01:46 crc kubenswrapper[4821]: I0309 19:01:46.430506 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e29c14d1-d5d2-413a-bae1-b117c9858d96-db-sync-config-data\") pod \"e29c14d1-d5d2-413a-bae1-b117c9858d96\" (UID: \"e29c14d1-d5d2-413a-bae1-b117c9858d96\") "
Mar 09 19:01:46 crc kubenswrapper[4821]: I0309 19:01:46.436665 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e29c14d1-d5d2-413a-bae1-b117c9858d96-kube-api-access-65dqk" (OuterVolumeSpecName: "kube-api-access-65dqk") pod "e29c14d1-d5d2-413a-bae1-b117c9858d96" (UID: "e29c14d1-d5d2-413a-bae1-b117c9858d96"). InnerVolumeSpecName "kube-api-access-65dqk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:01:46 crc kubenswrapper[4821]: I0309 19:01:46.449262 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e29c14d1-d5d2-413a-bae1-b117c9858d96-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e29c14d1-d5d2-413a-bae1-b117c9858d96" (UID: "e29c14d1-d5d2-413a-bae1-b117c9858d96"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:01:46 crc kubenswrapper[4821]: I0309 19:01:46.459576 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e29c14d1-d5d2-413a-bae1-b117c9858d96-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e29c14d1-d5d2-413a-bae1-b117c9858d96" (UID: "e29c14d1-d5d2-413a-bae1-b117c9858d96"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:01:46 crc kubenswrapper[4821]: I0309 19:01:46.503362 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e29c14d1-d5d2-413a-bae1-b117c9858d96-config-data" (OuterVolumeSpecName: "config-data") pod "e29c14d1-d5d2-413a-bae1-b117c9858d96" (UID: "e29c14d1-d5d2-413a-bae1-b117c9858d96"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:01:46 crc kubenswrapper[4821]: I0309 19:01:46.532051 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65dqk\" (UniqueName: \"kubernetes.io/projected/e29c14d1-d5d2-413a-bae1-b117c9858d96-kube-api-access-65dqk\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:46 crc kubenswrapper[4821]: I0309 19:01:46.532283 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e29c14d1-d5d2-413a-bae1-b117c9858d96-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:46 crc kubenswrapper[4821]: I0309 19:01:46.532356 4821 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e29c14d1-d5d2-413a-bae1-b117c9858d96-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:46 crc kubenswrapper[4821]: I0309 19:01:46.532422 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e29c14d1-d5d2-413a-bae1-b117c9858d96-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.029296 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-522pz" event={"ID":"e29c14d1-d5d2-413a-bae1-b117c9858d96","Type":"ContainerDied","Data":"10b3b6287547fbb3331932b38784aefe97e53d988e4af2239269ae67452db08d"} Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.029535 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10b3b6287547fbb3331932b38784aefe97e53d988e4af2239269ae67452db08d" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.029642 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-522pz" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.395664 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:01:47 crc kubenswrapper[4821]: E0309 19:01:47.395991 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e29c14d1-d5d2-413a-bae1-b117c9858d96" containerName="watcher-kuttl-db-sync" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.396002 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="e29c14d1-d5d2-413a-bae1-b117c9858d96" containerName="watcher-kuttl-db-sync" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.396144 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="e29c14d1-d5d2-413a-bae1-b117c9858d96" containerName="watcher-kuttl-db-sync" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.396651 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.401571 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-rhkbv" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.401711 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.414794 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.416116 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.419149 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.427561 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.429151 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.431751 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.436287 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.444480 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.461337 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.547678 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f95ec74e-8a1f-447e-89a9-747e843a1ce3-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f95ec74e-8a1f-447e-89a9-747e843a1ce3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.547722 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f95ec74e-8a1f-447e-89a9-747e843a1ce3-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f95ec74e-8a1f-447e-89a9-747e843a1ce3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.547759 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64-logs\") pod \"watcher-kuttl-api-0\" (UID: \"e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.547861 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f95ec74e-8a1f-447e-89a9-747e843a1ce3-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f95ec74e-8a1f-447e-89a9-747e843a1ce3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.547917 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7hwt\" (UniqueName: \"kubernetes.io/projected/caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6-kube-api-access-j7hwt\") pod \"watcher-kuttl-applier-0\" (UID: \"caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.547941 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.547997 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ktdsq\" (UniqueName: \"kubernetes.io/projected/f95ec74e-8a1f-447e-89a9-747e843a1ce3-kube-api-access-ktdsq\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f95ec74e-8a1f-447e-89a9-747e843a1ce3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.548037 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.548068 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrf49\" (UniqueName: \"kubernetes.io/projected/e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64-kube-api-access-xrf49\") pod \"watcher-kuttl-api-0\" (UID: \"e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.548098 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f95ec74e-8a1f-447e-89a9-747e843a1ce3-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f95ec74e-8a1f-447e-89a9-747e843a1ce3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.548132 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.548190 4821 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.548250 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.548285 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.649755 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.650137 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.650214 4821 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ktdsq\" (UniqueName: \"kubernetes.io/projected/f95ec74e-8a1f-447e-89a9-747e843a1ce3-kube-api-access-ktdsq\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f95ec74e-8a1f-447e-89a9-747e843a1ce3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.650568 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.650595 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrf49\" (UniqueName: \"kubernetes.io/projected/e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64-kube-api-access-xrf49\") pod \"watcher-kuttl-api-0\" (UID: \"e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.651139 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f95ec74e-8a1f-447e-89a9-747e843a1ce3-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f95ec74e-8a1f-447e-89a9-747e843a1ce3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.651175 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.651221 4821 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.651254 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.651273 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.651354 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f95ec74e-8a1f-447e-89a9-747e843a1ce3-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f95ec74e-8a1f-447e-89a9-747e843a1ce3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.651378 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f95ec74e-8a1f-447e-89a9-747e843a1ce3-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f95ec74e-8a1f-447e-89a9-747e843a1ce3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.651409 4821 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64-logs\") pod \"watcher-kuttl-api-0\" (UID: \"e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.651460 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f95ec74e-8a1f-447e-89a9-747e843a1ce3-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f95ec74e-8a1f-447e-89a9-747e843a1ce3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.651488 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7hwt\" (UniqueName: \"kubernetes.io/projected/caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6-kube-api-access-j7hwt\") pod \"watcher-kuttl-applier-0\" (UID: \"caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.651930 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f95ec74e-8a1f-447e-89a9-747e843a1ce3-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f95ec74e-8a1f-447e-89a9-747e843a1ce3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.652015 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64-logs\") pod \"watcher-kuttl-api-0\" (UID: \"e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.655968 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.663015 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.663528 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f95ec74e-8a1f-447e-89a9-747e843a1ce3-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f95ec74e-8a1f-447e-89a9-747e843a1ce3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.663604 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.663961 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.664090 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/f95ec74e-8a1f-447e-89a9-747e843a1ce3-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f95ec74e-8a1f-447e-89a9-747e843a1ce3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.664956 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.675042 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrf49\" (UniqueName: \"kubernetes.io/projected/e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64-kube-api-access-xrf49\") pod \"watcher-kuttl-api-0\" (UID: \"e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.678945 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f95ec74e-8a1f-447e-89a9-747e843a1ce3-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f95ec74e-8a1f-447e-89a9-747e843a1ce3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.680986 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7hwt\" (UniqueName: \"kubernetes.io/projected/caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6-kube-api-access-j7hwt\") pod \"watcher-kuttl-applier-0\" (UID: \"caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.681341 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktdsq\" (UniqueName: 
\"kubernetes.io/projected/f95ec74e-8a1f-447e-89a9-747e843a1ce3-kube-api-access-ktdsq\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"f95ec74e-8a1f-447e-89a9-747e843a1ce3\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.712649 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.738528 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:47 crc kubenswrapper[4821]: I0309 19:01:47.758946 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:48 crc kubenswrapper[4821]: I0309 19:01:48.211732 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:01:48 crc kubenswrapper[4821]: I0309 19:01:48.337881 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:01:48 crc kubenswrapper[4821]: I0309 19:01:48.349670 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:01:48 crc kubenswrapper[4821]: W0309 19:01:48.354483 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode10f4d2b_6099_4cdd_8f41_d3ad88ea2a64.slice/crio-90aff8914b3964c205ab9fa48c0153082d676fc90efc423a1ef24fea7a209362 WatchSource:0}: Error finding container 90aff8914b3964c205ab9fa48c0153082d676fc90efc423a1ef24fea7a209362: Status 404 returned error can't find the container with id 90aff8914b3964c205ab9fa48c0153082d676fc90efc423a1ef24fea7a209362 Mar 09 19:01:48 crc kubenswrapper[4821]: W0309 19:01:48.358962 4821 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf95ec74e_8a1f_447e_89a9_747e843a1ce3.slice/crio-23422a7b75503dd397ffe6b0489311f0675d73c1a0af64202ebbd0e82326c27c WatchSource:0}: Error finding container 23422a7b75503dd397ffe6b0489311f0675d73c1a0af64202ebbd0e82326c27c: Status 404 returned error can't find the container with id 23422a7b75503dd397ffe6b0489311f0675d73c1a0af64202ebbd0e82326c27c Mar 09 19:01:49 crc kubenswrapper[4821]: I0309 19:01:49.047487 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6","Type":"ContainerStarted","Data":"c4069136c5ec19700c5da11cce63ab0c1f652d4360b9b669933a9f298ece8ff4"} Mar 09 19:01:49 crc kubenswrapper[4821]: I0309 19:01:49.047931 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6","Type":"ContainerStarted","Data":"c79c373f567a9e0db1bdc24f59329b3b559fe9e8e83029ba3d33a9d781a24738"} Mar 09 19:01:49 crc kubenswrapper[4821]: I0309 19:01:49.049539 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"f95ec74e-8a1f-447e-89a9-747e843a1ce3","Type":"ContainerStarted","Data":"2e4355930ab2401a750507a2f552335b07ac67c99f24ab0da89c8d75c3e3a0d6"} Mar 09 19:01:49 crc kubenswrapper[4821]: I0309 19:01:49.049579 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"f95ec74e-8a1f-447e-89a9-747e843a1ce3","Type":"ContainerStarted","Data":"23422a7b75503dd397ffe6b0489311f0675d73c1a0af64202ebbd0e82326c27c"} Mar 09 19:01:49 crc kubenswrapper[4821]: I0309 19:01:49.054948 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
event={"ID":"e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64","Type":"ContainerStarted","Data":"a9d5cdc12cef1772553133518c43788e7f1f7cf67356d0828db1d6f8ccc255e9"} Mar 09 19:01:49 crc kubenswrapper[4821]: I0309 19:01:49.055014 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64","Type":"ContainerStarted","Data":"31d99e1592d55232f1a7447cee79dcf8a96aeff9fd75e6b5a9958b5b4586f1ab"} Mar 09 19:01:49 crc kubenswrapper[4821]: I0309 19:01:49.055028 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64","Type":"ContainerStarted","Data":"90aff8914b3964c205ab9fa48c0153082d676fc90efc423a1ef24fea7a209362"} Mar 09 19:01:49 crc kubenswrapper[4821]: I0309 19:01:49.055740 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:49 crc kubenswrapper[4821]: I0309 19:01:49.087438 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.087419026 podStartE2EDuration="2.087419026s" podCreationTimestamp="2026-03-09 19:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:01:49.08241373 +0000 UTC m=+2246.243789606" watchObservedRunningTime="2026-03-09 19:01:49.087419026 +0000 UTC m=+2246.248794892" Mar 09 19:01:49 crc kubenswrapper[4821]: I0309 19:01:49.110024 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.110001699 podStartE2EDuration="2.110001699s" podCreationTimestamp="2026-03-09 19:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 
19:01:49.100270504 +0000 UTC m=+2246.261646360" watchObservedRunningTime="2026-03-09 19:01:49.110001699 +0000 UTC m=+2246.271377575" Mar 09 19:01:49 crc kubenswrapper[4821]: I0309 19:01:49.137927 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.137905176 podStartE2EDuration="2.137905176s" podCreationTimestamp="2026-03-09 19:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:01:49.132875079 +0000 UTC m=+2246.294250945" watchObservedRunningTime="2026-03-09 19:01:49.137905176 +0000 UTC m=+2246.299281042" Mar 09 19:01:51 crc kubenswrapper[4821]: I0309 19:01:51.816156 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:52 crc kubenswrapper[4821]: I0309 19:01:52.713160 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:52 crc kubenswrapper[4821]: I0309 19:01:52.760550 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:57 crc kubenswrapper[4821]: I0309 19:01:57.713856 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:57 crc kubenswrapper[4821]: I0309 19:01:57.739987 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:57 crc kubenswrapper[4821]: I0309 19:01:57.745912 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:57 crc kubenswrapper[4821]: I0309 19:01:57.760159 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:57 crc kubenswrapper[4821]: I0309 19:01:57.769882 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:57 crc kubenswrapper[4821]: I0309 19:01:57.772304 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:58 crc kubenswrapper[4821]: I0309 19:01:58.132217 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:58 crc kubenswrapper[4821]: I0309 19:01:58.135450 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:01:58 crc kubenswrapper[4821]: I0309 19:01:58.152633 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:01:58 crc kubenswrapper[4821]: I0309 19:01:58.165160 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:01:59 crc kubenswrapper[4821]: I0309 19:01:59.562598 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:01:59 crc kubenswrapper[4821]: I0309 19:01:59.563043 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="ecd4d911-b650-4de9-b3c0-97d430fc3ab3" containerName="ceilometer-central-agent" containerID="cri-o://a90da69b48ed61779178febd09e1602cf186b8cc764689db66d24294d7cddb76" gracePeriod=30 Mar 09 19:01:59 crc kubenswrapper[4821]: I0309 19:01:59.565154 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="ecd4d911-b650-4de9-b3c0-97d430fc3ab3" containerName="proxy-httpd" 
containerID="cri-o://4acea51b58126f993615330dc6f1063502c8596ca6cc10b851943c566684bc33" gracePeriod=30 Mar 09 19:01:59 crc kubenswrapper[4821]: I0309 19:01:59.565218 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="ecd4d911-b650-4de9-b3c0-97d430fc3ab3" containerName="sg-core" containerID="cri-o://f2c8dcc9f3c9313bfc8325d5704b1ce8641230155b3cc23684507d2ea04502f2" gracePeriod=30 Mar 09 19:01:59 crc kubenswrapper[4821]: I0309 19:01:59.565251 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="ecd4d911-b650-4de9-b3c0-97d430fc3ab3" containerName="ceilometer-notification-agent" containerID="cri-o://48a04adeb6e61ec609bd3bc852d7530c6cbf27cde85c4330e4b01d0ecd9ac8e8" gracePeriod=30 Mar 09 19:01:59 crc kubenswrapper[4821]: I0309 19:01:59.581690 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="ecd4d911-b650-4de9-b3c0-97d430fc3ab3" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.173:3000/\": EOF" Mar 09 19:01:59 crc kubenswrapper[4821]: I0309 19:01:59.913285 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 09 19:01:59 crc kubenswrapper[4821]: I0309 19:01:59.913354 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 09 19:01:59 crc kubenswrapper[4821]: I0309 19:01:59.915385 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-db-sync-522pz"] Mar 09 19:01:59 crc kubenswrapper[4821]: I0309 19:01:59.923609 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-522pz"] Mar 09 19:01:59 crc kubenswrapper[4821]: I0309 19:01:59.949836 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcherfac6-account-delete-txkh4"] Mar 09 19:01:59 crc kubenswrapper[4821]: I0309 19:01:59.951215 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherfac6-account-delete-txkh4" Mar 09 19:01:59 crc kubenswrapper[4821]: I0309 19:01:59.957542 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherfac6-account-delete-txkh4"] Mar 09 19:01:59 crc kubenswrapper[4821]: I0309 19:01:59.962239 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22fc5e62-b5e8-4348-9f2f-f806d570155c-operator-scripts\") pod \"watcherfac6-account-delete-txkh4\" (UID: \"22fc5e62-b5e8-4348-9f2f-f806d570155c\") " pod="watcher-kuttl-default/watcherfac6-account-delete-txkh4" Mar 09 19:01:59 crc kubenswrapper[4821]: I0309 19:01:59.962722 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k5v2\" (UniqueName: \"kubernetes.io/projected/22fc5e62-b5e8-4348-9f2f-f806d570155c-kube-api-access-2k5v2\") pod \"watcherfac6-account-delete-txkh4\" (UID: \"22fc5e62-b5e8-4348-9f2f-f806d570155c\") " pod="watcher-kuttl-default/watcherfac6-account-delete-txkh4" Mar 09 19:02:00 crc kubenswrapper[4821]: I0309 19:02:00.044976 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:02:00 crc kubenswrapper[4821]: I0309 19:02:00.065434 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22fc5e62-b5e8-4348-9f2f-f806d570155c-operator-scripts\") pod \"watcherfac6-account-delete-txkh4\" (UID: \"22fc5e62-b5e8-4348-9f2f-f806d570155c\") " pod="watcher-kuttl-default/watcherfac6-account-delete-txkh4" Mar 09 19:02:00 crc kubenswrapper[4821]: I0309 19:02:00.065631 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k5v2\" (UniqueName: \"kubernetes.io/projected/22fc5e62-b5e8-4348-9f2f-f806d570155c-kube-api-access-2k5v2\") pod \"watcherfac6-account-delete-txkh4\" (UID: \"22fc5e62-b5e8-4348-9f2f-f806d570155c\") " pod="watcher-kuttl-default/watcherfac6-account-delete-txkh4" Mar 09 19:02:00 crc kubenswrapper[4821]: I0309 19:02:00.066603 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22fc5e62-b5e8-4348-9f2f-f806d570155c-operator-scripts\") pod \"watcherfac6-account-delete-txkh4\" (UID: \"22fc5e62-b5e8-4348-9f2f-f806d570155c\") " pod="watcher-kuttl-default/watcherfac6-account-delete-txkh4" Mar 09 19:02:00 crc kubenswrapper[4821]: I0309 19:02:00.083113 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:02:00 crc kubenswrapper[4821]: I0309 19:02:00.095008 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k5v2\" (UniqueName: \"kubernetes.io/projected/22fc5e62-b5e8-4348-9f2f-f806d570155c-kube-api-access-2k5v2\") pod \"watcherfac6-account-delete-txkh4\" (UID: \"22fc5e62-b5e8-4348-9f2f-f806d570155c\") " pod="watcher-kuttl-default/watcherfac6-account-delete-txkh4" Mar 09 19:02:00 crc kubenswrapper[4821]: I0309 19:02:00.130113 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:02:00 crc kubenswrapper[4821]: I0309 19:02:00.169795 4821 generic.go:334] "Generic (PLEG): container finished" 
podID="ecd4d911-b650-4de9-b3c0-97d430fc3ab3" containerID="4acea51b58126f993615330dc6f1063502c8596ca6cc10b851943c566684bc33" exitCode=0 Mar 09 19:02:00 crc kubenswrapper[4821]: I0309 19:02:00.169830 4821 generic.go:334] "Generic (PLEG): container finished" podID="ecd4d911-b650-4de9-b3c0-97d430fc3ab3" containerID="f2c8dcc9f3c9313bfc8325d5704b1ce8641230155b3cc23684507d2ea04502f2" exitCode=2 Mar 09 19:02:00 crc kubenswrapper[4821]: I0309 19:02:00.169837 4821 generic.go:334] "Generic (PLEG): container finished" podID="ecd4d911-b650-4de9-b3c0-97d430fc3ab3" containerID="a90da69b48ed61779178febd09e1602cf186b8cc764689db66d24294d7cddb76" exitCode=0 Mar 09 19:02:00 crc kubenswrapper[4821]: I0309 19:02:00.170057 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64" containerName="watcher-kuttl-api-log" containerID="cri-o://31d99e1592d55232f1a7447cee79dcf8a96aeff9fd75e6b5a9958b5b4586f1ab" gracePeriod=30 Mar 09 19:02:00 crc kubenswrapper[4821]: I0309 19:02:00.170133 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ecd4d911-b650-4de9-b3c0-97d430fc3ab3","Type":"ContainerDied","Data":"4acea51b58126f993615330dc6f1063502c8596ca6cc10b851943c566684bc33"} Mar 09 19:02:00 crc kubenswrapper[4821]: I0309 19:02:00.170158 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ecd4d911-b650-4de9-b3c0-97d430fc3ab3","Type":"ContainerDied","Data":"f2c8dcc9f3c9313bfc8325d5704b1ce8641230155b3cc23684507d2ea04502f2"} Mar 09 19:02:00 crc kubenswrapper[4821]: I0309 19:02:00.170166 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ecd4d911-b650-4de9-b3c0-97d430fc3ab3","Type":"ContainerDied","Data":"a90da69b48ed61779178febd09e1602cf186b8cc764689db66d24294d7cddb76"} Mar 09 19:02:00 crc kubenswrapper[4821]: I0309 
19:02:00.170240 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6" containerName="watcher-applier" containerID="cri-o://c4069136c5ec19700c5da11cce63ab0c1f652d4360b9b669933a9f298ece8ff4" gracePeriod=30 Mar 09 19:02:00 crc kubenswrapper[4821]: I0309 19:02:00.170500 4821 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" secret="" err="secret \"watcher-watcher-kuttl-dockercfg-rhkbv\" not found" Mar 09 19:02:00 crc kubenswrapper[4821]: I0309 19:02:00.170691 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64" containerName="watcher-api" containerID="cri-o://a9d5cdc12cef1772553133518c43788e7f1f7cf67356d0828db1d6f8ccc255e9" gracePeriod=30 Mar 09 19:02:00 crc kubenswrapper[4821]: I0309 19:02:00.185743 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29551382-wcqzq"] Mar 09 19:02:00 crc kubenswrapper[4821]: I0309 19:02:00.187153 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29551382-wcqzq" Mar 09 19:02:00 crc kubenswrapper[4821]: I0309 19:02:00.189766 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 09 19:02:00 crc kubenswrapper[4821]: I0309 19:02:00.190230 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 09 19:02:00 crc kubenswrapper[4821]: I0309 19:02:00.190445 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-mxq7c" Mar 09 19:02:00 crc kubenswrapper[4821]: I0309 19:02:00.221104 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551382-wcqzq"] Mar 09 19:02:00 crc kubenswrapper[4821]: I0309 19:02:00.271001 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zr2m\" (UniqueName: \"kubernetes.io/projected/7fc4d5b1-3818-4f5c-91b9-afe46d95e537-kube-api-access-6zr2m\") pod \"auto-csr-approver-29551382-wcqzq\" (UID: \"7fc4d5b1-3818-4f5c-91b9-afe46d95e537\") " pod="openshift-infra/auto-csr-approver-29551382-wcqzq" Mar 09 19:02:00 crc kubenswrapper[4821]: E0309 19:02:00.271587 4821 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Mar 09 19:02:00 crc kubenswrapper[4821]: E0309 19:02:00.271662 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f95ec74e-8a1f-447e-89a9-747e843a1ce3-config-data podName:f95ec74e-8a1f-447e-89a9-747e843a1ce3 nodeName:}" failed. No retries permitted until 2026-03-09 19:02:00.77164201 +0000 UTC m=+2257.933017866 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/f95ec74e-8a1f-447e-89a9-747e843a1ce3-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "f95ec74e-8a1f-447e-89a9-747e843a1ce3") : secret "watcher-kuttl-decision-engine-config-data" not found Mar 09 19:02:00 crc kubenswrapper[4821]: I0309 19:02:00.272722 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherfac6-account-delete-txkh4" Mar 09 19:02:00 crc kubenswrapper[4821]: I0309 19:02:00.371930 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zr2m\" (UniqueName: \"kubernetes.io/projected/7fc4d5b1-3818-4f5c-91b9-afe46d95e537-kube-api-access-6zr2m\") pod \"auto-csr-approver-29551382-wcqzq\" (UID: \"7fc4d5b1-3818-4f5c-91b9-afe46d95e537\") " pod="openshift-infra/auto-csr-approver-29551382-wcqzq" Mar 09 19:02:00 crc kubenswrapper[4821]: I0309 19:02:00.394153 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zr2m\" (UniqueName: \"kubernetes.io/projected/7fc4d5b1-3818-4f5c-91b9-afe46d95e537-kube-api-access-6zr2m\") pod \"auto-csr-approver-29551382-wcqzq\" (UID: \"7fc4d5b1-3818-4f5c-91b9-afe46d95e537\") " pod="openshift-infra/auto-csr-approver-29551382-wcqzq" Mar 09 19:02:00 crc kubenswrapper[4821]: I0309 19:02:00.523745 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29551382-wcqzq" Mar 09 19:02:00 crc kubenswrapper[4821]: E0309 19:02:00.778932 4821 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Mar 09 19:02:00 crc kubenswrapper[4821]: E0309 19:02:00.778990 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f95ec74e-8a1f-447e-89a9-747e843a1ce3-config-data podName:f95ec74e-8a1f-447e-89a9-747e843a1ce3 nodeName:}" failed. No retries permitted until 2026-03-09 19:02:01.778975507 +0000 UTC m=+2258.940351363 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/f95ec74e-8a1f-447e-89a9-747e843a1ce3-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "f95ec74e-8a1f-447e-89a9-747e843a1ce3") : secret "watcher-kuttl-decision-engine-config-data" not found Mar 09 19:02:00 crc kubenswrapper[4821]: I0309 19:02:00.829469 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherfac6-account-delete-txkh4"] Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.076804 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551382-wcqzq"] Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.186756 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.212395 4821 generic.go:334] "Generic (PLEG): container finished" podID="ecd4d911-b650-4de9-b3c0-97d430fc3ab3" containerID="48a04adeb6e61ec609bd3bc852d7530c6cbf27cde85c4330e4b01d0ecd9ac8e8" exitCode=0 Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.212490 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ecd4d911-b650-4de9-b3c0-97d430fc3ab3","Type":"ContainerDied","Data":"48a04adeb6e61ec609bd3bc852d7530c6cbf27cde85c4330e4b01d0ecd9ac8e8"} Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.212518 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ecd4d911-b650-4de9-b3c0-97d430fc3ab3","Type":"ContainerDied","Data":"0a6d008f7d1cf38b2e8e32e5a7186e8dc6bfb7fce23ff07e18c72c0abf6e15c9"} Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.212542 4821 scope.go:117] "RemoveContainer" containerID="4acea51b58126f993615330dc6f1063502c8596ca6cc10b851943c566684bc33" Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.219864 4821 generic.go:334] "Generic (PLEG): container finished" podID="e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64" containerID="a9d5cdc12cef1772553133518c43788e7f1f7cf67356d0828db1d6f8ccc255e9" exitCode=0 Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.220023 4821 generic.go:334] "Generic (PLEG): container finished" podID="e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64" containerID="31d99e1592d55232f1a7447cee79dcf8a96aeff9fd75e6b5a9958b5b4586f1ab" exitCode=143 Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.220166 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64","Type":"ContainerDied","Data":"a9d5cdc12cef1772553133518c43788e7f1f7cf67356d0828db1d6f8ccc255e9"} Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 
19:02:01.220280 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64","Type":"ContainerDied","Data":"31d99e1592d55232f1a7447cee79dcf8a96aeff9fd75e6b5a9958b5b4586f1ab"} Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.223276 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551382-wcqzq" event={"ID":"7fc4d5b1-3818-4f5c-91b9-afe46d95e537","Type":"ContainerStarted","Data":"12209beb572c91442904da45d5566db06fb193bb5a3d935d1cb66eb1dc01750a"} Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.226174 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="f95ec74e-8a1f-447e-89a9-747e843a1ce3" containerName="watcher-decision-engine" containerID="cri-o://2e4355930ab2401a750507a2f552335b07ac67c99f24ab0da89c8d75c3e3a0d6" gracePeriod=30 Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.227236 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherfac6-account-delete-txkh4" event={"ID":"22fc5e62-b5e8-4348-9f2f-f806d570155c","Type":"ContainerStarted","Data":"3dd92950ff7bf08939a81fca8505f4e6e55a71a8e83c44ab861bd570d8d28e14"} Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.254529 4821 scope.go:117] "RemoveContainer" containerID="f2c8dcc9f3c9313bfc8325d5704b1ce8641230155b3cc23684507d2ea04502f2" Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.256990 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcherfac6-account-delete-txkh4" podStartSLOduration=2.256959257 podStartE2EDuration="2.256959257s" podCreationTimestamp="2026-03-09 19:01:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:02:01.241162418 +0000 UTC m=+2258.402538274" 
watchObservedRunningTime="2026-03-09 19:02:01.256959257 +0000 UTC m=+2258.418335153" Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.278437 4821 scope.go:117] "RemoveContainer" containerID="48a04adeb6e61ec609bd3bc852d7530c6cbf27cde85c4330e4b01d0ecd9ac8e8" Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.296702 4821 scope.go:117] "RemoveContainer" containerID="a90da69b48ed61779178febd09e1602cf186b8cc764689db66d24294d7cddb76" Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.299552 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-config-data\") pod \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\" (UID: \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\") " Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.299604 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4khg9\" (UniqueName: \"kubernetes.io/projected/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-kube-api-access-4khg9\") pod \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\" (UID: \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\") " Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.299633 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-sg-core-conf-yaml\") pod \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\" (UID: \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\") " Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.299654 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-scripts\") pod \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\" (UID: \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\") " Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.299675 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-log-httpd\") pod \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\" (UID: \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\") " Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.299696 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-run-httpd\") pod \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\" (UID: \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\") " Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.299735 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-combined-ca-bundle\") pod \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\" (UID: \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\") " Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.299773 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-ceilometer-tls-certs\") pod \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\" (UID: \"ecd4d911-b650-4de9-b3c0-97d430fc3ab3\") " Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.300283 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ecd4d911-b650-4de9-b3c0-97d430fc3ab3" (UID: "ecd4d911-b650-4de9-b3c0-97d430fc3ab3"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.303411 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ecd4d911-b650-4de9-b3c0-97d430fc3ab3" (UID: "ecd4d911-b650-4de9-b3c0-97d430fc3ab3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.305102 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-kube-api-access-4khg9" (OuterVolumeSpecName: "kube-api-access-4khg9") pod "ecd4d911-b650-4de9-b3c0-97d430fc3ab3" (UID: "ecd4d911-b650-4de9-b3c0-97d430fc3ab3"). InnerVolumeSpecName "kube-api-access-4khg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.317727 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-scripts" (OuterVolumeSpecName: "scripts") pod "ecd4d911-b650-4de9-b3c0-97d430fc3ab3" (UID: "ecd4d911-b650-4de9-b3c0-97d430fc3ab3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.332334 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ecd4d911-b650-4de9-b3c0-97d430fc3ab3" (UID: "ecd4d911-b650-4de9-b3c0-97d430fc3ab3"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.332508 4821 scope.go:117] "RemoveContainer" containerID="4acea51b58126f993615330dc6f1063502c8596ca6cc10b851943c566684bc33" Mar 09 19:02:01 crc kubenswrapper[4821]: E0309 19:02:01.338473 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4acea51b58126f993615330dc6f1063502c8596ca6cc10b851943c566684bc33\": container with ID starting with 4acea51b58126f993615330dc6f1063502c8596ca6cc10b851943c566684bc33 not found: ID does not exist" containerID="4acea51b58126f993615330dc6f1063502c8596ca6cc10b851943c566684bc33" Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.338512 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4acea51b58126f993615330dc6f1063502c8596ca6cc10b851943c566684bc33"} err="failed to get container status \"4acea51b58126f993615330dc6f1063502c8596ca6cc10b851943c566684bc33\": rpc error: code = NotFound desc = could not find container \"4acea51b58126f993615330dc6f1063502c8596ca6cc10b851943c566684bc33\": container with ID starting with 4acea51b58126f993615330dc6f1063502c8596ca6cc10b851943c566684bc33 not found: ID does not exist" Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.338559 4821 scope.go:117] "RemoveContainer" containerID="f2c8dcc9f3c9313bfc8325d5704b1ce8641230155b3cc23684507d2ea04502f2" Mar 09 19:02:01 crc kubenswrapper[4821]: E0309 19:02:01.345414 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2c8dcc9f3c9313bfc8325d5704b1ce8641230155b3cc23684507d2ea04502f2\": container with ID starting with f2c8dcc9f3c9313bfc8325d5704b1ce8641230155b3cc23684507d2ea04502f2 not found: ID does not exist" containerID="f2c8dcc9f3c9313bfc8325d5704b1ce8641230155b3cc23684507d2ea04502f2" Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.345574 
4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2c8dcc9f3c9313bfc8325d5704b1ce8641230155b3cc23684507d2ea04502f2"} err="failed to get container status \"f2c8dcc9f3c9313bfc8325d5704b1ce8641230155b3cc23684507d2ea04502f2\": rpc error: code = NotFound desc = could not find container \"f2c8dcc9f3c9313bfc8325d5704b1ce8641230155b3cc23684507d2ea04502f2\": container with ID starting with f2c8dcc9f3c9313bfc8325d5704b1ce8641230155b3cc23684507d2ea04502f2 not found: ID does not exist" Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.345658 4821 scope.go:117] "RemoveContainer" containerID="48a04adeb6e61ec609bd3bc852d7530c6cbf27cde85c4330e4b01d0ecd9ac8e8" Mar 09 19:02:01 crc kubenswrapper[4821]: E0309 19:02:01.346623 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48a04adeb6e61ec609bd3bc852d7530c6cbf27cde85c4330e4b01d0ecd9ac8e8\": container with ID starting with 48a04adeb6e61ec609bd3bc852d7530c6cbf27cde85c4330e4b01d0ecd9ac8e8 not found: ID does not exist" containerID="48a04adeb6e61ec609bd3bc852d7530c6cbf27cde85c4330e4b01d0ecd9ac8e8" Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.346661 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48a04adeb6e61ec609bd3bc852d7530c6cbf27cde85c4330e4b01d0ecd9ac8e8"} err="failed to get container status \"48a04adeb6e61ec609bd3bc852d7530c6cbf27cde85c4330e4b01d0ecd9ac8e8\": rpc error: code = NotFound desc = could not find container \"48a04adeb6e61ec609bd3bc852d7530c6cbf27cde85c4330e4b01d0ecd9ac8e8\": container with ID starting with 48a04adeb6e61ec609bd3bc852d7530c6cbf27cde85c4330e4b01d0ecd9ac8e8 not found: ID does not exist" Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.346686 4821 scope.go:117] "RemoveContainer" containerID="a90da69b48ed61779178febd09e1602cf186b8cc764689db66d24294d7cddb76" Mar 09 19:02:01 crc kubenswrapper[4821]: E0309 
19:02:01.346923 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a90da69b48ed61779178febd09e1602cf186b8cc764689db66d24294d7cddb76\": container with ID starting with a90da69b48ed61779178febd09e1602cf186b8cc764689db66d24294d7cddb76 not found: ID does not exist" containerID="a90da69b48ed61779178febd09e1602cf186b8cc764689db66d24294d7cddb76"
Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.346946 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a90da69b48ed61779178febd09e1602cf186b8cc764689db66d24294d7cddb76"} err="failed to get container status \"a90da69b48ed61779178febd09e1602cf186b8cc764689db66d24294d7cddb76\": rpc error: code = NotFound desc = could not find container \"a90da69b48ed61779178febd09e1602cf186b8cc764689db66d24294d7cddb76\": container with ID starting with a90da69b48ed61779178febd09e1602cf186b8cc764689db66d24294d7cddb76 not found: ID does not exist"
Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.357556 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "ecd4d911-b650-4de9-b3c0-97d430fc3ab3" (UID: "ecd4d911-b650-4de9-b3c0-97d430fc3ab3"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.403695 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4khg9\" (UniqueName: \"kubernetes.io/projected/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-kube-api-access-4khg9\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.403739 4821 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.403752 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-scripts\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.403763 4821 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.403776 4821 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.403791 4821 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.414655 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ecd4d911-b650-4de9-b3c0-97d430fc3ab3" (UID: "ecd4d911-b650-4de9-b3c0-97d430fc3ab3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.480446 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-config-data" (OuterVolumeSpecName: "config-data") pod "ecd4d911-b650-4de9-b3c0-97d430fc3ab3" (UID: "ecd4d911-b650-4de9-b3c0-97d430fc3ab3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.504919 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-config-data\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.504955 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecd4d911-b650-4de9-b3c0-97d430fc3ab3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.530034 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.561272 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e29c14d1-d5d2-413a-bae1-b117c9858d96" path="/var/lib/kubelet/pods/e29c14d1-d5d2-413a-bae1-b117c9858d96/volumes"
Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.606142 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64-config-data\") pod \"e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64\" (UID: \"e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64\") "
Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.606231 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64-combined-ca-bundle\") pod \"e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64\" (UID: \"e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64\") "
Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.606272 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64-custom-prometheus-ca\") pod \"e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64\" (UID: \"e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64\") "
Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.606301 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64-logs\") pod \"e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64\" (UID: \"e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64\") "
Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.606369 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrf49\" (UniqueName: \"kubernetes.io/projected/e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64-kube-api-access-xrf49\") pod \"e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64\" (UID: \"e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64\") "
Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.606787 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64-logs" (OuterVolumeSpecName: "logs") pod "e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64" (UID: "e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.620615 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64-kube-api-access-xrf49" (OuterVolumeSpecName: "kube-api-access-xrf49") pod "e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64" (UID: "e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64"). InnerVolumeSpecName "kube-api-access-xrf49". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.630622 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64" (UID: "e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.643630 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64" (UID: "e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.665232 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64-config-data" (OuterVolumeSpecName: "config-data") pod "e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64" (UID: "e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.708066 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrf49\" (UniqueName: \"kubernetes.io/projected/e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64-kube-api-access-xrf49\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.708095 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64-config-data\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.708105 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.708116 4821 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:01 crc kubenswrapper[4821]: I0309 19:02:01.708127 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64-logs\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:01 crc kubenswrapper[4821]: E0309 19:02:01.809659 4821 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found
Mar 09 19:02:01 crc kubenswrapper[4821]: E0309 19:02:01.810041 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f95ec74e-8a1f-447e-89a9-747e843a1ce3-config-data podName:f95ec74e-8a1f-447e-89a9-747e843a1ce3 nodeName:}" failed. No retries permitted until 2026-03-09 19:02:03.810018345 +0000 UTC m=+2260.971394201 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/f95ec74e-8a1f-447e-89a9-747e843a1ce3-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "f95ec74e-8a1f-447e-89a9-747e843a1ce3") : secret "watcher-kuttl-decision-engine-config-data" not found
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.239191 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64","Type":"ContainerDied","Data":"90aff8914b3964c205ab9fa48c0153082d676fc90efc423a1ef24fea7a209362"}
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.239244 4821 scope.go:117] "RemoveContainer" containerID="a9d5cdc12cef1772553133518c43788e7f1f7cf67356d0828db1d6f8ccc255e9"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.239396 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.243925 4821 generic.go:334] "Generic (PLEG): container finished" podID="22fc5e62-b5e8-4348-9f2f-f806d570155c" containerID="b50be02ae45820230e0e491d2c66ce64026d9a3d8b80a7f8f158693e9428d4cd" exitCode=0
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.244075 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherfac6-account-delete-txkh4" event={"ID":"22fc5e62-b5e8-4348-9f2f-f806d570155c","Type":"ContainerDied","Data":"b50be02ae45820230e0e491d2c66ce64026d9a3d8b80a7f8f158693e9428d4cd"}
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.251066 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.281400 4821 scope.go:117] "RemoveContainer" containerID="31d99e1592d55232f1a7447cee79dcf8a96aeff9fd75e6b5a9958b5b4586f1ab"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.310537 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.337929 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.362452 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.365474 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.373376 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:02:02 crc kubenswrapper[4821]: E0309 19:02:02.373768 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64" containerName="watcher-api"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.373783 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64" containerName="watcher-api"
Mar 09 19:02:02 crc kubenswrapper[4821]: E0309 19:02:02.373802 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecd4d911-b650-4de9-b3c0-97d430fc3ab3" containerName="ceilometer-central-agent"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.373810 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecd4d911-b650-4de9-b3c0-97d430fc3ab3" containerName="ceilometer-central-agent"
Mar 09 19:02:02 crc kubenswrapper[4821]: E0309 19:02:02.373824 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecd4d911-b650-4de9-b3c0-97d430fc3ab3" containerName="ceilometer-notification-agent"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.373834 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecd4d911-b650-4de9-b3c0-97d430fc3ab3" containerName="ceilometer-notification-agent"
Mar 09 19:02:02 crc kubenswrapper[4821]: E0309 19:02:02.373846 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64" containerName="watcher-kuttl-api-log"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.373855 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64" containerName="watcher-kuttl-api-log"
Mar 09 19:02:02 crc kubenswrapper[4821]: E0309 19:02:02.373878 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecd4d911-b650-4de9-b3c0-97d430fc3ab3" containerName="proxy-httpd"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.373885 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecd4d911-b650-4de9-b3c0-97d430fc3ab3" containerName="proxy-httpd"
Mar 09 19:02:02 crc kubenswrapper[4821]: E0309 19:02:02.373900 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecd4d911-b650-4de9-b3c0-97d430fc3ab3" containerName="sg-core"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.373907 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecd4d911-b650-4de9-b3c0-97d430fc3ab3" containerName="sg-core"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.374094 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecd4d911-b650-4de9-b3c0-97d430fc3ab3" containerName="ceilometer-notification-agent"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.374108 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecd4d911-b650-4de9-b3c0-97d430fc3ab3" containerName="sg-core"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.374119 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64" containerName="watcher-kuttl-api-log"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.374128 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecd4d911-b650-4de9-b3c0-97d430fc3ab3" containerName="proxy-httpd"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.374140 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecd4d911-b650-4de9-b3c0-97d430fc3ab3" containerName="ceilometer-central-agent"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.374152 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64" containerName="watcher-api"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.375934 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.381692 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.381796 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.383135 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.390804 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.448218 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6c5ce68-4841-475a-8f97-adceb433c645-run-httpd\") pod \"ceilometer-0\" (UID: \"c6c5ce68-4841-475a-8f97-adceb433c645\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.448253 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6c5ce68-4841-475a-8f97-adceb433c645-log-httpd\") pod \"ceilometer-0\" (UID: \"c6c5ce68-4841-475a-8f97-adceb433c645\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.448284 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpkh4\" (UniqueName: \"kubernetes.io/projected/c6c5ce68-4841-475a-8f97-adceb433c645-kube-api-access-hpkh4\") pod \"ceilometer-0\" (UID: \"c6c5ce68-4841-475a-8f97-adceb433c645\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.448308 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6c5ce68-4841-475a-8f97-adceb433c645-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c6c5ce68-4841-475a-8f97-adceb433c645\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.448359 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c5ce68-4841-475a-8f97-adceb433c645-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6c5ce68-4841-475a-8f97-adceb433c645\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.448390 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6c5ce68-4841-475a-8f97-adceb433c645-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6c5ce68-4841-475a-8f97-adceb433c645\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.448438 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6c5ce68-4841-475a-8f97-adceb433c645-scripts\") pod \"ceilometer-0\" (UID: \"c6c5ce68-4841-475a-8f97-adceb433c645\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.448464 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c5ce68-4841-475a-8f97-adceb433c645-config-data\") pod \"ceilometer-0\" (UID: \"c6c5ce68-4841-475a-8f97-adceb433c645\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.549382 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6c5ce68-4841-475a-8f97-adceb433c645-scripts\") pod \"ceilometer-0\" (UID: \"c6c5ce68-4841-475a-8f97-adceb433c645\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.549468 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c5ce68-4841-475a-8f97-adceb433c645-config-data\") pod \"ceilometer-0\" (UID: \"c6c5ce68-4841-475a-8f97-adceb433c645\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.549520 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6c5ce68-4841-475a-8f97-adceb433c645-run-httpd\") pod \"ceilometer-0\" (UID: \"c6c5ce68-4841-475a-8f97-adceb433c645\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.549552 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6c5ce68-4841-475a-8f97-adceb433c645-log-httpd\") pod \"ceilometer-0\" (UID: \"c6c5ce68-4841-475a-8f97-adceb433c645\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.549595 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpkh4\" (UniqueName: \"kubernetes.io/projected/c6c5ce68-4841-475a-8f97-adceb433c645-kube-api-access-hpkh4\") pod \"ceilometer-0\" (UID: \"c6c5ce68-4841-475a-8f97-adceb433c645\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.549623 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6c5ce68-4841-475a-8f97-adceb433c645-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c6c5ce68-4841-475a-8f97-adceb433c645\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.549660 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c5ce68-4841-475a-8f97-adceb433c645-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6c5ce68-4841-475a-8f97-adceb433c645\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.549690 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6c5ce68-4841-475a-8f97-adceb433c645-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6c5ce68-4841-475a-8f97-adceb433c645\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.550607 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6c5ce68-4841-475a-8f97-adceb433c645-run-httpd\") pod \"ceilometer-0\" (UID: \"c6c5ce68-4841-475a-8f97-adceb433c645\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.551141 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6c5ce68-4841-475a-8f97-adceb433c645-log-httpd\") pod \"ceilometer-0\" (UID: \"c6c5ce68-4841-475a-8f97-adceb433c645\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.553848 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6c5ce68-4841-475a-8f97-adceb433c645-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6c5ce68-4841-475a-8f97-adceb433c645\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.557498 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6c5ce68-4841-475a-8f97-adceb433c645-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c6c5ce68-4841-475a-8f97-adceb433c645\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.566690 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpkh4\" (UniqueName: \"kubernetes.io/projected/c6c5ce68-4841-475a-8f97-adceb433c645-kube-api-access-hpkh4\") pod \"ceilometer-0\" (UID: \"c6c5ce68-4841-475a-8f97-adceb433c645\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.569877 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6c5ce68-4841-475a-8f97-adceb433c645-scripts\") pod \"ceilometer-0\" (UID: \"c6c5ce68-4841-475a-8f97-adceb433c645\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.571026 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c5ce68-4841-475a-8f97-adceb433c645-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6c5ce68-4841-475a-8f97-adceb433c645\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.576541 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c5ce68-4841-475a-8f97-adceb433c645-config-data\") pod \"ceilometer-0\" (UID: \"c6c5ce68-4841-475a-8f97-adceb433c645\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.705700 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:02 crc kubenswrapper[4821]: E0309 19:02:02.716241 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c4069136c5ec19700c5da11cce63ab0c1f652d4360b9b669933a9f298ece8ff4" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Mar 09 19:02:02 crc kubenswrapper[4821]: E0309 19:02:02.721838 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c4069136c5ec19700c5da11cce63ab0c1f652d4360b9b669933a9f298ece8ff4" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Mar 09 19:02:02 crc kubenswrapper[4821]: I0309 19:02:02.725649 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:02:02 crc kubenswrapper[4821]: E0309 19:02:02.730448 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c4069136c5ec19700c5da11cce63ab0c1f652d4360b9b669933a9f298ece8ff4" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Mar 09 19:02:02 crc kubenswrapper[4821]: E0309 19:02:02.730521 4821 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6" containerName="watcher-applier"
Mar 09 19:02:03 crc kubenswrapper[4821]: I0309 19:02:03.238173 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:02:03 crc kubenswrapper[4821]: W0309 19:02:03.242169 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6c5ce68_4841_475a_8f97_adceb433c645.slice/crio-14a0fe6d94637c15ff43a262ea4aaecfbbfc06c134d68108a695f87fc189d14c WatchSource:0}: Error finding container 14a0fe6d94637c15ff43a262ea4aaecfbbfc06c134d68108a695f87fc189d14c: Status 404 returned error can't find the container with id 14a0fe6d94637c15ff43a262ea4aaecfbbfc06c134d68108a695f87fc189d14c
Mar 09 19:02:03 crc kubenswrapper[4821]: I0309 19:02:03.260823 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c6c5ce68-4841-475a-8f97-adceb433c645","Type":"ContainerStarted","Data":"14a0fe6d94637c15ff43a262ea4aaecfbbfc06c134d68108a695f87fc189d14c"}
Mar 09 19:02:03 crc kubenswrapper[4821]: I0309 19:02:03.262154 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551382-wcqzq" event={"ID":"7fc4d5b1-3818-4f5c-91b9-afe46d95e537","Type":"ContainerStarted","Data":"4eb5fc9676556af2c13adac98ad2e0f465c105fa4f530d034ccc19cbb29b171c"}
Mar 09 19:02:03 crc kubenswrapper[4821]: I0309 19:02:03.285899 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29551382-wcqzq" podStartSLOduration=1.568178483 podStartE2EDuration="3.285875594s" podCreationTimestamp="2026-03-09 19:02:00 +0000 UTC" firstStartedPulling="2026-03-09 19:02:01.152507863 +0000 UTC m=+2258.313883719" lastFinishedPulling="2026-03-09 19:02:02.870204974 +0000 UTC m=+2260.031580830" observedRunningTime="2026-03-09 19:02:03.275440331 +0000 UTC m=+2260.436816187" watchObservedRunningTime="2026-03-09 19:02:03.285875594 +0000 UTC m=+2260.447251450"
Mar 09 19:02:03 crc kubenswrapper[4821]: I0309 19:02:03.576512 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64" path="/var/lib/kubelet/pods/e10f4d2b-6099-4cdd-8f41-d3ad88ea2a64/volumes"
Mar 09 19:02:03 crc kubenswrapper[4821]: I0309 19:02:03.584347 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecd4d911-b650-4de9-b3c0-97d430fc3ab3" path="/var/lib/kubelet/pods/ecd4d911-b650-4de9-b3c0-97d430fc3ab3/volumes"
Mar 09 19:02:03 crc kubenswrapper[4821]: I0309 19:02:03.600000 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherfac6-account-delete-txkh4"
Mar 09 19:02:03 crc kubenswrapper[4821]: I0309 19:02:03.668127 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2k5v2\" (UniqueName: \"kubernetes.io/projected/22fc5e62-b5e8-4348-9f2f-f806d570155c-kube-api-access-2k5v2\") pod \"22fc5e62-b5e8-4348-9f2f-f806d570155c\" (UID: \"22fc5e62-b5e8-4348-9f2f-f806d570155c\") "
Mar 09 19:02:03 crc kubenswrapper[4821]: I0309 19:02:03.668286 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22fc5e62-b5e8-4348-9f2f-f806d570155c-operator-scripts\") pod \"22fc5e62-b5e8-4348-9f2f-f806d570155c\" (UID: \"22fc5e62-b5e8-4348-9f2f-f806d570155c\") "
Mar 09 19:02:03 crc kubenswrapper[4821]: I0309 19:02:03.672379 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22fc5e62-b5e8-4348-9f2f-f806d570155c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22fc5e62-b5e8-4348-9f2f-f806d570155c" (UID: "22fc5e62-b5e8-4348-9f2f-f806d570155c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 19:02:03 crc kubenswrapper[4821]: I0309 19:02:03.673108 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22fc5e62-b5e8-4348-9f2f-f806d570155c-kube-api-access-2k5v2" (OuterVolumeSpecName: "kube-api-access-2k5v2") pod "22fc5e62-b5e8-4348-9f2f-f806d570155c" (UID: "22fc5e62-b5e8-4348-9f2f-f806d570155c"). InnerVolumeSpecName "kube-api-access-2k5v2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:02:03 crc kubenswrapper[4821]: I0309 19:02:03.775898 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2k5v2\" (UniqueName: \"kubernetes.io/projected/22fc5e62-b5e8-4348-9f2f-f806d570155c-kube-api-access-2k5v2\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:03 crc kubenswrapper[4821]: I0309 19:02:03.775942 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22fc5e62-b5e8-4348-9f2f-f806d570155c-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:03 crc kubenswrapper[4821]: E0309 19:02:03.876924 4821 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found
Mar 09 19:02:03 crc kubenswrapper[4821]: E0309 19:02:03.876980 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f95ec74e-8a1f-447e-89a9-747e843a1ce3-config-data podName:f95ec74e-8a1f-447e-89a9-747e843a1ce3 nodeName:}" failed. No retries permitted until 2026-03-09 19:02:07.876966802 +0000 UTC m=+2265.038342648 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/f95ec74e-8a1f-447e-89a9-747e843a1ce3-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "f95ec74e-8a1f-447e-89a9-747e843a1ce3") : secret "watcher-kuttl-decision-engine-config-data" not found
Mar 09 19:02:04 crc kubenswrapper[4821]: I0309 19:02:04.271784 4821 generic.go:334] "Generic (PLEG): container finished" podID="7fc4d5b1-3818-4f5c-91b9-afe46d95e537" containerID="4eb5fc9676556af2c13adac98ad2e0f465c105fa4f530d034ccc19cbb29b171c" exitCode=0
Mar 09 19:02:04 crc kubenswrapper[4821]: I0309 19:02:04.271832 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551382-wcqzq" event={"ID":"7fc4d5b1-3818-4f5c-91b9-afe46d95e537","Type":"ContainerDied","Data":"4eb5fc9676556af2c13adac98ad2e0f465c105fa4f530d034ccc19cbb29b171c"}
Mar 09 19:02:04 crc kubenswrapper[4821]: I0309 19:02:04.275088 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherfac6-account-delete-txkh4" event={"ID":"22fc5e62-b5e8-4348-9f2f-f806d570155c","Type":"ContainerDied","Data":"3dd92950ff7bf08939a81fca8505f4e6e55a71a8e83c44ab861bd570d8d28e14"}
Mar 09 19:02:04 crc kubenswrapper[4821]: I0309 19:02:04.275124 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherfac6-account-delete-txkh4"
Mar 09 19:02:04 crc kubenswrapper[4821]: I0309 19:02:04.275145 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dd92950ff7bf08939a81fca8505f4e6e55a71a8e83c44ab861bd570d8d28e14"
Mar 09 19:02:04 crc kubenswrapper[4821]: I0309 19:02:04.276805 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c6c5ce68-4841-475a-8f97-adceb433c645","Type":"ContainerStarted","Data":"84a2d3602d5521eb02a2cc260ce8f6c6d32572f7c5ea0f14814a315d4e78e1c3"}
Mar 09 19:02:04 crc kubenswrapper[4821]: I0309 19:02:04.867900 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:02:04 crc kubenswrapper[4821]: I0309 19:02:04.990518 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-vvt6t"]
Mar 09 19:02:04 crc kubenswrapper[4821]: I0309 19:02:04.998232 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-vvt6t"]
Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:04.999967 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6-logs\") pod \"caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6\" (UID: \"caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6\") "
Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.000183 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6-config-data\") pod \"caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6\" (UID: \"caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6\") "
Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.000223 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7hwt\" (UniqueName: \"kubernetes.io/projected/caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6-kube-api-access-j7hwt\") pod \"caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6\" (UID: \"caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6\") "
Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.000247 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6-combined-ca-bundle\") pod \"caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6\" (UID: \"caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6\") "
Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.000291 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6-logs" (OuterVolumeSpecName: "logs") pod "caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6" (UID: "caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.000721 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6-logs\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.004526 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6-kube-api-access-j7hwt" (OuterVolumeSpecName: "kube-api-access-j7hwt") pod "caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6" (UID: "caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6"). InnerVolumeSpecName "kube-api-access-j7hwt".
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.004821 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcherfac6-account-delete-txkh4"] Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.010526 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-fac6-account-create-update-gmsz7"] Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.016259 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcherfac6-account-delete-txkh4"] Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.023135 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-fac6-account-create-update-gmsz7"] Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.026760 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6" (UID: "caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.047156 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6-config-data" (OuterVolumeSpecName: "config-data") pod "caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6" (UID: "caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.105084 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.105134 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7hwt\" (UniqueName: \"kubernetes.io/projected/caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6-kube-api-access-j7hwt\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.105146 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.287175 4821 generic.go:334] "Generic (PLEG): container finished" podID="caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6" containerID="c4069136c5ec19700c5da11cce63ab0c1f652d4360b9b669933a9f298ece8ff4" exitCode=0 Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.287258 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6","Type":"ContainerDied","Data":"c4069136c5ec19700c5da11cce63ab0c1f652d4360b9b669933a9f298ece8ff4"} Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.287532 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6","Type":"ContainerDied","Data":"c79c373f567a9e0db1bdc24f59329b3b559fe9e8e83029ba3d33a9d781a24738"} Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.287554 4821 scope.go:117] "RemoveContainer" containerID="c4069136c5ec19700c5da11cce63ab0c1f652d4360b9b669933a9f298ece8ff4" Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 
19:02:05.287399 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.295297 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c6c5ce68-4841-475a-8f97-adceb433c645","Type":"ContainerStarted","Data":"ee28ca76b753a7afd75fe4c18f562af9948d0248d5722c6a8321745894c72f28"} Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.295360 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c6c5ce68-4841-475a-8f97-adceb433c645","Type":"ContainerStarted","Data":"62da62a5804b1153f95ab794157264de73760e168f2625c20fdbdd712d83895a"} Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.336617 4821 scope.go:117] "RemoveContainer" containerID="c4069136c5ec19700c5da11cce63ab0c1f652d4360b9b669933a9f298ece8ff4" Mar 09 19:02:05 crc kubenswrapper[4821]: E0309 19:02:05.337562 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4069136c5ec19700c5da11cce63ab0c1f652d4360b9b669933a9f298ece8ff4\": container with ID starting with c4069136c5ec19700c5da11cce63ab0c1f652d4360b9b669933a9f298ece8ff4 not found: ID does not exist" containerID="c4069136c5ec19700c5da11cce63ab0c1f652d4360b9b669933a9f298ece8ff4" Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.337594 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4069136c5ec19700c5da11cce63ab0c1f652d4360b9b669933a9f298ece8ff4"} err="failed to get container status \"c4069136c5ec19700c5da11cce63ab0c1f652d4360b9b669933a9f298ece8ff4\": rpc error: code = NotFound desc = could not find container \"c4069136c5ec19700c5da11cce63ab0c1f652d4360b9b669933a9f298ece8ff4\": container with ID starting with c4069136c5ec19700c5da11cce63ab0c1f652d4360b9b669933a9f298ece8ff4 not found: ID does 
not exist" Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.340824 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.348288 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.560374 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22fc5e62-b5e8-4348-9f2f-f806d570155c" path="/var/lib/kubelet/pods/22fc5e62-b5e8-4348-9f2f-f806d570155c/volumes" Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.560863 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88ea4a34-8ad2-4c0e-a139-bca978c3da6a" path="/var/lib/kubelet/pods/88ea4a34-8ad2-4c0e-a139-bca978c3da6a/volumes" Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.561304 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5406c9a-7ea7-491a-b625-af6eaffeeaac" path="/var/lib/kubelet/pods/c5406c9a-7ea7-491a-b625-af6eaffeeaac/volumes" Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.562262 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6" path="/var/lib/kubelet/pods/caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6/volumes" Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.632882 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29551382-wcqzq" Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.717183 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zr2m\" (UniqueName: \"kubernetes.io/projected/7fc4d5b1-3818-4f5c-91b9-afe46d95e537-kube-api-access-6zr2m\") pod \"7fc4d5b1-3818-4f5c-91b9-afe46d95e537\" (UID: \"7fc4d5b1-3818-4f5c-91b9-afe46d95e537\") " Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.728840 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fc4d5b1-3818-4f5c-91b9-afe46d95e537-kube-api-access-6zr2m" (OuterVolumeSpecName: "kube-api-access-6zr2m") pod "7fc4d5b1-3818-4f5c-91b9-afe46d95e537" (UID: "7fc4d5b1-3818-4f5c-91b9-afe46d95e537"). InnerVolumeSpecName "kube-api-access-6zr2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:02:05 crc kubenswrapper[4821]: I0309 19:02:05.818774 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zr2m\" (UniqueName: \"kubernetes.io/projected/7fc4d5b1-3818-4f5c-91b9-afe46d95e537-kube-api-access-6zr2m\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:06 crc kubenswrapper[4821]: I0309 19:02:06.315532 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551382-wcqzq" event={"ID":"7fc4d5b1-3818-4f5c-91b9-afe46d95e537","Type":"ContainerDied","Data":"12209beb572c91442904da45d5566db06fb193bb5a3d935d1cb66eb1dc01750a"} Mar 09 19:02:06 crc kubenswrapper[4821]: I0309 19:02:06.315573 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12209beb572c91442904da45d5566db06fb193bb5a3d935d1cb66eb1dc01750a" Mar 09 19:02:06 crc kubenswrapper[4821]: I0309 19:02:06.315630 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29551382-wcqzq" Mar 09 19:02:06 crc kubenswrapper[4821]: I0309 19:02:06.329648 4821 generic.go:334] "Generic (PLEG): container finished" podID="f95ec74e-8a1f-447e-89a9-747e843a1ce3" containerID="2e4355930ab2401a750507a2f552335b07ac67c99f24ab0da89c8d75c3e3a0d6" exitCode=0 Mar 09 19:02:06 crc kubenswrapper[4821]: I0309 19:02:06.329702 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"f95ec74e-8a1f-447e-89a9-747e843a1ce3","Type":"ContainerDied","Data":"2e4355930ab2401a750507a2f552335b07ac67c99f24ab0da89c8d75c3e3a0d6"} Mar 09 19:02:06 crc kubenswrapper[4821]: I0309 19:02:06.355378 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29551376-zk95z"] Mar 09 19:02:06 crc kubenswrapper[4821]: I0309 19:02:06.360886 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29551376-zk95z"] Mar 09 19:02:06 crc kubenswrapper[4821]: I0309 19:02:06.481758 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:06 crc kubenswrapper[4821]: I0309 19:02:06.529127 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktdsq\" (UniqueName: \"kubernetes.io/projected/f95ec74e-8a1f-447e-89a9-747e843a1ce3-kube-api-access-ktdsq\") pod \"f95ec74e-8a1f-447e-89a9-747e843a1ce3\" (UID: \"f95ec74e-8a1f-447e-89a9-747e843a1ce3\") " Mar 09 19:02:06 crc kubenswrapper[4821]: I0309 19:02:06.529223 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f95ec74e-8a1f-447e-89a9-747e843a1ce3-logs\") pod \"f95ec74e-8a1f-447e-89a9-747e843a1ce3\" (UID: \"f95ec74e-8a1f-447e-89a9-747e843a1ce3\") " Mar 09 19:02:06 crc kubenswrapper[4821]: I0309 19:02:06.529301 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f95ec74e-8a1f-447e-89a9-747e843a1ce3-config-data\") pod \"f95ec74e-8a1f-447e-89a9-747e843a1ce3\" (UID: \"f95ec74e-8a1f-447e-89a9-747e843a1ce3\") " Mar 09 19:02:06 crc kubenswrapper[4821]: I0309 19:02:06.529357 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f95ec74e-8a1f-447e-89a9-747e843a1ce3-combined-ca-bundle\") pod \"f95ec74e-8a1f-447e-89a9-747e843a1ce3\" (UID: \"f95ec74e-8a1f-447e-89a9-747e843a1ce3\") " Mar 09 19:02:06 crc kubenswrapper[4821]: I0309 19:02:06.529378 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f95ec74e-8a1f-447e-89a9-747e843a1ce3-custom-prometheus-ca\") pod \"f95ec74e-8a1f-447e-89a9-747e843a1ce3\" (UID: \"f95ec74e-8a1f-447e-89a9-747e843a1ce3\") " Mar 09 19:02:06 crc kubenswrapper[4821]: I0309 19:02:06.530527 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/f95ec74e-8a1f-447e-89a9-747e843a1ce3-logs" (OuterVolumeSpecName: "logs") pod "f95ec74e-8a1f-447e-89a9-747e843a1ce3" (UID: "f95ec74e-8a1f-447e-89a9-747e843a1ce3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:02:06 crc kubenswrapper[4821]: I0309 19:02:06.534542 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f95ec74e-8a1f-447e-89a9-747e843a1ce3-kube-api-access-ktdsq" (OuterVolumeSpecName: "kube-api-access-ktdsq") pod "f95ec74e-8a1f-447e-89a9-747e843a1ce3" (UID: "f95ec74e-8a1f-447e-89a9-747e843a1ce3"). InnerVolumeSpecName "kube-api-access-ktdsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:02:06 crc kubenswrapper[4821]: I0309 19:02:06.551811 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f95ec74e-8a1f-447e-89a9-747e843a1ce3-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "f95ec74e-8a1f-447e-89a9-747e843a1ce3" (UID: "f95ec74e-8a1f-447e-89a9-747e843a1ce3"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:02:06 crc kubenswrapper[4821]: I0309 19:02:06.579928 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f95ec74e-8a1f-447e-89a9-747e843a1ce3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f95ec74e-8a1f-447e-89a9-747e843a1ce3" (UID: "f95ec74e-8a1f-447e-89a9-747e843a1ce3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:02:06 crc kubenswrapper[4821]: I0309 19:02:06.583532 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f95ec74e-8a1f-447e-89a9-747e843a1ce3-config-data" (OuterVolumeSpecName: "config-data") pod "f95ec74e-8a1f-447e-89a9-747e843a1ce3" (UID: "f95ec74e-8a1f-447e-89a9-747e843a1ce3"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:02:06 crc kubenswrapper[4821]: I0309 19:02:06.631085 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f95ec74e-8a1f-447e-89a9-747e843a1ce3-logs\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:06 crc kubenswrapper[4821]: I0309 19:02:06.631126 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f95ec74e-8a1f-447e-89a9-747e843a1ce3-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:06 crc kubenswrapper[4821]: I0309 19:02:06.631141 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f95ec74e-8a1f-447e-89a9-747e843a1ce3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:06 crc kubenswrapper[4821]: I0309 19:02:06.631156 4821 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f95ec74e-8a1f-447e-89a9-747e843a1ce3-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:06 crc kubenswrapper[4821]: I0309 19:02:06.631169 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktdsq\" (UniqueName: \"kubernetes.io/projected/f95ec74e-8a1f-447e-89a9-747e843a1ce3-kube-api-access-ktdsq\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:07 crc kubenswrapper[4821]: I0309 19:02:07.338342 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"f95ec74e-8a1f-447e-89a9-747e843a1ce3","Type":"ContainerDied","Data":"23422a7b75503dd397ffe6b0489311f0675d73c1a0af64202ebbd0e82326c27c"} Mar 09 19:02:07 crc kubenswrapper[4821]: I0309 19:02:07.339115 4821 scope.go:117] "RemoveContainer" containerID="2e4355930ab2401a750507a2f552335b07ac67c99f24ab0da89c8d75c3e3a0d6" Mar 09 19:02:07 crc kubenswrapper[4821]: I0309 19:02:07.338568 
4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:07 crc kubenswrapper[4821]: I0309 19:02:07.342329 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c6c5ce68-4841-475a-8f97-adceb433c645","Type":"ContainerStarted","Data":"3e1f3aa02e707b9794d87a5ea9b0ffd26e79fe6854f9256fca968e0982319fc0"} Mar 09 19:02:07 crc kubenswrapper[4821]: I0309 19:02:07.342515 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c6c5ce68-4841-475a-8f97-adceb433c645" containerName="ceilometer-central-agent" containerID="cri-o://84a2d3602d5521eb02a2cc260ce8f6c6d32572f7c5ea0f14814a315d4e78e1c3" gracePeriod=30 Mar 09 19:02:07 crc kubenswrapper[4821]: I0309 19:02:07.342784 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:02:07 crc kubenswrapper[4821]: I0309 19:02:07.342841 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c6c5ce68-4841-475a-8f97-adceb433c645" containerName="proxy-httpd" containerID="cri-o://3e1f3aa02e707b9794d87a5ea9b0ffd26e79fe6854f9256fca968e0982319fc0" gracePeriod=30 Mar 09 19:02:07 crc kubenswrapper[4821]: I0309 19:02:07.342894 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c6c5ce68-4841-475a-8f97-adceb433c645" containerName="sg-core" containerID="cri-o://ee28ca76b753a7afd75fe4c18f562af9948d0248d5722c6a8321745894c72f28" gracePeriod=30 Mar 09 19:02:07 crc kubenswrapper[4821]: I0309 19:02:07.342942 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c6c5ce68-4841-475a-8f97-adceb433c645" containerName="ceilometer-notification-agent" 
containerID="cri-o://62da62a5804b1153f95ab794157264de73760e168f2625c20fdbdd712d83895a" gracePeriod=30 Mar 09 19:02:07 crc kubenswrapper[4821]: I0309 19:02:07.370906 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.5787025479999999 podStartE2EDuration="5.370888421s" podCreationTimestamp="2026-03-09 19:02:02 +0000 UTC" firstStartedPulling="2026-03-09 19:02:03.244073549 +0000 UTC m=+2260.405449405" lastFinishedPulling="2026-03-09 19:02:07.036259422 +0000 UTC m=+2264.197635278" observedRunningTime="2026-03-09 19:02:07.367289414 +0000 UTC m=+2264.528665290" watchObservedRunningTime="2026-03-09 19:02:07.370888421 +0000 UTC m=+2264.532264277" Mar 09 19:02:07 crc kubenswrapper[4821]: I0309 19:02:07.387734 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:02:07 crc kubenswrapper[4821]: I0309 19:02:07.392820 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:02:07 crc kubenswrapper[4821]: I0309 19:02:07.561661 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f95ec74e-8a1f-447e-89a9-747e843a1ce3" path="/var/lib/kubelet/pods/f95ec74e-8a1f-447e-89a9-747e843a1ce3/volumes" Mar 09 19:02:07 crc kubenswrapper[4821]: I0309 19:02:07.562476 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa84d5e7-6e13-4c0b-b03e-7671041bfbad" path="/var/lib/kubelet/pods/fa84d5e7-6e13-4c0b-b03e-7671041bfbad/volumes" Mar 09 19:02:08 crc kubenswrapper[4821]: I0309 19:02:08.352409 4821 generic.go:334] "Generic (PLEG): container finished" podID="c6c5ce68-4841-475a-8f97-adceb433c645" containerID="ee28ca76b753a7afd75fe4c18f562af9948d0248d5722c6a8321745894c72f28" exitCode=2 Mar 09 19:02:08 crc kubenswrapper[4821]: I0309 19:02:08.353726 4821 generic.go:334] "Generic (PLEG): container finished" 
podID="c6c5ce68-4841-475a-8f97-adceb433c645" containerID="62da62a5804b1153f95ab794157264de73760e168f2625c20fdbdd712d83895a" exitCode=0 Mar 09 19:02:08 crc kubenswrapper[4821]: I0309 19:02:08.353833 4821 generic.go:334] "Generic (PLEG): container finished" podID="c6c5ce68-4841-475a-8f97-adceb433c645" containerID="84a2d3602d5521eb02a2cc260ce8f6c6d32572f7c5ea0f14814a315d4e78e1c3" exitCode=0 Mar 09 19:02:08 crc kubenswrapper[4821]: I0309 19:02:08.352493 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c6c5ce68-4841-475a-8f97-adceb433c645","Type":"ContainerDied","Data":"ee28ca76b753a7afd75fe4c18f562af9948d0248d5722c6a8321745894c72f28"} Mar 09 19:02:08 crc kubenswrapper[4821]: I0309 19:02:08.353939 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c6c5ce68-4841-475a-8f97-adceb433c645","Type":"ContainerDied","Data":"62da62a5804b1153f95ab794157264de73760e168f2625c20fdbdd712d83895a"} Mar 09 19:02:08 crc kubenswrapper[4821]: I0309 19:02:08.353955 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c6c5ce68-4841-475a-8f97-adceb433c645","Type":"ContainerDied","Data":"84a2d3602d5521eb02a2cc260ce8f6c6d32572f7c5ea0f14814a315d4e78e1c3"} Mar 09 19:02:09 crc kubenswrapper[4821]: I0309 19:02:09.923995 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-jndn9"] Mar 09 19:02:09 crc kubenswrapper[4821]: E0309 19:02:09.924720 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6" containerName="watcher-applier" Mar 09 19:02:09 crc kubenswrapper[4821]: I0309 19:02:09.924738 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6" containerName="watcher-applier" Mar 09 19:02:09 crc kubenswrapper[4821]: E0309 19:02:09.924758 4821 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f95ec74e-8a1f-447e-89a9-747e843a1ce3" containerName="watcher-decision-engine" Mar 09 19:02:09 crc kubenswrapper[4821]: I0309 19:02:09.924767 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="f95ec74e-8a1f-447e-89a9-747e843a1ce3" containerName="watcher-decision-engine" Mar 09 19:02:09 crc kubenswrapper[4821]: E0309 19:02:09.924784 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22fc5e62-b5e8-4348-9f2f-f806d570155c" containerName="mariadb-account-delete" Mar 09 19:02:09 crc kubenswrapper[4821]: I0309 19:02:09.924793 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="22fc5e62-b5e8-4348-9f2f-f806d570155c" containerName="mariadb-account-delete" Mar 09 19:02:09 crc kubenswrapper[4821]: E0309 19:02:09.924811 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fc4d5b1-3818-4f5c-91b9-afe46d95e537" containerName="oc" Mar 09 19:02:09 crc kubenswrapper[4821]: I0309 19:02:09.924818 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fc4d5b1-3818-4f5c-91b9-afe46d95e537" containerName="oc" Mar 09 19:02:09 crc kubenswrapper[4821]: I0309 19:02:09.924989 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="caafbc68-64ac-4bd1-b35d-e8d0b1ad0cf6" containerName="watcher-applier" Mar 09 19:02:09 crc kubenswrapper[4821]: I0309 19:02:09.925010 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="f95ec74e-8a1f-447e-89a9-747e843a1ce3" containerName="watcher-decision-engine" Mar 09 19:02:09 crc kubenswrapper[4821]: I0309 19:02:09.925023 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fc4d5b1-3818-4f5c-91b9-afe46d95e537" containerName="oc" Mar 09 19:02:09 crc kubenswrapper[4821]: I0309 19:02:09.925045 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="22fc5e62-b5e8-4348-9f2f-f806d570155c" containerName="mariadb-account-delete" Mar 09 19:02:09 crc kubenswrapper[4821]: I0309 19:02:09.925718 4821 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-jndn9" Mar 09 19:02:09 crc kubenswrapper[4821]: I0309 19:02:09.944356 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-jndn9"] Mar 09 19:02:09 crc kubenswrapper[4821]: I0309 19:02:09.952511 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-a975-account-create-update-hdfhw"] Mar 09 19:02:09 crc kubenswrapper[4821]: I0309 19:02:09.954252 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-a975-account-create-update-hdfhw" Mar 09 19:02:09 crc kubenswrapper[4821]: I0309 19:02:09.956525 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Mar 09 19:02:09 crc kubenswrapper[4821]: I0309 19:02:09.971108 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-a975-account-create-update-hdfhw"] Mar 09 19:02:09 crc kubenswrapper[4821]: I0309 19:02:09.990082 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/281d4c83-a6b3-4a94-b7eb-d200497f1a9a-operator-scripts\") pod \"watcher-db-create-jndn9\" (UID: \"281d4c83-a6b3-4a94-b7eb-d200497f1a9a\") " pod="watcher-kuttl-default/watcher-db-create-jndn9" Mar 09 19:02:09 crc kubenswrapper[4821]: I0309 19:02:09.990158 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58mwj\" (UniqueName: \"kubernetes.io/projected/281d4c83-a6b3-4a94-b7eb-d200497f1a9a-kube-api-access-58mwj\") pod \"watcher-db-create-jndn9\" (UID: \"281d4c83-a6b3-4a94-b7eb-d200497f1a9a\") " pod="watcher-kuttl-default/watcher-db-create-jndn9" Mar 09 19:02:10 crc kubenswrapper[4821]: I0309 19:02:10.091837 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/281d4c83-a6b3-4a94-b7eb-d200497f1a9a-operator-scripts\") pod \"watcher-db-create-jndn9\" (UID: \"281d4c83-a6b3-4a94-b7eb-d200497f1a9a\") " pod="watcher-kuttl-default/watcher-db-create-jndn9" Mar 09 19:02:10 crc kubenswrapper[4821]: I0309 19:02:10.091892 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0641355-eafb-410b-ad92-26836542589f-operator-scripts\") pod \"watcher-a975-account-create-update-hdfhw\" (UID: \"f0641355-eafb-410b-ad92-26836542589f\") " pod="watcher-kuttl-default/watcher-a975-account-create-update-hdfhw" Mar 09 19:02:10 crc kubenswrapper[4821]: I0309 19:02:10.091957 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58mwj\" (UniqueName: \"kubernetes.io/projected/281d4c83-a6b3-4a94-b7eb-d200497f1a9a-kube-api-access-58mwj\") pod \"watcher-db-create-jndn9\" (UID: \"281d4c83-a6b3-4a94-b7eb-d200497f1a9a\") " pod="watcher-kuttl-default/watcher-db-create-jndn9" Mar 09 19:02:10 crc kubenswrapper[4821]: I0309 19:02:10.092018 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld8s6\" (UniqueName: \"kubernetes.io/projected/f0641355-eafb-410b-ad92-26836542589f-kube-api-access-ld8s6\") pod \"watcher-a975-account-create-update-hdfhw\" (UID: \"f0641355-eafb-410b-ad92-26836542589f\") " pod="watcher-kuttl-default/watcher-a975-account-create-update-hdfhw" Mar 09 19:02:10 crc kubenswrapper[4821]: I0309 19:02:10.092779 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/281d4c83-a6b3-4a94-b7eb-d200497f1a9a-operator-scripts\") pod \"watcher-db-create-jndn9\" (UID: \"281d4c83-a6b3-4a94-b7eb-d200497f1a9a\") " pod="watcher-kuttl-default/watcher-db-create-jndn9" Mar 09 19:02:10 crc kubenswrapper[4821]: I0309 
19:02:10.140928 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58mwj\" (UniqueName: \"kubernetes.io/projected/281d4c83-a6b3-4a94-b7eb-d200497f1a9a-kube-api-access-58mwj\") pod \"watcher-db-create-jndn9\" (UID: \"281d4c83-a6b3-4a94-b7eb-d200497f1a9a\") " pod="watcher-kuttl-default/watcher-db-create-jndn9" Mar 09 19:02:10 crc kubenswrapper[4821]: I0309 19:02:10.194171 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld8s6\" (UniqueName: \"kubernetes.io/projected/f0641355-eafb-410b-ad92-26836542589f-kube-api-access-ld8s6\") pod \"watcher-a975-account-create-update-hdfhw\" (UID: \"f0641355-eafb-410b-ad92-26836542589f\") " pod="watcher-kuttl-default/watcher-a975-account-create-update-hdfhw" Mar 09 19:02:10 crc kubenswrapper[4821]: I0309 19:02:10.194267 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0641355-eafb-410b-ad92-26836542589f-operator-scripts\") pod \"watcher-a975-account-create-update-hdfhw\" (UID: \"f0641355-eafb-410b-ad92-26836542589f\") " pod="watcher-kuttl-default/watcher-a975-account-create-update-hdfhw" Mar 09 19:02:10 crc kubenswrapper[4821]: I0309 19:02:10.195034 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0641355-eafb-410b-ad92-26836542589f-operator-scripts\") pod \"watcher-a975-account-create-update-hdfhw\" (UID: \"f0641355-eafb-410b-ad92-26836542589f\") " pod="watcher-kuttl-default/watcher-a975-account-create-update-hdfhw" Mar 09 19:02:10 crc kubenswrapper[4821]: I0309 19:02:10.214571 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld8s6\" (UniqueName: \"kubernetes.io/projected/f0641355-eafb-410b-ad92-26836542589f-kube-api-access-ld8s6\") pod \"watcher-a975-account-create-update-hdfhw\" (UID: \"f0641355-eafb-410b-ad92-26836542589f\") 
" pod="watcher-kuttl-default/watcher-a975-account-create-update-hdfhw" Mar 09 19:02:10 crc kubenswrapper[4821]: I0309 19:02:10.246037 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-jndn9" Mar 09 19:02:10 crc kubenswrapper[4821]: I0309 19:02:10.275357 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-a975-account-create-update-hdfhw" Mar 09 19:02:10 crc kubenswrapper[4821]: I0309 19:02:10.759911 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-jndn9"] Mar 09 19:02:10 crc kubenswrapper[4821]: W0309 19:02:10.765602 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod281d4c83_a6b3_4a94_b7eb_d200497f1a9a.slice/crio-b26a2a099e9e2d8251c2914968cfb3849ad7ab2908b35db60f27ff3d30a42799 WatchSource:0}: Error finding container b26a2a099e9e2d8251c2914968cfb3849ad7ab2908b35db60f27ff3d30a42799: Status 404 returned error can't find the container with id b26a2a099e9e2d8251c2914968cfb3849ad7ab2908b35db60f27ff3d30a42799 Mar 09 19:02:10 crc kubenswrapper[4821]: I0309 19:02:10.775150 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-a975-account-create-update-hdfhw"] Mar 09 19:02:10 crc kubenswrapper[4821]: W0309 19:02:10.777450 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0641355_eafb_410b_ad92_26836542589f.slice/crio-808e3494f237f96c4d9b01012a4b7a656a88c908a2dee844451475a16b49b393 WatchSource:0}: Error finding container 808e3494f237f96c4d9b01012a4b7a656a88c908a2dee844451475a16b49b393: Status 404 returned error can't find the container with id 808e3494f237f96c4d9b01012a4b7a656a88c908a2dee844451475a16b49b393 Mar 09 19:02:11 crc kubenswrapper[4821]: I0309 19:02:11.408174 4821 generic.go:334] 
"Generic (PLEG): container finished" podID="f0641355-eafb-410b-ad92-26836542589f" containerID="46b6ccb57cbbc6bdd2b6c5f6bdeb38949ed0291892d21bff7357c585f6f8460b" exitCode=0 Mar 09 19:02:11 crc kubenswrapper[4821]: I0309 19:02:11.408270 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-a975-account-create-update-hdfhw" event={"ID":"f0641355-eafb-410b-ad92-26836542589f","Type":"ContainerDied","Data":"46b6ccb57cbbc6bdd2b6c5f6bdeb38949ed0291892d21bff7357c585f6f8460b"} Mar 09 19:02:11 crc kubenswrapper[4821]: I0309 19:02:11.408444 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-a975-account-create-update-hdfhw" event={"ID":"f0641355-eafb-410b-ad92-26836542589f","Type":"ContainerStarted","Data":"808e3494f237f96c4d9b01012a4b7a656a88c908a2dee844451475a16b49b393"} Mar 09 19:02:11 crc kubenswrapper[4821]: I0309 19:02:11.409807 4821 generic.go:334] "Generic (PLEG): container finished" podID="281d4c83-a6b3-4a94-b7eb-d200497f1a9a" containerID="b2faeed741e7e59c22a0de2bd237de590f3c9cd0eec1d72e47e72b83478f7743" exitCode=0 Mar 09 19:02:11 crc kubenswrapper[4821]: I0309 19:02:11.409839 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-jndn9" event={"ID":"281d4c83-a6b3-4a94-b7eb-d200497f1a9a","Type":"ContainerDied","Data":"b2faeed741e7e59c22a0de2bd237de590f3c9cd0eec1d72e47e72b83478f7743"} Mar 09 19:02:11 crc kubenswrapper[4821]: I0309 19:02:11.409864 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-jndn9" event={"ID":"281d4c83-a6b3-4a94-b7eb-d200497f1a9a","Type":"ContainerStarted","Data":"b26a2a099e9e2d8251c2914968cfb3849ad7ab2908b35db60f27ff3d30a42799"} Mar 09 19:02:12 crc kubenswrapper[4821]: I0309 19:02:12.871413 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-a975-account-create-update-hdfhw" Mar 09 19:02:12 crc kubenswrapper[4821]: I0309 19:02:12.931813 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-jndn9" Mar 09 19:02:12 crc kubenswrapper[4821]: I0309 19:02:12.970127 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld8s6\" (UniqueName: \"kubernetes.io/projected/f0641355-eafb-410b-ad92-26836542589f-kube-api-access-ld8s6\") pod \"f0641355-eafb-410b-ad92-26836542589f\" (UID: \"f0641355-eafb-410b-ad92-26836542589f\") " Mar 09 19:02:12 crc kubenswrapper[4821]: I0309 19:02:12.970368 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0641355-eafb-410b-ad92-26836542589f-operator-scripts\") pod \"f0641355-eafb-410b-ad92-26836542589f\" (UID: \"f0641355-eafb-410b-ad92-26836542589f\") " Mar 09 19:02:12 crc kubenswrapper[4821]: I0309 19:02:12.971173 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0641355-eafb-410b-ad92-26836542589f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f0641355-eafb-410b-ad92-26836542589f" (UID: "f0641355-eafb-410b-ad92-26836542589f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 19:02:12 crc kubenswrapper[4821]: I0309 19:02:12.983624 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0641355-eafb-410b-ad92-26836542589f-kube-api-access-ld8s6" (OuterVolumeSpecName: "kube-api-access-ld8s6") pod "f0641355-eafb-410b-ad92-26836542589f" (UID: "f0641355-eafb-410b-ad92-26836542589f"). InnerVolumeSpecName "kube-api-access-ld8s6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:02:13 crc kubenswrapper[4821]: I0309 19:02:13.072336 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/281d4c83-a6b3-4a94-b7eb-d200497f1a9a-operator-scripts\") pod \"281d4c83-a6b3-4a94-b7eb-d200497f1a9a\" (UID: \"281d4c83-a6b3-4a94-b7eb-d200497f1a9a\") " Mar 09 19:02:13 crc kubenswrapper[4821]: I0309 19:02:13.072383 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58mwj\" (UniqueName: \"kubernetes.io/projected/281d4c83-a6b3-4a94-b7eb-d200497f1a9a-kube-api-access-58mwj\") pod \"281d4c83-a6b3-4a94-b7eb-d200497f1a9a\" (UID: \"281d4c83-a6b3-4a94-b7eb-d200497f1a9a\") " Mar 09 19:02:13 crc kubenswrapper[4821]: I0309 19:02:13.072911 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0641355-eafb-410b-ad92-26836542589f-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:13 crc kubenswrapper[4821]: I0309 19:02:13.072938 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ld8s6\" (UniqueName: \"kubernetes.io/projected/f0641355-eafb-410b-ad92-26836542589f-kube-api-access-ld8s6\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:13 crc kubenswrapper[4821]: I0309 19:02:13.073054 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/281d4c83-a6b3-4a94-b7eb-d200497f1a9a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "281d4c83-a6b3-4a94-b7eb-d200497f1a9a" (UID: "281d4c83-a6b3-4a94-b7eb-d200497f1a9a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 19:02:13 crc kubenswrapper[4821]: I0309 19:02:13.076474 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/281d4c83-a6b3-4a94-b7eb-d200497f1a9a-kube-api-access-58mwj" (OuterVolumeSpecName: "kube-api-access-58mwj") pod "281d4c83-a6b3-4a94-b7eb-d200497f1a9a" (UID: "281d4c83-a6b3-4a94-b7eb-d200497f1a9a"). InnerVolumeSpecName "kube-api-access-58mwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:02:13 crc kubenswrapper[4821]: I0309 19:02:13.175474 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/281d4c83-a6b3-4a94-b7eb-d200497f1a9a-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:13 crc kubenswrapper[4821]: I0309 19:02:13.175555 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58mwj\" (UniqueName: \"kubernetes.io/projected/281d4c83-a6b3-4a94-b7eb-d200497f1a9a-kube-api-access-58mwj\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:13 crc kubenswrapper[4821]: I0309 19:02:13.432077 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-a975-account-create-update-hdfhw" event={"ID":"f0641355-eafb-410b-ad92-26836542589f","Type":"ContainerDied","Data":"808e3494f237f96c4d9b01012a4b7a656a88c908a2dee844451475a16b49b393"} Mar 09 19:02:13 crc kubenswrapper[4821]: I0309 19:02:13.432127 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="808e3494f237f96c4d9b01012a4b7a656a88c908a2dee844451475a16b49b393" Mar 09 19:02:13 crc kubenswrapper[4821]: I0309 19:02:13.432190 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-a975-account-create-update-hdfhw" Mar 09 19:02:13 crc kubenswrapper[4821]: I0309 19:02:13.434403 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-jndn9" event={"ID":"281d4c83-a6b3-4a94-b7eb-d200497f1a9a","Type":"ContainerDied","Data":"b26a2a099e9e2d8251c2914968cfb3849ad7ab2908b35db60f27ff3d30a42799"} Mar 09 19:02:13 crc kubenswrapper[4821]: I0309 19:02:13.434483 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b26a2a099e9e2d8251c2914968cfb3849ad7ab2908b35db60f27ff3d30a42799" Mar 09 19:02:13 crc kubenswrapper[4821]: I0309 19:02:13.434480 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-jndn9" Mar 09 19:02:13 crc kubenswrapper[4821]: I0309 19:02:13.886990 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vhn27"] Mar 09 19:02:13 crc kubenswrapper[4821]: E0309 19:02:13.887768 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0641355-eafb-410b-ad92-26836542589f" containerName="mariadb-account-create-update" Mar 09 19:02:13 crc kubenswrapper[4821]: I0309 19:02:13.887787 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0641355-eafb-410b-ad92-26836542589f" containerName="mariadb-account-create-update" Mar 09 19:02:13 crc kubenswrapper[4821]: E0309 19:02:13.887806 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="281d4c83-a6b3-4a94-b7eb-d200497f1a9a" containerName="mariadb-database-create" Mar 09 19:02:13 crc kubenswrapper[4821]: I0309 19:02:13.887814 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="281d4c83-a6b3-4a94-b7eb-d200497f1a9a" containerName="mariadb-database-create" Mar 09 19:02:13 crc kubenswrapper[4821]: I0309 19:02:13.888008 4821 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f0641355-eafb-410b-ad92-26836542589f" containerName="mariadb-account-create-update" Mar 09 19:02:13 crc kubenswrapper[4821]: I0309 19:02:13.888035 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="281d4c83-a6b3-4a94-b7eb-d200497f1a9a" containerName="mariadb-database-create" Mar 09 19:02:13 crc kubenswrapper[4821]: I0309 19:02:13.889191 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vhn27" Mar 09 19:02:13 crc kubenswrapper[4821]: I0309 19:02:13.897268 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vhn27"] Mar 09 19:02:13 crc kubenswrapper[4821]: I0309 19:02:13.988216 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df2155e5-7524-47f7-8c00-80c2ab292588-utilities\") pod \"certified-operators-vhn27\" (UID: \"df2155e5-7524-47f7-8c00-80c2ab292588\") " pod="openshift-marketplace/certified-operators-vhn27" Mar 09 19:02:13 crc kubenswrapper[4821]: I0309 19:02:13.988308 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfthr\" (UniqueName: \"kubernetes.io/projected/df2155e5-7524-47f7-8c00-80c2ab292588-kube-api-access-wfthr\") pod \"certified-operators-vhn27\" (UID: \"df2155e5-7524-47f7-8c00-80c2ab292588\") " pod="openshift-marketplace/certified-operators-vhn27" Mar 09 19:02:13 crc kubenswrapper[4821]: I0309 19:02:13.988441 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df2155e5-7524-47f7-8c00-80c2ab292588-catalog-content\") pod \"certified-operators-vhn27\" (UID: \"df2155e5-7524-47f7-8c00-80c2ab292588\") " pod="openshift-marketplace/certified-operators-vhn27" Mar 09 19:02:14 crc kubenswrapper[4821]: I0309 19:02:14.090113 4821 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df2155e5-7524-47f7-8c00-80c2ab292588-utilities\") pod \"certified-operators-vhn27\" (UID: \"df2155e5-7524-47f7-8c00-80c2ab292588\") " pod="openshift-marketplace/certified-operators-vhn27" Mar 09 19:02:14 crc kubenswrapper[4821]: I0309 19:02:14.090170 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfthr\" (UniqueName: \"kubernetes.io/projected/df2155e5-7524-47f7-8c00-80c2ab292588-kube-api-access-wfthr\") pod \"certified-operators-vhn27\" (UID: \"df2155e5-7524-47f7-8c00-80c2ab292588\") " pod="openshift-marketplace/certified-operators-vhn27" Mar 09 19:02:14 crc kubenswrapper[4821]: I0309 19:02:14.090219 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df2155e5-7524-47f7-8c00-80c2ab292588-catalog-content\") pod \"certified-operators-vhn27\" (UID: \"df2155e5-7524-47f7-8c00-80c2ab292588\") " pod="openshift-marketplace/certified-operators-vhn27" Mar 09 19:02:14 crc kubenswrapper[4821]: I0309 19:02:14.090765 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df2155e5-7524-47f7-8c00-80c2ab292588-utilities\") pod \"certified-operators-vhn27\" (UID: \"df2155e5-7524-47f7-8c00-80c2ab292588\") " pod="openshift-marketplace/certified-operators-vhn27" Mar 09 19:02:14 crc kubenswrapper[4821]: I0309 19:02:14.090783 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df2155e5-7524-47f7-8c00-80c2ab292588-catalog-content\") pod \"certified-operators-vhn27\" (UID: \"df2155e5-7524-47f7-8c00-80c2ab292588\") " pod="openshift-marketplace/certified-operators-vhn27" Mar 09 19:02:14 crc kubenswrapper[4821]: I0309 19:02:14.124442 4821 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wfthr\" (UniqueName: \"kubernetes.io/projected/df2155e5-7524-47f7-8c00-80c2ab292588-kube-api-access-wfthr\") pod \"certified-operators-vhn27\" (UID: \"df2155e5-7524-47f7-8c00-80c2ab292588\") " pod="openshift-marketplace/certified-operators-vhn27" Mar 09 19:02:14 crc kubenswrapper[4821]: I0309 19:02:14.207283 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vhn27" Mar 09 19:02:14 crc kubenswrapper[4821]: I0309 19:02:14.679115 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vhn27"] Mar 09 19:02:15 crc kubenswrapper[4821]: I0309 19:02:15.285440 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-q2gsp"] Mar 09 19:02:15 crc kubenswrapper[4821]: I0309 19:02:15.286998 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-q2gsp" Mar 09 19:02:15 crc kubenswrapper[4821]: I0309 19:02:15.288827 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-n8f65" Mar 09 19:02:15 crc kubenswrapper[4821]: I0309 19:02:15.289876 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Mar 09 19:02:15 crc kubenswrapper[4821]: I0309 19:02:15.296535 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-q2gsp"] Mar 09 19:02:15 crc kubenswrapper[4821]: I0309 19:02:15.418790 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/38a9453b-15e6-4dc8-baaa-8f046f60cad8-db-sync-config-data\") pod \"watcher-kuttl-db-sync-q2gsp\" (UID: \"38a9453b-15e6-4dc8-baaa-8f046f60cad8\") " 
pod="watcher-kuttl-default/watcher-kuttl-db-sync-q2gsp" Mar 09 19:02:15 crc kubenswrapper[4821]: I0309 19:02:15.418867 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a9453b-15e6-4dc8-baaa-8f046f60cad8-config-data\") pod \"watcher-kuttl-db-sync-q2gsp\" (UID: \"38a9453b-15e6-4dc8-baaa-8f046f60cad8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-q2gsp" Mar 09 19:02:15 crc kubenswrapper[4821]: I0309 19:02:15.418984 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a9453b-15e6-4dc8-baaa-8f046f60cad8-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-q2gsp\" (UID: \"38a9453b-15e6-4dc8-baaa-8f046f60cad8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-q2gsp" Mar 09 19:02:15 crc kubenswrapper[4821]: I0309 19:02:15.419097 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cmxx\" (UniqueName: \"kubernetes.io/projected/38a9453b-15e6-4dc8-baaa-8f046f60cad8-kube-api-access-8cmxx\") pod \"watcher-kuttl-db-sync-q2gsp\" (UID: \"38a9453b-15e6-4dc8-baaa-8f046f60cad8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-q2gsp" Mar 09 19:02:15 crc kubenswrapper[4821]: I0309 19:02:15.450280 4821 generic.go:334] "Generic (PLEG): container finished" podID="df2155e5-7524-47f7-8c00-80c2ab292588" containerID="06885125e9fbebe8b910db8363484762005857abb5d49fd81834f020b15331be" exitCode=0 Mar 09 19:02:15 crc kubenswrapper[4821]: I0309 19:02:15.450350 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhn27" event={"ID":"df2155e5-7524-47f7-8c00-80c2ab292588","Type":"ContainerDied","Data":"06885125e9fbebe8b910db8363484762005857abb5d49fd81834f020b15331be"} Mar 09 19:02:15 crc kubenswrapper[4821]: I0309 19:02:15.450379 4821 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-vhn27" event={"ID":"df2155e5-7524-47f7-8c00-80c2ab292588","Type":"ContainerStarted","Data":"46da618510045cf81a82c634ebe9d9221fedac66018e0137386a38cadd84c488"} Mar 09 19:02:15 crc kubenswrapper[4821]: I0309 19:02:15.520718 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/38a9453b-15e6-4dc8-baaa-8f046f60cad8-db-sync-config-data\") pod \"watcher-kuttl-db-sync-q2gsp\" (UID: \"38a9453b-15e6-4dc8-baaa-8f046f60cad8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-q2gsp" Mar 09 19:02:15 crc kubenswrapper[4821]: I0309 19:02:15.520789 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a9453b-15e6-4dc8-baaa-8f046f60cad8-config-data\") pod \"watcher-kuttl-db-sync-q2gsp\" (UID: \"38a9453b-15e6-4dc8-baaa-8f046f60cad8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-q2gsp" Mar 09 19:02:15 crc kubenswrapper[4821]: I0309 19:02:15.520835 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a9453b-15e6-4dc8-baaa-8f046f60cad8-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-q2gsp\" (UID: \"38a9453b-15e6-4dc8-baaa-8f046f60cad8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-q2gsp" Mar 09 19:02:15 crc kubenswrapper[4821]: I0309 19:02:15.520904 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cmxx\" (UniqueName: \"kubernetes.io/projected/38a9453b-15e6-4dc8-baaa-8f046f60cad8-kube-api-access-8cmxx\") pod \"watcher-kuttl-db-sync-q2gsp\" (UID: \"38a9453b-15e6-4dc8-baaa-8f046f60cad8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-q2gsp" Mar 09 19:02:15 crc kubenswrapper[4821]: I0309 19:02:15.530070 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" 
(UniqueName: \"kubernetes.io/secret/38a9453b-15e6-4dc8-baaa-8f046f60cad8-db-sync-config-data\") pod \"watcher-kuttl-db-sync-q2gsp\" (UID: \"38a9453b-15e6-4dc8-baaa-8f046f60cad8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-q2gsp" Mar 09 19:02:15 crc kubenswrapper[4821]: I0309 19:02:15.530254 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a9453b-15e6-4dc8-baaa-8f046f60cad8-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-q2gsp\" (UID: \"38a9453b-15e6-4dc8-baaa-8f046f60cad8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-q2gsp" Mar 09 19:02:15 crc kubenswrapper[4821]: I0309 19:02:15.530386 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a9453b-15e6-4dc8-baaa-8f046f60cad8-config-data\") pod \"watcher-kuttl-db-sync-q2gsp\" (UID: \"38a9453b-15e6-4dc8-baaa-8f046f60cad8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-q2gsp" Mar 09 19:02:15 crc kubenswrapper[4821]: I0309 19:02:15.557086 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cmxx\" (UniqueName: \"kubernetes.io/projected/38a9453b-15e6-4dc8-baaa-8f046f60cad8-kube-api-access-8cmxx\") pod \"watcher-kuttl-db-sync-q2gsp\" (UID: \"38a9453b-15e6-4dc8-baaa-8f046f60cad8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-q2gsp" Mar 09 19:02:15 crc kubenswrapper[4821]: I0309 19:02:15.603147 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-q2gsp" Mar 09 19:02:16 crc kubenswrapper[4821]: I0309 19:02:16.066369 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-q2gsp"] Mar 09 19:02:16 crc kubenswrapper[4821]: I0309 19:02:16.458707 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-q2gsp" event={"ID":"38a9453b-15e6-4dc8-baaa-8f046f60cad8","Type":"ContainerStarted","Data":"323fef1d8bdfa94be4aeb04c244a1872a2d564555e5f4e9059d8ac8b534bc4b4"} Mar 09 19:02:16 crc kubenswrapper[4821]: I0309 19:02:16.459066 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-q2gsp" event={"ID":"38a9453b-15e6-4dc8-baaa-8f046f60cad8","Type":"ContainerStarted","Data":"b370f3196bacb16cc7367da7b0e1413257688fad7669752893d88a1f299ed8c1"} Mar 09 19:02:16 crc kubenswrapper[4821]: I0309 19:02:16.475060 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-q2gsp" podStartSLOduration=1.47504343 podStartE2EDuration="1.47504343s" podCreationTimestamp="2026-03-09 19:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:02:16.47468063 +0000 UTC m=+2273.636056476" watchObservedRunningTime="2026-03-09 19:02:16.47504343 +0000 UTC m=+2273.636419286" Mar 09 19:02:19 crc kubenswrapper[4821]: I0309 19:02:19.483409 4821 generic.go:334] "Generic (PLEG): container finished" podID="38a9453b-15e6-4dc8-baaa-8f046f60cad8" containerID="323fef1d8bdfa94be4aeb04c244a1872a2d564555e5f4e9059d8ac8b534bc4b4" exitCode=0 Mar 09 19:02:19 crc kubenswrapper[4821]: I0309 19:02:19.483476 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-q2gsp" 
event={"ID":"38a9453b-15e6-4dc8-baaa-8f046f60cad8","Type":"ContainerDied","Data":"323fef1d8bdfa94be4aeb04c244a1872a2d564555e5f4e9059d8ac8b534bc4b4"} Mar 09 19:02:20 crc kubenswrapper[4821]: I0309 19:02:20.979554 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-q2gsp" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.113658 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/38a9453b-15e6-4dc8-baaa-8f046f60cad8-db-sync-config-data\") pod \"38a9453b-15e6-4dc8-baaa-8f046f60cad8\" (UID: \"38a9453b-15e6-4dc8-baaa-8f046f60cad8\") " Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.113984 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cmxx\" (UniqueName: \"kubernetes.io/projected/38a9453b-15e6-4dc8-baaa-8f046f60cad8-kube-api-access-8cmxx\") pod \"38a9453b-15e6-4dc8-baaa-8f046f60cad8\" (UID: \"38a9453b-15e6-4dc8-baaa-8f046f60cad8\") " Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.114145 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a9453b-15e6-4dc8-baaa-8f046f60cad8-config-data\") pod \"38a9453b-15e6-4dc8-baaa-8f046f60cad8\" (UID: \"38a9453b-15e6-4dc8-baaa-8f046f60cad8\") " Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.114469 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a9453b-15e6-4dc8-baaa-8f046f60cad8-combined-ca-bundle\") pod \"38a9453b-15e6-4dc8-baaa-8f046f60cad8\" (UID: \"38a9453b-15e6-4dc8-baaa-8f046f60cad8\") " Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.132723 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/38a9453b-15e6-4dc8-baaa-8f046f60cad8-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "38a9453b-15e6-4dc8-baaa-8f046f60cad8" (UID: "38a9453b-15e6-4dc8-baaa-8f046f60cad8"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.148526 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38a9453b-15e6-4dc8-baaa-8f046f60cad8-kube-api-access-8cmxx" (OuterVolumeSpecName: "kube-api-access-8cmxx") pod "38a9453b-15e6-4dc8-baaa-8f046f60cad8" (UID: "38a9453b-15e6-4dc8-baaa-8f046f60cad8"). InnerVolumeSpecName "kube-api-access-8cmxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.163871 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a9453b-15e6-4dc8-baaa-8f046f60cad8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38a9453b-15e6-4dc8-baaa-8f046f60cad8" (UID: "38a9453b-15e6-4dc8-baaa-8f046f60cad8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.175021 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a9453b-15e6-4dc8-baaa-8f046f60cad8-config-data" (OuterVolumeSpecName: "config-data") pod "38a9453b-15e6-4dc8-baaa-8f046f60cad8" (UID: "38a9453b-15e6-4dc8-baaa-8f046f60cad8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.219968 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a9453b-15e6-4dc8-baaa-8f046f60cad8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.219999 4821 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/38a9453b-15e6-4dc8-baaa-8f046f60cad8-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.220008 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cmxx\" (UniqueName: \"kubernetes.io/projected/38a9453b-15e6-4dc8-baaa-8f046f60cad8-kube-api-access-8cmxx\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.220020 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38a9453b-15e6-4dc8-baaa-8f046f60cad8-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.500796 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-q2gsp" event={"ID":"38a9453b-15e6-4dc8-baaa-8f046f60cad8","Type":"ContainerDied","Data":"b370f3196bacb16cc7367da7b0e1413257688fad7669752893d88a1f299ed8c1"} Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.500842 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b370f3196bacb16cc7367da7b0e1413257688fad7669752893d88a1f299ed8c1" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.500867 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-q2gsp" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.503242 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhn27" event={"ID":"df2155e5-7524-47f7-8c00-80c2ab292588","Type":"ContainerStarted","Data":"b033ceedb83f3f32efec23cf39f7259a4716d3b391d64dc10ee212e9a96e329e"} Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.795242 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:02:21 crc kubenswrapper[4821]: E0309 19:02:21.795907 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38a9453b-15e6-4dc8-baaa-8f046f60cad8" containerName="watcher-kuttl-db-sync" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.795928 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="38a9453b-15e6-4dc8-baaa-8f046f60cad8" containerName="watcher-kuttl-db-sync" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.796108 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="38a9453b-15e6-4dc8-baaa-8f046f60cad8" containerName="watcher-kuttl-db-sync" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.796746 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.799759 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.800236 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-n8f65" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.807827 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.809131 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.816001 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.824873 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.825167 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.825347 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.830350 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.862356 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.863294 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.868742 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.912752 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.934357 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09d63032-6b0e-408d-a39c-b069ffe922cf-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"09d63032-6b0e-408d-a39c-b069ffe922cf\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.934392 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9329e6dc-c286-4f58-b337-b89debdbcdce-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"9329e6dc-c286-4f58-b337-b89debdbcdce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.934423 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9329e6dc-c286-4f58-b337-b89debdbcdce-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"9329e6dc-c286-4f58-b337-b89debdbcdce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.934449 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9329e6dc-c286-4f58-b337-b89debdbcdce-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: 
\"9329e6dc-c286-4f58-b337-b89debdbcdce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.934481 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9329e6dc-c286-4f58-b337-b89debdbcdce-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"9329e6dc-c286-4f58-b337-b89debdbcdce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.934498 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/09d63032-6b0e-408d-a39c-b069ffe922cf-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"09d63032-6b0e-408d-a39c-b069ffe922cf\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.934518 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9329e6dc-c286-4f58-b337-b89debdbcdce-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"9329e6dc-c286-4f58-b337-b89debdbcdce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.934539 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09d63032-6b0e-408d-a39c-b069ffe922cf-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"09d63032-6b0e-408d-a39c-b069ffe922cf\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.934556 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g996j\" (UniqueName: 
\"kubernetes.io/projected/9329e6dc-c286-4f58-b337-b89debdbcdce-kube-api-access-g996j\") pod \"watcher-kuttl-api-0\" (UID: \"9329e6dc-c286-4f58-b337-b89debdbcdce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.934572 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06420d04-f54d-43c6-b0bb-b1f375758d54-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"06420d04-f54d-43c6-b0bb-b1f375758d54\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.934593 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06420d04-f54d-43c6-b0bb-b1f375758d54-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"06420d04-f54d-43c6-b0bb-b1f375758d54\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.934608 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mwln\" (UniqueName: \"kubernetes.io/projected/09d63032-6b0e-408d-a39c-b069ffe922cf-kube-api-access-5mwln\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"09d63032-6b0e-408d-a39c-b069ffe922cf\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.934622 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09d63032-6b0e-408d-a39c-b069ffe922cf-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"09d63032-6b0e-408d-a39c-b069ffe922cf\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.934639 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hb8z\" (UniqueName: \"kubernetes.io/projected/06420d04-f54d-43c6-b0bb-b1f375758d54-kube-api-access-9hb8z\") pod \"watcher-kuttl-applier-0\" (UID: \"06420d04-f54d-43c6-b0bb-b1f375758d54\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.934672 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9329e6dc-c286-4f58-b337-b89debdbcdce-logs\") pod \"watcher-kuttl-api-0\" (UID: \"9329e6dc-c286-4f58-b337-b89debdbcdce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:21 crc kubenswrapper[4821]: I0309 19:02:21.934692 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06420d04-f54d-43c6-b0bb-b1f375758d54-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"06420d04-f54d-43c6-b0bb-b1f375758d54\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.036079 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09d63032-6b0e-408d-a39c-b069ffe922cf-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"09d63032-6b0e-408d-a39c-b069ffe922cf\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.036135 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g996j\" (UniqueName: \"kubernetes.io/projected/9329e6dc-c286-4f58-b337-b89debdbcdce-kube-api-access-g996j\") pod \"watcher-kuttl-api-0\" (UID: \"9329e6dc-c286-4f58-b337-b89debdbcdce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.036157 4821 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06420d04-f54d-43c6-b0bb-b1f375758d54-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"06420d04-f54d-43c6-b0bb-b1f375758d54\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.036199 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06420d04-f54d-43c6-b0bb-b1f375758d54-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"06420d04-f54d-43c6-b0bb-b1f375758d54\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.036218 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mwln\" (UniqueName: \"kubernetes.io/projected/09d63032-6b0e-408d-a39c-b069ffe922cf-kube-api-access-5mwln\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"09d63032-6b0e-408d-a39c-b069ffe922cf\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.036235 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09d63032-6b0e-408d-a39c-b069ffe922cf-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"09d63032-6b0e-408d-a39c-b069ffe922cf\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.036389 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hb8z\" (UniqueName: \"kubernetes.io/projected/06420d04-f54d-43c6-b0bb-b1f375758d54-kube-api-access-9hb8z\") pod \"watcher-kuttl-applier-0\" (UID: \"06420d04-f54d-43c6-b0bb-b1f375758d54\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.036571 4821 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09d63032-6b0e-408d-a39c-b069ffe922cf-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"09d63032-6b0e-408d-a39c-b069ffe922cf\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.036674 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06420d04-f54d-43c6-b0bb-b1f375758d54-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"06420d04-f54d-43c6-b0bb-b1f375758d54\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.037074 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9329e6dc-c286-4f58-b337-b89debdbcdce-logs\") pod \"watcher-kuttl-api-0\" (UID: \"9329e6dc-c286-4f58-b337-b89debdbcdce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.037098 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06420d04-f54d-43c6-b0bb-b1f375758d54-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"06420d04-f54d-43c6-b0bb-b1f375758d54\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.037140 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09d63032-6b0e-408d-a39c-b069ffe922cf-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"09d63032-6b0e-408d-a39c-b069ffe922cf\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.037158 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9329e6dc-c286-4f58-b337-b89debdbcdce-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"9329e6dc-c286-4f58-b337-b89debdbcdce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.037182 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9329e6dc-c286-4f58-b337-b89debdbcdce-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"9329e6dc-c286-4f58-b337-b89debdbcdce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.037207 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9329e6dc-c286-4f58-b337-b89debdbcdce-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"9329e6dc-c286-4f58-b337-b89debdbcdce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.037241 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9329e6dc-c286-4f58-b337-b89debdbcdce-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"9329e6dc-c286-4f58-b337-b89debdbcdce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.037259 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/09d63032-6b0e-408d-a39c-b069ffe922cf-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"09d63032-6b0e-408d-a39c-b069ffe922cf\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.037277 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/9329e6dc-c286-4f58-b337-b89debdbcdce-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"9329e6dc-c286-4f58-b337-b89debdbcdce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.042370 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9329e6dc-c286-4f58-b337-b89debdbcdce-logs\") pod \"watcher-kuttl-api-0\" (UID: \"9329e6dc-c286-4f58-b337-b89debdbcdce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.053152 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09d63032-6b0e-408d-a39c-b069ffe922cf-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"09d63032-6b0e-408d-a39c-b069ffe922cf\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.054773 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9329e6dc-c286-4f58-b337-b89debdbcdce-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"9329e6dc-c286-4f58-b337-b89debdbcdce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.057849 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09d63032-6b0e-408d-a39c-b069ffe922cf-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"09d63032-6b0e-408d-a39c-b069ffe922cf\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.062833 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hb8z\" (UniqueName: \"kubernetes.io/projected/06420d04-f54d-43c6-b0bb-b1f375758d54-kube-api-access-9hb8z\") pod 
\"watcher-kuttl-applier-0\" (UID: \"06420d04-f54d-43c6-b0bb-b1f375758d54\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.063198 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9329e6dc-c286-4f58-b337-b89debdbcdce-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"9329e6dc-c286-4f58-b337-b89debdbcdce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.063289 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9329e6dc-c286-4f58-b337-b89debdbcdce-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"9329e6dc-c286-4f58-b337-b89debdbcdce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.063569 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9329e6dc-c286-4f58-b337-b89debdbcdce-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"9329e6dc-c286-4f58-b337-b89debdbcdce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.064029 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/09d63032-6b0e-408d-a39c-b069ffe922cf-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"09d63032-6b0e-408d-a39c-b069ffe922cf\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.064778 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g996j\" (UniqueName: \"kubernetes.io/projected/9329e6dc-c286-4f58-b337-b89debdbcdce-kube-api-access-g996j\") pod \"watcher-kuttl-api-0\" (UID: 
\"9329e6dc-c286-4f58-b337-b89debdbcdce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.064828 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9329e6dc-c286-4f58-b337-b89debdbcdce-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"9329e6dc-c286-4f58-b337-b89debdbcdce\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.065855 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06420d04-f54d-43c6-b0bb-b1f375758d54-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"06420d04-f54d-43c6-b0bb-b1f375758d54\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.068082 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mwln\" (UniqueName: \"kubernetes.io/projected/09d63032-6b0e-408d-a39c-b069ffe922cf-kube-api-access-5mwln\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"09d63032-6b0e-408d-a39c-b069ffe922cf\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.068802 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06420d04-f54d-43c6-b0bb-b1f375758d54-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"06420d04-f54d-43c6-b0bb-b1f375758d54\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.155813 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.180967 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.187925 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.520181 4821 generic.go:334] "Generic (PLEG): container finished" podID="df2155e5-7524-47f7-8c00-80c2ab292588" containerID="b033ceedb83f3f32efec23cf39f7259a4716d3b391d64dc10ee212e9a96e329e" exitCode=0 Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.520218 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhn27" event={"ID":"df2155e5-7524-47f7-8c00-80c2ab292588","Type":"ContainerDied","Data":"b033ceedb83f3f32efec23cf39f7259a4716d3b391d64dc10ee212e9a96e329e"} Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.630924 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:02:22 crc kubenswrapper[4821]: W0309 19:02:22.631474 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06420d04_f54d_43c6_b0bb_b1f375758d54.slice/crio-395aec144f8097b0fe4f86ebfa211715fefcaee395b2965d8d8aa30c8d1912bb WatchSource:0}: Error finding container 395aec144f8097b0fe4f86ebfa211715fefcaee395b2965d8d8aa30c8d1912bb: Status 404 returned error can't find the container with id 395aec144f8097b0fe4f86ebfa211715fefcaee395b2965d8d8aa30c8d1912bb Mar 09 19:02:22 crc kubenswrapper[4821]: W0309 19:02:22.632859 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9329e6dc_c286_4f58_b337_b89debdbcdce.slice/crio-3e0194230e26a3b5b51d685e9ecb15fd8bbe885624e45bbead0572ffec46df71 WatchSource:0}: Error finding container 3e0194230e26a3b5b51d685e9ecb15fd8bbe885624e45bbead0572ffec46df71: Status 404 returned error 
can't find the container with id 3e0194230e26a3b5b51d685e9ecb15fd8bbe885624e45bbead0572ffec46df71 Mar 09 19:02:22 crc kubenswrapper[4821]: I0309 19:02:22.638879 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:02:23 crc kubenswrapper[4821]: I0309 19:02:23.832691 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"06420d04-f54d-43c6-b0bb-b1f375758d54","Type":"ContainerStarted","Data":"395aec144f8097b0fe4f86ebfa211715fefcaee395b2965d8d8aa30c8d1912bb"} Mar 09 19:02:23 crc kubenswrapper[4821]: I0309 19:02:23.849999 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"9329e6dc-c286-4f58-b337-b89debdbcdce","Type":"ContainerStarted","Data":"3e0194230e26a3b5b51d685e9ecb15fd8bbe885624e45bbead0572ffec46df71"} Mar 09 19:02:23 crc kubenswrapper[4821]: W0309 19:02:23.850185 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09d63032_6b0e_408d_a39c_b069ffe922cf.slice/crio-7f261eed932325cb5c06c80c574a6efe593e03152c2d77578402f7b6c01617df WatchSource:0}: Error finding container 7f261eed932325cb5c06c80c574a6efe593e03152c2d77578402f7b6c01617df: Status 404 returned error can't find the container with id 7f261eed932325cb5c06c80c574a6efe593e03152c2d77578402f7b6c01617df Mar 09 19:02:23 crc kubenswrapper[4821]: I0309 19:02:23.860728 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:02:24 crc kubenswrapper[4821]: I0309 19:02:24.862857 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"9329e6dc-c286-4f58-b337-b89debdbcdce","Type":"ContainerStarted","Data":"2ae47c00e4afd9db0724e135ebb8e19692e8a4dcd51a2020ef3fec652ccd1adf"} Mar 09 19:02:24 crc kubenswrapper[4821]: I0309 
19:02:24.863196 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:24 crc kubenswrapper[4821]: I0309 19:02:24.863209 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"9329e6dc-c286-4f58-b337-b89debdbcdce","Type":"ContainerStarted","Data":"7897d4780088b2889db3d0c7ff49d03bb41811c866302830d7c81e81b92c3698"} Mar 09 19:02:24 crc kubenswrapper[4821]: I0309 19:02:24.864989 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"09d63032-6b0e-408d-a39c-b069ffe922cf","Type":"ContainerStarted","Data":"43204d3f9fdd1bb93d4a963762384498396bc9f304d5e1f0e28ebacc45b5ed1d"} Mar 09 19:02:24 crc kubenswrapper[4821]: I0309 19:02:24.865036 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"09d63032-6b0e-408d-a39c-b069ffe922cf","Type":"ContainerStarted","Data":"7f261eed932325cb5c06c80c574a6efe593e03152c2d77578402f7b6c01617df"} Mar 09 19:02:24 crc kubenswrapper[4821]: I0309 19:02:24.868393 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"06420d04-f54d-43c6-b0bb-b1f375758d54","Type":"ContainerStarted","Data":"f68a15cb645347423ce1d7dfc43a077df60d33147e98c4ca990eae041aba14e2"} Mar 09 19:02:24 crc kubenswrapper[4821]: I0309 19:02:24.872022 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vhn27" event={"ID":"df2155e5-7524-47f7-8c00-80c2ab292588","Type":"ContainerStarted","Data":"760238c880c17953ed13b8ce47fc274f4c20e74df5667e054749db458c3b5c39"} Mar 09 19:02:24 crc kubenswrapper[4821]: I0309 19:02:24.889573 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=3.889552256 
podStartE2EDuration="3.889552256s" podCreationTimestamp="2026-03-09 19:02:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:02:24.881710114 +0000 UTC m=+2282.043085960" watchObservedRunningTime="2026-03-09 19:02:24.889552256 +0000 UTC m=+2282.050928112" Mar 09 19:02:24 crc kubenswrapper[4821]: I0309 19:02:24.903797 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=3.903777632 podStartE2EDuration="3.903777632s" podCreationTimestamp="2026-03-09 19:02:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:02:24.898769526 +0000 UTC m=+2282.060145382" watchObservedRunningTime="2026-03-09 19:02:24.903777632 +0000 UTC m=+2282.065153488" Mar 09 19:02:24 crc kubenswrapper[4821]: I0309 19:02:24.941777 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vhn27" podStartSLOduration=3.532528209 podStartE2EDuration="11.941755692s" podCreationTimestamp="2026-03-09 19:02:13 +0000 UTC" firstStartedPulling="2026-03-09 19:02:15.45230713 +0000 UTC m=+2272.613682976" lastFinishedPulling="2026-03-09 19:02:23.861534603 +0000 UTC m=+2281.022910459" observedRunningTime="2026-03-09 19:02:24.921203034 +0000 UTC m=+2282.082578890" watchObservedRunningTime="2026-03-09 19:02:24.941755692 +0000 UTC m=+2282.103131548" Mar 09 19:02:24 crc kubenswrapper[4821]: I0309 19:02:24.948450 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=3.948436202 podStartE2EDuration="3.948436202s" podCreationTimestamp="2026-03-09 19:02:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-09 19:02:24.939407318 +0000 UTC m=+2282.100783214" watchObservedRunningTime="2026-03-09 19:02:24.948436202 +0000 UTC m=+2282.109812058" Mar 09 19:02:27 crc kubenswrapper[4821]: I0309 19:02:27.023174 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:27 crc kubenswrapper[4821]: I0309 19:02:27.157036 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:27 crc kubenswrapper[4821]: I0309 19:02:27.182088 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:29 crc kubenswrapper[4821]: I0309 19:02:29.914161 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 09 19:02:29 crc kubenswrapper[4821]: I0309 19:02:29.914547 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 09 19:02:32 crc kubenswrapper[4821]: I0309 19:02:32.156925 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:32 crc kubenswrapper[4821]: I0309 19:02:32.178583 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:32 crc kubenswrapper[4821]: I0309 19:02:32.181214 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:32 crc kubenswrapper[4821]: I0309 19:02:32.188126 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:32 crc kubenswrapper[4821]: I0309 19:02:32.197399 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:32 crc kubenswrapper[4821]: I0309 19:02:32.220047 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:32 crc kubenswrapper[4821]: I0309 19:02:32.720192 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="c6c5ce68-4841-475a-8f97-adceb433c645" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 09 19:02:32 crc kubenswrapper[4821]: I0309 19:02:32.954583 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:32 crc kubenswrapper[4821]: I0309 19:02:32.962420 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:32 crc kubenswrapper[4821]: I0309 19:02:32.979603 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:32 crc kubenswrapper[4821]: I0309 19:02:32.982865 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:34 crc kubenswrapper[4821]: I0309 19:02:34.208435 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vhn27" Mar 09 19:02:34 crc kubenswrapper[4821]: I0309 19:02:34.209115 4821 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-vhn27" Mar 09 19:02:34 crc kubenswrapper[4821]: I0309 19:02:34.253185 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vhn27" Mar 09 19:02:35 crc kubenswrapper[4821]: I0309 19:02:35.022422 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vhn27" Mar 09 19:02:36 crc kubenswrapper[4821]: I0309 19:02:36.814040 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:02:36 crc kubenswrapper[4821]: I0309 19:02:36.814565 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="9329e6dc-c286-4f58-b337-b89debdbcdce" containerName="watcher-kuttl-api-log" containerID="cri-o://7897d4780088b2889db3d0c7ff49d03bb41811c866302830d7c81e81b92c3698" gracePeriod=30 Mar 09 19:02:36 crc kubenswrapper[4821]: I0309 19:02:36.814948 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="9329e6dc-c286-4f58-b337-b89debdbcdce" containerName="watcher-api" containerID="cri-o://2ae47c00e4afd9db0724e135ebb8e19692e8a4dcd51a2020ef3fec652ccd1adf" gracePeriod=30 Mar 09 19:02:36 crc kubenswrapper[4821]: I0309 19:02:36.985589 4821 generic.go:334] "Generic (PLEG): container finished" podID="9329e6dc-c286-4f58-b337-b89debdbcdce" containerID="7897d4780088b2889db3d0c7ff49d03bb41811c866302830d7c81e81b92c3698" exitCode=143 Mar 09 19:02:36 crc kubenswrapper[4821]: I0309 19:02:36.985661 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"9329e6dc-c286-4f58-b337-b89debdbcdce","Type":"ContainerDied","Data":"7897d4780088b2889db3d0c7ff49d03bb41811c866302830d7c81e81b92c3698"} Mar 09 19:02:37 crc kubenswrapper[4821]: I0309 
19:02:37.212417 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vhn27"] Mar 09 19:02:37 crc kubenswrapper[4821]: I0309 19:02:37.700200 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="9329e6dc-c286-4f58-b337-b89debdbcdce" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.186:9322/\": read tcp 10.217.0.2:39272->10.217.0.186:9322: read: connection reset by peer" Mar 09 19:02:37 crc kubenswrapper[4821]: I0309 19:02:37.700443 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="9329e6dc-c286-4f58-b337-b89debdbcdce" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"https://10.217.0.186:9322/\": read tcp 10.217.0.2:39282->10.217.0.186:9322: read: connection reset by peer" Mar 09 19:02:37 crc kubenswrapper[4821]: I0309 19:02:37.743515 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9cfmg"] Mar 09 19:02:37 crc kubenswrapper[4821]: I0309 19:02:37.743754 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9cfmg" podUID="d2c0b89a-7aa2-44d9-93b3-87c4a29220d5" containerName="registry-server" containerID="cri-o://8ece91230ceff6800a46592ae0746cc8e6d9d1c02a56ec3a1732061a8870a57d" gracePeriod=2 Mar 09 19:02:37 crc kubenswrapper[4821]: I0309 19:02:37.844258 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:02:37 crc kubenswrapper[4821]: I0309 19:02:37.941388 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpkh4\" (UniqueName: \"kubernetes.io/projected/c6c5ce68-4841-475a-8f97-adceb433c645-kube-api-access-hpkh4\") pod \"c6c5ce68-4841-475a-8f97-adceb433c645\" (UID: \"c6c5ce68-4841-475a-8f97-adceb433c645\") " Mar 09 19:02:37 crc kubenswrapper[4821]: I0309 19:02:37.941488 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6c5ce68-4841-475a-8f97-adceb433c645-scripts\") pod \"c6c5ce68-4841-475a-8f97-adceb433c645\" (UID: \"c6c5ce68-4841-475a-8f97-adceb433c645\") " Mar 09 19:02:37 crc kubenswrapper[4821]: I0309 19:02:37.941518 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c5ce68-4841-475a-8f97-adceb433c645-combined-ca-bundle\") pod \"c6c5ce68-4841-475a-8f97-adceb433c645\" (UID: \"c6c5ce68-4841-475a-8f97-adceb433c645\") " Mar 09 19:02:37 crc kubenswrapper[4821]: I0309 19:02:37.941551 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6c5ce68-4841-475a-8f97-adceb433c645-log-httpd\") pod \"c6c5ce68-4841-475a-8f97-adceb433c645\" (UID: \"c6c5ce68-4841-475a-8f97-adceb433c645\") " Mar 09 19:02:37 crc kubenswrapper[4821]: I0309 19:02:37.941576 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6c5ce68-4841-475a-8f97-adceb433c645-run-httpd\") pod \"c6c5ce68-4841-475a-8f97-adceb433c645\" (UID: \"c6c5ce68-4841-475a-8f97-adceb433c645\") " Mar 09 19:02:37 crc kubenswrapper[4821]: I0309 19:02:37.941625 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/c6c5ce68-4841-475a-8f97-adceb433c645-sg-core-conf-yaml\") pod \"c6c5ce68-4841-475a-8f97-adceb433c645\" (UID: \"c6c5ce68-4841-475a-8f97-adceb433c645\") " Mar 09 19:02:37 crc kubenswrapper[4821]: I0309 19:02:37.941670 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c5ce68-4841-475a-8f97-adceb433c645-config-data\") pod \"c6c5ce68-4841-475a-8f97-adceb433c645\" (UID: \"c6c5ce68-4841-475a-8f97-adceb433c645\") " Mar 09 19:02:37 crc kubenswrapper[4821]: I0309 19:02:37.941688 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6c5ce68-4841-475a-8f97-adceb433c645-ceilometer-tls-certs\") pod \"c6c5ce68-4841-475a-8f97-adceb433c645\" (UID: \"c6c5ce68-4841-475a-8f97-adceb433c645\") " Mar 09 19:02:37 crc kubenswrapper[4821]: I0309 19:02:37.942119 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6c5ce68-4841-475a-8f97-adceb433c645-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c6c5ce68-4841-475a-8f97-adceb433c645" (UID: "c6c5ce68-4841-475a-8f97-adceb433c645"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:02:37 crc kubenswrapper[4821]: I0309 19:02:37.942246 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6c5ce68-4841-475a-8f97-adceb433c645-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c6c5ce68-4841-475a-8f97-adceb433c645" (UID: "c6c5ce68-4841-475a-8f97-adceb433c645"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:02:37 crc kubenswrapper[4821]: I0309 19:02:37.948341 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6c5ce68-4841-475a-8f97-adceb433c645-kube-api-access-hpkh4" (OuterVolumeSpecName: "kube-api-access-hpkh4") pod "c6c5ce68-4841-475a-8f97-adceb433c645" (UID: "c6c5ce68-4841-475a-8f97-adceb433c645"). InnerVolumeSpecName "kube-api-access-hpkh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:02:37 crc kubenswrapper[4821]: I0309 19:02:37.949773 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c5ce68-4841-475a-8f97-adceb433c645-scripts" (OuterVolumeSpecName: "scripts") pod "c6c5ce68-4841-475a-8f97-adceb433c645" (UID: "c6c5ce68-4841-475a-8f97-adceb433c645"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.001305 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c5ce68-4841-475a-8f97-adceb433c645-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c6c5ce68-4841-475a-8f97-adceb433c645" (UID: "c6c5ce68-4841-475a-8f97-adceb433c645"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.011495 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c5ce68-4841-475a-8f97-adceb433c645-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c6c5ce68-4841-475a-8f97-adceb433c645" (UID: "c6c5ce68-4841-475a-8f97-adceb433c645"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.013018 4821 generic.go:334] "Generic (PLEG): container finished" podID="d2c0b89a-7aa2-44d9-93b3-87c4a29220d5" containerID="8ece91230ceff6800a46592ae0746cc8e6d9d1c02a56ec3a1732061a8870a57d" exitCode=0 Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.013071 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9cfmg" event={"ID":"d2c0b89a-7aa2-44d9-93b3-87c4a29220d5","Type":"ContainerDied","Data":"8ece91230ceff6800a46592ae0746cc8e6d9d1c02a56ec3a1732061a8870a57d"} Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.038400 4821 generic.go:334] "Generic (PLEG): container finished" podID="9329e6dc-c286-4f58-b337-b89debdbcdce" containerID="2ae47c00e4afd9db0724e135ebb8e19692e8a4dcd51a2020ef3fec652ccd1adf" exitCode=0 Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.038460 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"9329e6dc-c286-4f58-b337-b89debdbcdce","Type":"ContainerDied","Data":"2ae47c00e4afd9db0724e135ebb8e19692e8a4dcd51a2020ef3fec652ccd1adf"} Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.043207 4821 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6c5ce68-4841-475a-8f97-adceb433c645-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.043246 4821 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6c5ce68-4841-475a-8f97-adceb433c645-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.043260 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpkh4\" (UniqueName: \"kubernetes.io/projected/c6c5ce68-4841-475a-8f97-adceb433c645-kube-api-access-hpkh4\") 
on node \"crc\" DevicePath \"\"" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.043275 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6c5ce68-4841-475a-8f97-adceb433c645-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.043288 4821 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6c5ce68-4841-475a-8f97-adceb433c645-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.043300 4821 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6c5ce68-4841-475a-8f97-adceb433c645-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.049991 4821 generic.go:334] "Generic (PLEG): container finished" podID="c6c5ce68-4841-475a-8f97-adceb433c645" containerID="3e1f3aa02e707b9794d87a5ea9b0ffd26e79fe6854f9256fca968e0982319fc0" exitCode=137 Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.050068 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c6c5ce68-4841-475a-8f97-adceb433c645","Type":"ContainerDied","Data":"3e1f3aa02e707b9794d87a5ea9b0ffd26e79fe6854f9256fca968e0982319fc0"} Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.050097 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c6c5ce68-4841-475a-8f97-adceb433c645","Type":"ContainerDied","Data":"14a0fe6d94637c15ff43a262ea4aaecfbbfc06c134d68108a695f87fc189d14c"} Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.050112 4821 scope.go:117] "RemoveContainer" containerID="3e1f3aa02e707b9794d87a5ea9b0ffd26e79fe6854f9256fca968e0982319fc0" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.050161 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:02:38 crc kubenswrapper[4821]: E0309 19:02:38.051851 4821 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9329e6dc_c286_4f58_b337_b89debdbcdce.slice/crio-conmon-2ae47c00e4afd9db0724e135ebb8e19692e8a4dcd51a2020ef3fec652ccd1adf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9329e6dc_c286_4f58_b337_b89debdbcdce.slice/crio-2ae47c00e4afd9db0724e135ebb8e19692e8a4dcd51a2020ef3fec652ccd1adf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6c5ce68_4841_475a_8f97_adceb433c645.slice/crio-3e1f3aa02e707b9794d87a5ea9b0ffd26e79fe6854f9256fca968e0982319fc0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9329e6dc_c286_4f58_b337_b89debdbcdce.slice/crio-7897d4780088b2889db3d0c7ff49d03bb41811c866302830d7c81e81b92c3698.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9329e6dc_c286_4f58_b337_b89debdbcdce.slice/crio-conmon-7897d4780088b2889db3d0c7ff49d03bb41811c866302830d7c81e81b92c3698.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6c5ce68_4841_475a_8f97_adceb433c645.slice/crio-conmon-3e1f3aa02e707b9794d87a5ea9b0ffd26e79fe6854f9256fca968e0982319fc0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2c0b89a_7aa2_44d9_93b3_87c4a29220d5.slice/crio-conmon-8ece91230ceff6800a46592ae0746cc8e6d9d1c02a56ec3a1732061a8870a57d.scope\": RecentStats: unable to find data in memory cache]" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 
19:02:38.081953 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c5ce68-4841-475a-8f97-adceb433c645-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6c5ce68-4841-475a-8f97-adceb433c645" (UID: "c6c5ce68-4841-475a-8f97-adceb433c645"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.094420 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c5ce68-4841-475a-8f97-adceb433c645-config-data" (OuterVolumeSpecName: "config-data") pod "c6c5ce68-4841-475a-8f97-adceb433c645" (UID: "c6c5ce68-4841-475a-8f97-adceb433c645"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.104539 4821 scope.go:117] "RemoveContainer" containerID="ee28ca76b753a7afd75fe4c18f562af9948d0248d5722c6a8321745894c72f28" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.128312 4821 scope.go:117] "RemoveContainer" containerID="62da62a5804b1153f95ab794157264de73760e168f2625c20fdbdd712d83895a" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.146531 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c5ce68-4841-475a-8f97-adceb433c645-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.146556 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c5ce68-4841-475a-8f97-adceb433c645-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.152071 4821 scope.go:117] "RemoveContainer" containerID="84a2d3602d5521eb02a2cc260ce8f6c6d32572f7c5ea0f14814a315d4e78e1c3" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.154892 4821 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.181489 4821 scope.go:117] "RemoveContainer" containerID="3e1f3aa02e707b9794d87a5ea9b0ffd26e79fe6854f9256fca968e0982319fc0" Mar 09 19:02:38 crc kubenswrapper[4821]: E0309 19:02:38.183507 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e1f3aa02e707b9794d87a5ea9b0ffd26e79fe6854f9256fca968e0982319fc0\": container with ID starting with 3e1f3aa02e707b9794d87a5ea9b0ffd26e79fe6854f9256fca968e0982319fc0 not found: ID does not exist" containerID="3e1f3aa02e707b9794d87a5ea9b0ffd26e79fe6854f9256fca968e0982319fc0" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.183558 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e1f3aa02e707b9794d87a5ea9b0ffd26e79fe6854f9256fca968e0982319fc0"} err="failed to get container status \"3e1f3aa02e707b9794d87a5ea9b0ffd26e79fe6854f9256fca968e0982319fc0\": rpc error: code = NotFound desc = could not find container \"3e1f3aa02e707b9794d87a5ea9b0ffd26e79fe6854f9256fca968e0982319fc0\": container with ID starting with 3e1f3aa02e707b9794d87a5ea9b0ffd26e79fe6854f9256fca968e0982319fc0 not found: ID does not exist" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.183593 4821 scope.go:117] "RemoveContainer" containerID="ee28ca76b753a7afd75fe4c18f562af9948d0248d5722c6a8321745894c72f28" Mar 09 19:02:38 crc kubenswrapper[4821]: E0309 19:02:38.183921 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee28ca76b753a7afd75fe4c18f562af9948d0248d5722c6a8321745894c72f28\": container with ID starting with ee28ca76b753a7afd75fe4c18f562af9948d0248d5722c6a8321745894c72f28 not found: ID does not exist" containerID="ee28ca76b753a7afd75fe4c18f562af9948d0248d5722c6a8321745894c72f28" Mar 09 19:02:38 crc 
kubenswrapper[4821]: I0309 19:02:38.183949 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee28ca76b753a7afd75fe4c18f562af9948d0248d5722c6a8321745894c72f28"} err="failed to get container status \"ee28ca76b753a7afd75fe4c18f562af9948d0248d5722c6a8321745894c72f28\": rpc error: code = NotFound desc = could not find container \"ee28ca76b753a7afd75fe4c18f562af9948d0248d5722c6a8321745894c72f28\": container with ID starting with ee28ca76b753a7afd75fe4c18f562af9948d0248d5722c6a8321745894c72f28 not found: ID does not exist" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.183967 4821 scope.go:117] "RemoveContainer" containerID="62da62a5804b1153f95ab794157264de73760e168f2625c20fdbdd712d83895a" Mar 09 19:02:38 crc kubenswrapper[4821]: E0309 19:02:38.184296 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62da62a5804b1153f95ab794157264de73760e168f2625c20fdbdd712d83895a\": container with ID starting with 62da62a5804b1153f95ab794157264de73760e168f2625c20fdbdd712d83895a not found: ID does not exist" containerID="62da62a5804b1153f95ab794157264de73760e168f2625c20fdbdd712d83895a" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.184407 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62da62a5804b1153f95ab794157264de73760e168f2625c20fdbdd712d83895a"} err="failed to get container status \"62da62a5804b1153f95ab794157264de73760e168f2625c20fdbdd712d83895a\": rpc error: code = NotFound desc = could not find container \"62da62a5804b1153f95ab794157264de73760e168f2625c20fdbdd712d83895a\": container with ID starting with 62da62a5804b1153f95ab794157264de73760e168f2625c20fdbdd712d83895a not found: ID does not exist" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.184426 4821 scope.go:117] "RemoveContainer" containerID="84a2d3602d5521eb02a2cc260ce8f6c6d32572f7c5ea0f14814a315d4e78e1c3" Mar 09 
19:02:38 crc kubenswrapper[4821]: E0309 19:02:38.184685 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84a2d3602d5521eb02a2cc260ce8f6c6d32572f7c5ea0f14814a315d4e78e1c3\": container with ID starting with 84a2d3602d5521eb02a2cc260ce8f6c6d32572f7c5ea0f14814a315d4e78e1c3 not found: ID does not exist" containerID="84a2d3602d5521eb02a2cc260ce8f6c6d32572f7c5ea0f14814a315d4e78e1c3" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.184710 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84a2d3602d5521eb02a2cc260ce8f6c6d32572f7c5ea0f14814a315d4e78e1c3"} err="failed to get container status \"84a2d3602d5521eb02a2cc260ce8f6c6d32572f7c5ea0f14814a315d4e78e1c3\": rpc error: code = NotFound desc = could not find container \"84a2d3602d5521eb02a2cc260ce8f6c6d32572f7c5ea0f14814a315d4e78e1c3\": container with ID starting with 84a2d3602d5521eb02a2cc260ce8f6c6d32572f7c5ea0f14814a315d4e78e1c3 not found: ID does not exist" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.263286 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9cfmg" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.349816 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9329e6dc-c286-4f58-b337-b89debdbcdce-config-data\") pod \"9329e6dc-c286-4f58-b337-b89debdbcdce\" (UID: \"9329e6dc-c286-4f58-b337-b89debdbcdce\") " Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.350069 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9329e6dc-c286-4f58-b337-b89debdbcdce-custom-prometheus-ca\") pod \"9329e6dc-c286-4f58-b337-b89debdbcdce\" (UID: \"9329e6dc-c286-4f58-b337-b89debdbcdce\") " Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.350201 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9329e6dc-c286-4f58-b337-b89debdbcdce-combined-ca-bundle\") pod \"9329e6dc-c286-4f58-b337-b89debdbcdce\" (UID: \"9329e6dc-c286-4f58-b337-b89debdbcdce\") " Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.350479 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9329e6dc-c286-4f58-b337-b89debdbcdce-logs\") pod \"9329e6dc-c286-4f58-b337-b89debdbcdce\" (UID: \"9329e6dc-c286-4f58-b337-b89debdbcdce\") " Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.350584 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9329e6dc-c286-4f58-b337-b89debdbcdce-internal-tls-certs\") pod \"9329e6dc-c286-4f58-b337-b89debdbcdce\" (UID: \"9329e6dc-c286-4f58-b337-b89debdbcdce\") " Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.350689 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9329e6dc-c286-4f58-b337-b89debdbcdce-public-tls-certs\") pod \"9329e6dc-c286-4f58-b337-b89debdbcdce\" (UID: \"9329e6dc-c286-4f58-b337-b89debdbcdce\") " Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.350781 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g996j\" (UniqueName: \"kubernetes.io/projected/9329e6dc-c286-4f58-b337-b89debdbcdce-kube-api-access-g996j\") pod \"9329e6dc-c286-4f58-b337-b89debdbcdce\" (UID: \"9329e6dc-c286-4f58-b337-b89debdbcdce\") " Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.350925 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9329e6dc-c286-4f58-b337-b89debdbcdce-logs" (OuterVolumeSpecName: "logs") pod "9329e6dc-c286-4f58-b337-b89debdbcdce" (UID: "9329e6dc-c286-4f58-b337-b89debdbcdce"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.351389 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9329e6dc-c286-4f58-b337-b89debdbcdce-logs\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.368350 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9329e6dc-c286-4f58-b337-b89debdbcdce-kube-api-access-g996j" (OuterVolumeSpecName: "kube-api-access-g996j") pod "9329e6dc-c286-4f58-b337-b89debdbcdce" (UID: "9329e6dc-c286-4f58-b337-b89debdbcdce"). InnerVolumeSpecName "kube-api-access-g996j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.382574 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9329e6dc-c286-4f58-b337-b89debdbcdce-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "9329e6dc-c286-4f58-b337-b89debdbcdce" (UID: "9329e6dc-c286-4f58-b337-b89debdbcdce"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.406367 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.420935 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9329e6dc-c286-4f58-b337-b89debdbcdce-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9329e6dc-c286-4f58-b337-b89debdbcdce" (UID: "9329e6dc-c286-4f58-b337-b89debdbcdce"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.422850 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9329e6dc-c286-4f58-b337-b89debdbcdce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9329e6dc-c286-4f58-b337-b89debdbcdce" (UID: "9329e6dc-c286-4f58-b337-b89debdbcdce"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.426526 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.428490 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9329e6dc-c286-4f58-b337-b89debdbcdce-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9329e6dc-c286-4f58-b337-b89debdbcdce" (UID: "9329e6dc-c286-4f58-b337-b89debdbcdce"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.431428 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9329e6dc-c286-4f58-b337-b89debdbcdce-config-data" (OuterVolumeSpecName: "config-data") pod "9329e6dc-c286-4f58-b337-b89debdbcdce" (UID: "9329e6dc-c286-4f58-b337-b89debdbcdce"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.453431 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:02:38 crc kubenswrapper[4821]: E0309 19:02:38.453783 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c0b89a-7aa2-44d9-93b3-87c4a29220d5" containerName="extract-utilities" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.453795 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c0b89a-7aa2-44d9-93b3-87c4a29220d5" containerName="extract-utilities" Mar 09 19:02:38 crc kubenswrapper[4821]: E0309 19:02:38.453813 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c5ce68-4841-475a-8f97-adceb433c645" containerName="sg-core" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.453819 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c5ce68-4841-475a-8f97-adceb433c645" containerName="sg-core" Mar 09 19:02:38 crc kubenswrapper[4821]: E0309 19:02:38.453828 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9329e6dc-c286-4f58-b337-b89debdbcdce" containerName="watcher-kuttl-api-log" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.453833 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="9329e6dc-c286-4f58-b337-b89debdbcdce" containerName="watcher-kuttl-api-log" Mar 09 19:02:38 crc kubenswrapper[4821]: E0309 19:02:38.453846 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c0b89a-7aa2-44d9-93b3-87c4a29220d5" containerName="registry-server" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.453853 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c0b89a-7aa2-44d9-93b3-87c4a29220d5" containerName="registry-server" Mar 09 19:02:38 crc kubenswrapper[4821]: E0309 19:02:38.453863 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c5ce68-4841-475a-8f97-adceb433c645" 
containerName="ceilometer-notification-agent" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.453869 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c5ce68-4841-475a-8f97-adceb433c645" containerName="ceilometer-notification-agent" Mar 09 19:02:38 crc kubenswrapper[4821]: E0309 19:02:38.453886 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c0b89a-7aa2-44d9-93b3-87c4a29220d5" containerName="extract-content" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.453892 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c0b89a-7aa2-44d9-93b3-87c4a29220d5" containerName="extract-content" Mar 09 19:02:38 crc kubenswrapper[4821]: E0309 19:02:38.453903 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c5ce68-4841-475a-8f97-adceb433c645" containerName="proxy-httpd" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.453910 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c5ce68-4841-475a-8f97-adceb433c645" containerName="proxy-httpd" Mar 09 19:02:38 crc kubenswrapper[4821]: E0309 19:02:38.453922 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c5ce68-4841-475a-8f97-adceb433c645" containerName="ceilometer-central-agent" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.453928 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c5ce68-4841-475a-8f97-adceb433c645" containerName="ceilometer-central-agent" Mar 09 19:02:38 crc kubenswrapper[4821]: E0309 19:02:38.453936 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9329e6dc-c286-4f58-b337-b89debdbcdce" containerName="watcher-api" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.453942 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="9329e6dc-c286-4f58-b337-b89debdbcdce" containerName="watcher-api" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.453982 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2c0b89a-7aa2-44d9-93b3-87c4a29220d5-utilities\") pod \"d2c0b89a-7aa2-44d9-93b3-87c4a29220d5\" (UID: \"d2c0b89a-7aa2-44d9-93b3-87c4a29220d5\") " Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.454079 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2c0b89a-7aa2-44d9-93b3-87c4a29220d5" containerName="registry-server" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.454091 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c5ce68-4841-475a-8f97-adceb433c645" containerName="ceilometer-central-agent" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.454099 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="9329e6dc-c286-4f58-b337-b89debdbcdce" containerName="watcher-api" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.454111 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c5ce68-4841-475a-8f97-adceb433c645" containerName="ceilometer-notification-agent" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.454121 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c5ce68-4841-475a-8f97-adceb433c645" containerName="proxy-httpd" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.454132 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c5ce68-4841-475a-8f97-adceb433c645" containerName="sg-core" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.454142 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="9329e6dc-c286-4f58-b337-b89debdbcdce" containerName="watcher-kuttl-api-log" Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.454201 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ff48p\" (UniqueName: \"kubernetes.io/projected/d2c0b89a-7aa2-44d9-93b3-87c4a29220d5-kube-api-access-ff48p\") pod \"d2c0b89a-7aa2-44d9-93b3-87c4a29220d5\" (UID: \"d2c0b89a-7aa2-44d9-93b3-87c4a29220d5\") " 
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.454277 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2c0b89a-7aa2-44d9-93b3-87c4a29220d5-catalog-content\") pod \"d2c0b89a-7aa2-44d9-93b3-87c4a29220d5\" (UID: \"d2c0b89a-7aa2-44d9-93b3-87c4a29220d5\") "
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.454766 4821 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9329e6dc-c286-4f58-b337-b89debdbcdce-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.454786 4821 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9329e6dc-c286-4f58-b337-b89debdbcdce-public-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.454800 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g996j\" (UniqueName: \"kubernetes.io/projected/9329e6dc-c286-4f58-b337-b89debdbcdce-kube-api-access-g996j\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.454812 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9329e6dc-c286-4f58-b337-b89debdbcdce-config-data\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.454823 4821 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9329e6dc-c286-4f58-b337-b89debdbcdce-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.454836 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9329e6dc-c286-4f58-b337-b89debdbcdce-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.454873 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2c0b89a-7aa2-44d9-93b3-87c4a29220d5-utilities" (OuterVolumeSpecName: "utilities") pod "d2c0b89a-7aa2-44d9-93b3-87c4a29220d5" (UID: "d2c0b89a-7aa2-44d9-93b3-87c4a29220d5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.455577 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.457557 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2c0b89a-7aa2-44d9-93b3-87c4a29220d5-kube-api-access-ff48p" (OuterVolumeSpecName: "kube-api-access-ff48p") pod "d2c0b89a-7aa2-44d9-93b3-87c4a29220d5" (UID: "d2c0b89a-7aa2-44d9-93b3-87c4a29220d5"). InnerVolumeSpecName "kube-api-access-ff48p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.467238 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.468167 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc"
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.468361 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.468544 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.540857 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2c0b89a-7aa2-44d9-93b3-87c4a29220d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2c0b89a-7aa2-44d9-93b3-87c4a29220d5" (UID: "d2c0b89a-7aa2-44d9-93b3-87c4a29220d5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.556091 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck7cz\" (UniqueName: \"kubernetes.io/projected/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-kube-api-access-ck7cz\") pod \"ceilometer-0\" (UID: \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.556174 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-log-httpd\") pod \"ceilometer-0\" (UID: \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.556194 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.556210 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.556232 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-scripts\") pod \"ceilometer-0\" (UID: \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.556297 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-config-data\") pod \"ceilometer-0\" (UID: \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.556340 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-run-httpd\") pod \"ceilometer-0\" (UID: \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.556364 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.556443 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ff48p\" (UniqueName: \"kubernetes.io/projected/d2c0b89a-7aa2-44d9-93b3-87c4a29220d5-kube-api-access-ff48p\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.556457 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2c0b89a-7aa2-44d9-93b3-87c4a29220d5-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.556469 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2c0b89a-7aa2-44d9-93b3-87c4a29220d5-utilities\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.657536 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-config-data\") pod \"ceilometer-0\" (UID: \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.657609 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-run-httpd\") pod \"ceilometer-0\" (UID: \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.657658 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.657740 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck7cz\" (UniqueName: \"kubernetes.io/projected/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-kube-api-access-ck7cz\") pod \"ceilometer-0\" (UID: \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.657788 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-log-httpd\") pod \"ceilometer-0\" (UID: \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.657811 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.657835 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.657876 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-scripts\") pod \"ceilometer-0\" (UID: \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.658936 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-run-httpd\") pod \"ceilometer-0\" (UID: \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.663043 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-log-httpd\") pod \"ceilometer-0\" (UID: \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.663258 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-scripts\") pod \"ceilometer-0\" (UID: \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.665944 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.666679 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-config-data\") pod \"ceilometer-0\" (UID: \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.668448 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.669572 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.682175 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck7cz\" (UniqueName: \"kubernetes.io/projected/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-kube-api-access-ck7cz\") pod \"ceilometer-0\" (UID: \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:38 crc kubenswrapper[4821]: I0309 19:02:38.801233 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.063526 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9cfmg" event={"ID":"d2c0b89a-7aa2-44d9-93b3-87c4a29220d5","Type":"ContainerDied","Data":"907164eb1f90e046f6d5ff6ea066a235984f6864f3ec3e706838a1194bbb617b"}
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.063582 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9cfmg"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.063748 4821 scope.go:117] "RemoveContainer" containerID="8ece91230ceff6800a46592ae0746cc8e6d9d1c02a56ec3a1732061a8870a57d"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.067871 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"9329e6dc-c286-4f58-b337-b89debdbcdce","Type":"ContainerDied","Data":"3e0194230e26a3b5b51d685e9ecb15fd8bbe885624e45bbead0572ffec46df71"}
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.067948 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.097149 4821 scope.go:117] "RemoveContainer" containerID="e05218d399182bfe218d1d3439e4fee34992f38c580315f53cbc40c547a85c94"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.099632 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9cfmg"]
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.111652 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9cfmg"]
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.118310 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.127560 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.128394 4821 scope.go:117] "RemoveContainer" containerID="b86197b12d885778545e545a7a4a1e6f89f28d29755afef2608aaaf24bd8cf4e"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.137486 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.138942 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.142674 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.142898 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.143021 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.175436 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.190190 4821 scope.go:117] "RemoveContainer" containerID="2ae47c00e4afd9db0724e135ebb8e19692e8a4dcd51a2020ef3fec652ccd1adf"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.207200 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.217442 4821 scope.go:117] "RemoveContainer" containerID="7897d4780088b2889db3d0c7ff49d03bb41811c866302830d7c81e81b92c3698"
Mar 09 19:02:39 crc kubenswrapper[4821]: W0309 19:02:39.223617 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf9dd5a3_bef8_4cdf_9e76_a5a9f4da1e30.slice/crio-721552c171be09be84ad4c6f89b6a7862c624be13dea47d483575f6baee2c676 WatchSource:0}: Error finding container 721552c171be09be84ad4c6f89b6a7862c624be13dea47d483575f6baee2c676: Status 404 returned error can't find the container with id 721552c171be09be84ad4c6f89b6a7862c624be13dea47d483575f6baee2c676
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.266524 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj6n8\" (UniqueName: \"kubernetes.io/projected/97f343f1-8b12-4818-8083-5ef8a01e75df-kube-api-access-pj6n8\") pod \"watcher-kuttl-api-0\" (UID: \"97f343f1-8b12-4818-8083-5ef8a01e75df\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.266775 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"97f343f1-8b12-4818-8083-5ef8a01e75df\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.266806 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97f343f1-8b12-4818-8083-5ef8a01e75df-logs\") pod \"watcher-kuttl-api-0\" (UID: \"97f343f1-8b12-4818-8083-5ef8a01e75df\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.266826 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"97f343f1-8b12-4818-8083-5ef8a01e75df\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.266845 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"97f343f1-8b12-4818-8083-5ef8a01e75df\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.266882 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"97f343f1-8b12-4818-8083-5ef8a01e75df\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.266914 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"97f343f1-8b12-4818-8083-5ef8a01e75df\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.368233 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj6n8\" (UniqueName: \"kubernetes.io/projected/97f343f1-8b12-4818-8083-5ef8a01e75df-kube-api-access-pj6n8\") pod \"watcher-kuttl-api-0\" (UID: \"97f343f1-8b12-4818-8083-5ef8a01e75df\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.368914 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"97f343f1-8b12-4818-8083-5ef8a01e75df\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.369921 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97f343f1-8b12-4818-8083-5ef8a01e75df-logs\") pod \"watcher-kuttl-api-0\" (UID: \"97f343f1-8b12-4818-8083-5ef8a01e75df\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.370072 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"97f343f1-8b12-4818-8083-5ef8a01e75df\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.370189 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"97f343f1-8b12-4818-8083-5ef8a01e75df\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.370351 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"97f343f1-8b12-4818-8083-5ef8a01e75df\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.370487 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"97f343f1-8b12-4818-8083-5ef8a01e75df\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.370555 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97f343f1-8b12-4818-8083-5ef8a01e75df-logs\") pod \"watcher-kuttl-api-0\" (UID: \"97f343f1-8b12-4818-8083-5ef8a01e75df\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.374233 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"97f343f1-8b12-4818-8083-5ef8a01e75df\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.375024 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"97f343f1-8b12-4818-8083-5ef8a01e75df\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.376885 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"97f343f1-8b12-4818-8083-5ef8a01e75df\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.379590 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"97f343f1-8b12-4818-8083-5ef8a01e75df\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.386546 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"97f343f1-8b12-4818-8083-5ef8a01e75df\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.386558 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj6n8\" (UniqueName: \"kubernetes.io/projected/97f343f1-8b12-4818-8083-5ef8a01e75df-kube-api-access-pj6n8\") pod \"watcher-kuttl-api-0\" (UID: \"97f343f1-8b12-4818-8083-5ef8a01e75df\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.503134 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.561886 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9329e6dc-c286-4f58-b337-b89debdbcdce" path="/var/lib/kubelet/pods/9329e6dc-c286-4f58-b337-b89debdbcdce/volumes"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.562858 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6c5ce68-4841-475a-8f97-adceb433c645" path="/var/lib/kubelet/pods/c6c5ce68-4841-475a-8f97-adceb433c645/volumes"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.563744 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2c0b89a-7aa2-44d9-93b3-87c4a29220d5" path="/var/lib/kubelet/pods/d2c0b89a-7aa2-44d9-93b3-87c4a29220d5/volumes"
Mar 09 19:02:39 crc kubenswrapper[4821]: I0309 19:02:39.960900 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 19:02:39 crc kubenswrapper[4821]: W0309 19:02:39.962379 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97f343f1_8b12_4818_8083_5ef8a01e75df.slice/crio-abfa2570fc66fe87c3e044fc0a15e8e2ea83433c696f56ccaf402a040b488273 WatchSource:0}: Error finding container abfa2570fc66fe87c3e044fc0a15e8e2ea83433c696f56ccaf402a040b488273: Status 404 returned error can't find the container with id abfa2570fc66fe87c3e044fc0a15e8e2ea83433c696f56ccaf402a040b488273
Mar 09 19:02:40 crc kubenswrapper[4821]: I0309 19:02:40.077470 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"97f343f1-8b12-4818-8083-5ef8a01e75df","Type":"ContainerStarted","Data":"abfa2570fc66fe87c3e044fc0a15e8e2ea83433c696f56ccaf402a040b488273"}
Mar 09 19:02:40 crc kubenswrapper[4821]: I0309 19:02:40.079037 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30","Type":"ContainerStarted","Data":"0e4f3a36c218c60c1b1113f636f9c49ba66a52ea092899b272aa07a31fe0bc64"}
Mar 09 19:02:40 crc kubenswrapper[4821]: I0309 19:02:40.079056 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30","Type":"ContainerStarted","Data":"721552c171be09be84ad4c6f89b6a7862c624be13dea47d483575f6baee2c676"}
Mar 09 19:02:41 crc kubenswrapper[4821]: I0309 19:02:41.090421 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30","Type":"ContainerStarted","Data":"e0b94e16af85d7a4d1e1ece1ef2efccfe3d7a58588a6d2664cf08762418ee40c"}
Mar 09 19:02:41 crc kubenswrapper[4821]: I0309 19:02:41.092338 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"97f343f1-8b12-4818-8083-5ef8a01e75df","Type":"ContainerStarted","Data":"8dc306490480e8511fca5d9d5e21abbba8e9eedbfb1abbe747d8f955287fbb27"}
Mar 09 19:02:41 crc kubenswrapper[4821]: I0309 19:02:41.092377 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"97f343f1-8b12-4818-8083-5ef8a01e75df","Type":"ContainerStarted","Data":"4bc48bdf50830c20b9c72c31496e300c68511fbdc0cdea6b5a7baa7d8732a816"}
Mar 09 19:02:41 crc kubenswrapper[4821]: I0309 19:02:41.092657 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:02:41 crc kubenswrapper[4821]: I0309 19:02:41.120753 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.120728276 podStartE2EDuration="2.120728276s" podCreationTimestamp="2026-03-09 19:02:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:02:41.11353018 +0000 UTC m=+2298.274906046" watchObservedRunningTime="2026-03-09 19:02:41.120728276 +0000 UTC m=+2298.282104132"
Mar 09 19:02:41 crc kubenswrapper[4821]: I0309 19:02:41.527138 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 19:02:41 crc kubenswrapper[4821]: I0309 19:02:41.573825 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-q2gsp"]
Mar 09 19:02:41 crc kubenswrapper[4821]: I0309 19:02:41.582854 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-q2gsp"]
Mar 09 19:02:41 crc kubenswrapper[4821]: I0309 19:02:41.670986 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Mar 09 19:02:41 crc kubenswrapper[4821]: I0309 19:02:41.671240 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="09d63032-6b0e-408d-a39c-b069ffe922cf" containerName="watcher-decision-engine" containerID="cri-o://43204d3f9fdd1bb93d4a963762384498396bc9f304d5e1f0e28ebacc45b5ed1d" gracePeriod=30
Mar 09 19:02:41 crc kubenswrapper[4821]: I0309 19:02:41.687449 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watchera975-account-delete-gk2g5"]
Mar 09 19:02:41 crc kubenswrapper[4821]: I0309 19:02:41.688534 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchera975-account-delete-gk2g5"
Mar 09 19:02:41 crc kubenswrapper[4821]: I0309 19:02:41.705916 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watchera975-account-delete-gk2g5"]
Mar 09 19:02:41 crc kubenswrapper[4821]: I0309 19:02:41.764022 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Mar 09 19:02:41 crc kubenswrapper[4821]: I0309 19:02:41.764218 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="06420d04-f54d-43c6-b0bb-b1f375758d54" containerName="watcher-applier" containerID="cri-o://f68a15cb645347423ce1d7dfc43a077df60d33147e98c4ca990eae041aba14e2" gracePeriod=30
Mar 09 19:02:41 crc kubenswrapper[4821]: I0309 19:02:41.814172 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzqcl\" (UniqueName: \"kubernetes.io/projected/385c0389-6ba4-4b5e-a571-b5fa39c50036-kube-api-access-rzqcl\") pod \"watchera975-account-delete-gk2g5\" (UID: \"385c0389-6ba4-4b5e-a571-b5fa39c50036\") " pod="watcher-kuttl-default/watchera975-account-delete-gk2g5"
Mar 09 19:02:41 crc kubenswrapper[4821]: I0309 19:02:41.814246 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/385c0389-6ba4-4b5e-a571-b5fa39c50036-operator-scripts\") pod \"watchera975-account-delete-gk2g5\" (UID: \"385c0389-6ba4-4b5e-a571-b5fa39c50036\") " pod="watcher-kuttl-default/watchera975-account-delete-gk2g5"
Mar 09 19:02:41 crc kubenswrapper[4821]: I0309 19:02:41.915602 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/385c0389-6ba4-4b5e-a571-b5fa39c50036-operator-scripts\") pod \"watchera975-account-delete-gk2g5\" (UID: \"385c0389-6ba4-4b5e-a571-b5fa39c50036\") " pod="watcher-kuttl-default/watchera975-account-delete-gk2g5"
Mar 09 19:02:41 crc kubenswrapper[4821]: I0309 19:02:41.915722 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzqcl\" (UniqueName: \"kubernetes.io/projected/385c0389-6ba4-4b5e-a571-b5fa39c50036-kube-api-access-rzqcl\") pod \"watchera975-account-delete-gk2g5\" (UID: \"385c0389-6ba4-4b5e-a571-b5fa39c50036\") " pod="watcher-kuttl-default/watchera975-account-delete-gk2g5"
Mar 09 19:02:41 crc kubenswrapper[4821]: I0309 19:02:41.916344 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/385c0389-6ba4-4b5e-a571-b5fa39c50036-operator-scripts\") pod \"watchera975-account-delete-gk2g5\" (UID: \"385c0389-6ba4-4b5e-a571-b5fa39c50036\") " pod="watcher-kuttl-default/watchera975-account-delete-gk2g5"
Mar 09 19:02:41 crc kubenswrapper[4821]: I0309 19:02:41.938227 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzqcl\" (UniqueName: \"kubernetes.io/projected/385c0389-6ba4-4b5e-a571-b5fa39c50036-kube-api-access-rzqcl\") pod \"watchera975-account-delete-gk2g5\" (UID: \"385c0389-6ba4-4b5e-a571-b5fa39c50036\") " pod="watcher-kuttl-default/watchera975-account-delete-gk2g5"
Mar 09 19:02:42 crc kubenswrapper[4821]: I0309 19:02:42.002751 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchera975-account-delete-gk2g5"
Mar 09 19:02:42 crc kubenswrapper[4821]: I0309 19:02:42.107146 4821 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed."
pod="watcher-kuttl-default/watcher-kuttl-api-0" secret="" err="secret \"watcher-watcher-kuttl-dockercfg-n8f65\" not found" Mar 09 19:02:42 crc kubenswrapper[4821]: I0309 19:02:42.107633 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30","Type":"ContainerStarted","Data":"b2573175d64875334a69d16af065bba3ddac4749407c634d2e3743bdea7ec5d5"} Mar 09 19:02:42 crc kubenswrapper[4821]: E0309 19:02:42.165886 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f68a15cb645347423ce1d7dfc43a077df60d33147e98c4ca990eae041aba14e2" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 09 19:02:42 crc kubenswrapper[4821]: E0309 19:02:42.167615 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f68a15cb645347423ce1d7dfc43a077df60d33147e98c4ca990eae041aba14e2" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 09 19:02:42 crc kubenswrapper[4821]: E0309 19:02:42.169248 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f68a15cb645347423ce1d7dfc43a077df60d33147e98c4ca990eae041aba14e2" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 09 19:02:42 crc kubenswrapper[4821]: E0309 19:02:42.169274 4821 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="06420d04-f54d-43c6-b0bb-b1f375758d54" 
containerName="watcher-applier" Mar 09 19:02:42 crc kubenswrapper[4821]: E0309 19:02:42.221924 4821 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-api-config-data: secret "watcher-kuttl-api-config-data" not found Mar 09 19:02:42 crc kubenswrapper[4821]: E0309 19:02:42.221975 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-config-data podName:97f343f1-8b12-4818-8083-5ef8a01e75df nodeName:}" failed. No retries permitted until 2026-03-09 19:02:42.721958533 +0000 UTC m=+2299.883334389 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-config-data") pod "watcher-kuttl-api-0" (UID: "97f343f1-8b12-4818-8083-5ef8a01e75df") : secret "watcher-kuttl-api-config-data" not found Mar 09 19:02:42 crc kubenswrapper[4821]: I0309 19:02:42.533397 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watchera975-account-delete-gk2g5"] Mar 09 19:02:42 crc kubenswrapper[4821]: W0309 19:02:42.537486 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod385c0389_6ba4_4b5e_a571_b5fa39c50036.slice/crio-2d8d404510b2e8533167e8d6a16cfa6aedc2ea22a8d00756794d543a41324af8 WatchSource:0}: Error finding container 2d8d404510b2e8533167e8d6a16cfa6aedc2ea22a8d00756794d543a41324af8: Status 404 returned error can't find the container with id 2d8d404510b2e8533167e8d6a16cfa6aedc2ea22a8d00756794d543a41324af8 Mar 09 19:02:42 crc kubenswrapper[4821]: E0309 19:02:42.730060 4821 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-api-config-data: secret "watcher-kuttl-api-config-data" not found Mar 09 19:02:42 crc kubenswrapper[4821]: E0309 19:02:42.730375 4821 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-config-data podName:97f343f1-8b12-4818-8083-5ef8a01e75df nodeName:}" failed. No retries permitted until 2026-03-09 19:02:43.730357333 +0000 UTC m=+2300.891733209 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-config-data") pod "watcher-kuttl-api-0" (UID: "97f343f1-8b12-4818-8083-5ef8a01e75df") : secret "watcher-kuttl-api-config-data" not found Mar 09 19:02:43 crc kubenswrapper[4821]: I0309 19:02:43.121874 4821 generic.go:334] "Generic (PLEG): container finished" podID="385c0389-6ba4-4b5e-a571-b5fa39c50036" containerID="20a46d1741bd4c964d79f701d204ed53424c6769488c29bfd121fa5c6c396cc0" exitCode=0 Mar 09 19:02:43 crc kubenswrapper[4821]: I0309 19:02:43.121968 4821 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 09 19:02:43 crc kubenswrapper[4821]: I0309 19:02:43.121974 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchera975-account-delete-gk2g5" event={"ID":"385c0389-6ba4-4b5e-a571-b5fa39c50036","Type":"ContainerDied","Data":"20a46d1741bd4c964d79f701d204ed53424c6769488c29bfd121fa5c6c396cc0"} Mar 09 19:02:43 crc kubenswrapper[4821]: I0309 19:02:43.122025 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchera975-account-delete-gk2g5" event={"ID":"385c0389-6ba4-4b5e-a571-b5fa39c50036","Type":"ContainerStarted","Data":"2d8d404510b2e8533167e8d6a16cfa6aedc2ea22a8d00756794d543a41324af8"} Mar 09 19:02:43 crc kubenswrapper[4821]: I0309 19:02:43.122093 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="97f343f1-8b12-4818-8083-5ef8a01e75df" containerName="watcher-kuttl-api-log" containerID="cri-o://4bc48bdf50830c20b9c72c31496e300c68511fbdc0cdea6b5a7baa7d8732a816" gracePeriod=30 Mar 09 19:02:43 crc kubenswrapper[4821]: 
I0309 19:02:43.122129 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="97f343f1-8b12-4818-8083-5ef8a01e75df" containerName="watcher-api" containerID="cri-o://8dc306490480e8511fca5d9d5e21abbba8e9eedbfb1abbe747d8f955287fbb27" gracePeriod=30 Mar 09 19:02:43 crc kubenswrapper[4821]: I0309 19:02:43.129636 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="97f343f1-8b12-4818-8083-5ef8a01e75df" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.189:9322/\": EOF" Mar 09 19:02:43 crc kubenswrapper[4821]: I0309 19:02:43.563332 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38a9453b-15e6-4dc8-baaa-8f046f60cad8" path="/var/lib/kubelet/pods/38a9453b-15e6-4dc8-baaa-8f046f60cad8/volumes" Mar 09 19:02:43 crc kubenswrapper[4821]: E0309 19:02:43.746067 4821 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-api-config-data: secret "watcher-kuttl-api-config-data" not found Mar 09 19:02:43 crc kubenswrapper[4821]: E0309 19:02:43.746404 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-config-data podName:97f343f1-8b12-4818-8083-5ef8a01e75df nodeName:}" failed. No retries permitted until 2026-03-09 19:02:45.746389831 +0000 UTC m=+2302.907765687 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-config-data") pod "watcher-kuttl-api-0" (UID: "97f343f1-8b12-4818-8083-5ef8a01e75df") : secret "watcher-kuttl-api-config-data" not found Mar 09 19:02:44 crc kubenswrapper[4821]: I0309 19:02:44.130828 4821 generic.go:334] "Generic (PLEG): container finished" podID="97f343f1-8b12-4818-8083-5ef8a01e75df" containerID="4bc48bdf50830c20b9c72c31496e300c68511fbdc0cdea6b5a7baa7d8732a816" exitCode=143 Mar 09 19:02:44 crc kubenswrapper[4821]: I0309 19:02:44.130903 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"97f343f1-8b12-4818-8083-5ef8a01e75df","Type":"ContainerDied","Data":"4bc48bdf50830c20b9c72c31496e300c68511fbdc0cdea6b5a7baa7d8732a816"} Mar 09 19:02:44 crc kubenswrapper[4821]: I0309 19:02:44.505953 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:44 crc kubenswrapper[4821]: I0309 19:02:44.632462 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watchera975-account-delete-gk2g5" Mar 09 19:02:44 crc kubenswrapper[4821]: I0309 19:02:44.691692 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:02:44 crc kubenswrapper[4821]: I0309 19:02:44.763969 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/385c0389-6ba4-4b5e-a571-b5fa39c50036-operator-scripts\") pod \"385c0389-6ba4-4b5e-a571-b5fa39c50036\" (UID: \"385c0389-6ba4-4b5e-a571-b5fa39c50036\") " Mar 09 19:02:44 crc kubenswrapper[4821]: I0309 19:02:44.764127 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzqcl\" (UniqueName: \"kubernetes.io/projected/385c0389-6ba4-4b5e-a571-b5fa39c50036-kube-api-access-rzqcl\") pod \"385c0389-6ba4-4b5e-a571-b5fa39c50036\" (UID: \"385c0389-6ba4-4b5e-a571-b5fa39c50036\") " Mar 09 19:02:44 crc kubenswrapper[4821]: I0309 19:02:44.765976 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/385c0389-6ba4-4b5e-a571-b5fa39c50036-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "385c0389-6ba4-4b5e-a571-b5fa39c50036" (UID: "385c0389-6ba4-4b5e-a571-b5fa39c50036"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 19:02:44 crc kubenswrapper[4821]: I0309 19:02:44.783857 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/385c0389-6ba4-4b5e-a571-b5fa39c50036-kube-api-access-rzqcl" (OuterVolumeSpecName: "kube-api-access-rzqcl") pod "385c0389-6ba4-4b5e-a571-b5fa39c50036" (UID: "385c0389-6ba4-4b5e-a571-b5fa39c50036"). InnerVolumeSpecName "kube-api-access-rzqcl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:02:44 crc kubenswrapper[4821]: I0309 19:02:44.865987 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/385c0389-6ba4-4b5e-a571-b5fa39c50036-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:44 crc kubenswrapper[4821]: I0309 19:02:44.866019 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzqcl\" (UniqueName: \"kubernetes.io/projected/385c0389-6ba4-4b5e-a571-b5fa39c50036-kube-api-access-rzqcl\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.144543 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30","Type":"ContainerStarted","Data":"9cf451782596856559f13711196993c20252aabb16ee52ba04c1e10b562dfa60"} Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.145416 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.149897 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watchera975-account-delete-gk2g5" Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.149924 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchera975-account-delete-gk2g5" event={"ID":"385c0389-6ba4-4b5e-a571-b5fa39c50036","Type":"ContainerDied","Data":"2d8d404510b2e8533167e8d6a16cfa6aedc2ea22a8d00756794d543a41324af8"} Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.149966 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d8d404510b2e8533167e8d6a16cfa6aedc2ea22a8d00756794d543a41324af8" Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.152166 4821 generic.go:334] "Generic (PLEG): container finished" podID="06420d04-f54d-43c6-b0bb-b1f375758d54" containerID="f68a15cb645347423ce1d7dfc43a077df60d33147e98c4ca990eae041aba14e2" exitCode=0 Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.152204 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"06420d04-f54d-43c6-b0bb-b1f375758d54","Type":"ContainerDied","Data":"f68a15cb645347423ce1d7dfc43a077df60d33147e98c4ca990eae041aba14e2"} Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.220833 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.244185 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.502362356 podStartE2EDuration="7.244161597s" podCreationTimestamp="2026-03-09 19:02:38 +0000 UTC" firstStartedPulling="2026-03-09 19:02:39.22595004 +0000 UTC m=+2296.387325896" lastFinishedPulling="2026-03-09 19:02:43.967749281 +0000 UTC m=+2301.129125137" observedRunningTime="2026-03-09 19:02:45.189579817 +0000 UTC m=+2302.350955673" watchObservedRunningTime="2026-03-09 19:02:45.244161597 +0000 UTC m=+2302.405537473" Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.249989 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="97f343f1-8b12-4818-8083-5ef8a01e75df" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.189:9322/\": read tcp 10.217.0.2:42820->10.217.0.189:9322: read: connection reset by peer" Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.250544 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="97f343f1-8b12-4818-8083-5ef8a01e75df" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.189:9322/\": dial tcp 10.217.0.189:9322: connect: connection refused" Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.271502 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hb8z\" (UniqueName: \"kubernetes.io/projected/06420d04-f54d-43c6-b0bb-b1f375758d54-kube-api-access-9hb8z\") pod \"06420d04-f54d-43c6-b0bb-b1f375758d54\" (UID: \"06420d04-f54d-43c6-b0bb-b1f375758d54\") " Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.271568 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/06420d04-f54d-43c6-b0bb-b1f375758d54-config-data\") pod \"06420d04-f54d-43c6-b0bb-b1f375758d54\" (UID: \"06420d04-f54d-43c6-b0bb-b1f375758d54\") " Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.271662 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06420d04-f54d-43c6-b0bb-b1f375758d54-logs\") pod \"06420d04-f54d-43c6-b0bb-b1f375758d54\" (UID: \"06420d04-f54d-43c6-b0bb-b1f375758d54\") " Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.271738 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06420d04-f54d-43c6-b0bb-b1f375758d54-combined-ca-bundle\") pod \"06420d04-f54d-43c6-b0bb-b1f375758d54\" (UID: \"06420d04-f54d-43c6-b0bb-b1f375758d54\") " Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.272688 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06420d04-f54d-43c6-b0bb-b1f375758d54-logs" (OuterVolumeSpecName: "logs") pod "06420d04-f54d-43c6-b0bb-b1f375758d54" (UID: "06420d04-f54d-43c6-b0bb-b1f375758d54"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.289490 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06420d04-f54d-43c6-b0bb-b1f375758d54-kube-api-access-9hb8z" (OuterVolumeSpecName: "kube-api-access-9hb8z") pod "06420d04-f54d-43c6-b0bb-b1f375758d54" (UID: "06420d04-f54d-43c6-b0bb-b1f375758d54"). InnerVolumeSpecName "kube-api-access-9hb8z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.298336 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06420d04-f54d-43c6-b0bb-b1f375758d54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06420d04-f54d-43c6-b0bb-b1f375758d54" (UID: "06420d04-f54d-43c6-b0bb-b1f375758d54"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.328833 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06420d04-f54d-43c6-b0bb-b1f375758d54-config-data" (OuterVolumeSpecName: "config-data") pod "06420d04-f54d-43c6-b0bb-b1f375758d54" (UID: "06420d04-f54d-43c6-b0bb-b1f375758d54"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.373879 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06420d04-f54d-43c6-b0bb-b1f375758d54-logs\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.373912 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06420d04-f54d-43c6-b0bb-b1f375758d54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.373924 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hb8z\" (UniqueName: \"kubernetes.io/projected/06420d04-f54d-43c6-b0bb-b1f375758d54-kube-api-access-9hb8z\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.373932 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06420d04-f54d-43c6-b0bb-b1f375758d54-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 
19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.713914 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.782531 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-combined-ca-bundle\") pod \"97f343f1-8b12-4818-8083-5ef8a01e75df\" (UID: \"97f343f1-8b12-4818-8083-5ef8a01e75df\") " Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.782601 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj6n8\" (UniqueName: \"kubernetes.io/projected/97f343f1-8b12-4818-8083-5ef8a01e75df-kube-api-access-pj6n8\") pod \"97f343f1-8b12-4818-8083-5ef8a01e75df\" (UID: \"97f343f1-8b12-4818-8083-5ef8a01e75df\") " Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.782638 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-custom-prometheus-ca\") pod \"97f343f1-8b12-4818-8083-5ef8a01e75df\" (UID: \"97f343f1-8b12-4818-8083-5ef8a01e75df\") " Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.782713 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-internal-tls-certs\") pod \"97f343f1-8b12-4818-8083-5ef8a01e75df\" (UID: \"97f343f1-8b12-4818-8083-5ef8a01e75df\") " Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.782754 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-public-tls-certs\") pod \"97f343f1-8b12-4818-8083-5ef8a01e75df\" (UID: \"97f343f1-8b12-4818-8083-5ef8a01e75df\") " Mar 09 
19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.782790 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-config-data\") pod \"97f343f1-8b12-4818-8083-5ef8a01e75df\" (UID: \"97f343f1-8b12-4818-8083-5ef8a01e75df\") " Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.783106 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97f343f1-8b12-4818-8083-5ef8a01e75df-logs\") pod \"97f343f1-8b12-4818-8083-5ef8a01e75df\" (UID: \"97f343f1-8b12-4818-8083-5ef8a01e75df\") " Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.783245 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97f343f1-8b12-4818-8083-5ef8a01e75df-logs" (OuterVolumeSpecName: "logs") pod "97f343f1-8b12-4818-8083-5ef8a01e75df" (UID: "97f343f1-8b12-4818-8083-5ef8a01e75df"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.783580 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97f343f1-8b12-4818-8083-5ef8a01e75df-logs\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.787354 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97f343f1-8b12-4818-8083-5ef8a01e75df-kube-api-access-pj6n8" (OuterVolumeSpecName: "kube-api-access-pj6n8") pod "97f343f1-8b12-4818-8083-5ef8a01e75df" (UID: "97f343f1-8b12-4818-8083-5ef8a01e75df"). InnerVolumeSpecName "kube-api-access-pj6n8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.808044 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "97f343f1-8b12-4818-8083-5ef8a01e75df" (UID: "97f343f1-8b12-4818-8083-5ef8a01e75df"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.822076 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97f343f1-8b12-4818-8083-5ef8a01e75df" (UID: "97f343f1-8b12-4818-8083-5ef8a01e75df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.824396 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "97f343f1-8b12-4818-8083-5ef8a01e75df" (UID: "97f343f1-8b12-4818-8083-5ef8a01e75df"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.831517 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "97f343f1-8b12-4818-8083-5ef8a01e75df" (UID: "97f343f1-8b12-4818-8083-5ef8a01e75df"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.842759 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-config-data" (OuterVolumeSpecName: "config-data") pod "97f343f1-8b12-4818-8083-5ef8a01e75df" (UID: "97f343f1-8b12-4818-8083-5ef8a01e75df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.884945 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.884977 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj6n8\" (UniqueName: \"kubernetes.io/projected/97f343f1-8b12-4818-8083-5ef8a01e75df-kube-api-access-pj6n8\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.884988 4821 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.884999 4821 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.885007 4821 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:45 crc kubenswrapper[4821]: I0309 19:02:45.885017 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/97f343f1-8b12-4818-8083-5ef8a01e75df-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.162899 4821 generic.go:334] "Generic (PLEG): container finished" podID="09d63032-6b0e-408d-a39c-b069ffe922cf" containerID="43204d3f9fdd1bb93d4a963762384498396bc9f304d5e1f0e28ebacc45b5ed1d" exitCode=0 Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.162954 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"09d63032-6b0e-408d-a39c-b069ffe922cf","Type":"ContainerDied","Data":"43204d3f9fdd1bb93d4a963762384498396bc9f304d5e1f0e28ebacc45b5ed1d"} Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.166384 4821 generic.go:334] "Generic (PLEG): container finished" podID="97f343f1-8b12-4818-8083-5ef8a01e75df" containerID="8dc306490480e8511fca5d9d5e21abbba8e9eedbfb1abbe747d8f955287fbb27" exitCode=0 Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.166470 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.166470 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"97f343f1-8b12-4818-8083-5ef8a01e75df","Type":"ContainerDied","Data":"8dc306490480e8511fca5d9d5e21abbba8e9eedbfb1abbe747d8f955287fbb27"}
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.166584 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"97f343f1-8b12-4818-8083-5ef8a01e75df","Type":"ContainerDied","Data":"abfa2570fc66fe87c3e044fc0a15e8e2ea83433c696f56ccaf402a040b488273"}
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.166602 4821 scope.go:117] "RemoveContainer" containerID="8dc306490480e8511fca5d9d5e21abbba8e9eedbfb1abbe747d8f955287fbb27"
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.173169 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.173204 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"06420d04-f54d-43c6-b0bb-b1f375758d54","Type":"ContainerDied","Data":"395aec144f8097b0fe4f86ebfa211715fefcaee395b2965d8d8aa30c8d1912bb"}
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.173345 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30" containerName="ceilometer-central-agent" containerID="cri-o://0e4f3a36c218c60c1b1113f636f9c49ba66a52ea092899b272aa07a31fe0bc64" gracePeriod=30
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.173610 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30" containerName="sg-core" containerID="cri-o://b2573175d64875334a69d16af065bba3ddac4749407c634d2e3743bdea7ec5d5" gracePeriod=30
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.173645 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30" containerName="ceilometer-notification-agent" containerID="cri-o://e0b94e16af85d7a4d1e1ece1ef2efccfe3d7a58588a6d2664cf08762418ee40c" gracePeriod=30
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.173765 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30" containerName="proxy-httpd" containerID="cri-o://9cf451782596856559f13711196993c20252aabb16ee52ba04c1e10b562dfa60" gracePeriod=30
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.246399 4821 scope.go:117] "RemoveContainer" containerID="4bc48bdf50830c20b9c72c31496e300c68511fbdc0cdea6b5a7baa7d8732a816"
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.264415 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.271623 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.274917 4821 scope.go:117] "RemoveContainer" containerID="8dc306490480e8511fca5d9d5e21abbba8e9eedbfb1abbe747d8f955287fbb27"
Mar 09 19:02:46 crc kubenswrapper[4821]: E0309 19:02:46.276810 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dc306490480e8511fca5d9d5e21abbba8e9eedbfb1abbe747d8f955287fbb27\": container with ID starting with 8dc306490480e8511fca5d9d5e21abbba8e9eedbfb1abbe747d8f955287fbb27 not found: ID does not exist" containerID="8dc306490480e8511fca5d9d5e21abbba8e9eedbfb1abbe747d8f955287fbb27"
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.276847 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dc306490480e8511fca5d9d5e21abbba8e9eedbfb1abbe747d8f955287fbb27"} err="failed to get container status \"8dc306490480e8511fca5d9d5e21abbba8e9eedbfb1abbe747d8f955287fbb27\": rpc error: code = NotFound desc = could not find container \"8dc306490480e8511fca5d9d5e21abbba8e9eedbfb1abbe747d8f955287fbb27\": container with ID starting with 8dc306490480e8511fca5d9d5e21abbba8e9eedbfb1abbe747d8f955287fbb27 not found: ID does not exist"
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.276869 4821 scope.go:117] "RemoveContainer" containerID="4bc48bdf50830c20b9c72c31496e300c68511fbdc0cdea6b5a7baa7d8732a816"
Mar 09 19:02:46 crc kubenswrapper[4821]: E0309 19:02:46.279826 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bc48bdf50830c20b9c72c31496e300c68511fbdc0cdea6b5a7baa7d8732a816\": container with ID starting with 4bc48bdf50830c20b9c72c31496e300c68511fbdc0cdea6b5a7baa7d8732a816 not found: ID does not exist" containerID="4bc48bdf50830c20b9c72c31496e300c68511fbdc0cdea6b5a7baa7d8732a816"
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.279865 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bc48bdf50830c20b9c72c31496e300c68511fbdc0cdea6b5a7baa7d8732a816"} err="failed to get container status \"4bc48bdf50830c20b9c72c31496e300c68511fbdc0cdea6b5a7baa7d8732a816\": rpc error: code = NotFound desc = could not find container \"4bc48bdf50830c20b9c72c31496e300c68511fbdc0cdea6b5a7baa7d8732a816\": container with ID starting with 4bc48bdf50830c20b9c72c31496e300c68511fbdc0cdea6b5a7baa7d8732a816 not found: ID does not exist"
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.279883 4821 scope.go:117] "RemoveContainer" containerID="f68a15cb645347423ce1d7dfc43a077df60d33147e98c4ca990eae041aba14e2"
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.281242 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.308889 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.412005 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.515981 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09d63032-6b0e-408d-a39c-b069ffe922cf-combined-ca-bundle\") pod \"09d63032-6b0e-408d-a39c-b069ffe922cf\" (UID: \"09d63032-6b0e-408d-a39c-b069ffe922cf\") "
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.516479 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09d63032-6b0e-408d-a39c-b069ffe922cf-logs\") pod \"09d63032-6b0e-408d-a39c-b069ffe922cf\" (UID: \"09d63032-6b0e-408d-a39c-b069ffe922cf\") "
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.516518 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/09d63032-6b0e-408d-a39c-b069ffe922cf-custom-prometheus-ca\") pod \"09d63032-6b0e-408d-a39c-b069ffe922cf\" (UID: \"09d63032-6b0e-408d-a39c-b069ffe922cf\") "
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.516546 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mwln\" (UniqueName: \"kubernetes.io/projected/09d63032-6b0e-408d-a39c-b069ffe922cf-kube-api-access-5mwln\") pod \"09d63032-6b0e-408d-a39c-b069ffe922cf\" (UID: \"09d63032-6b0e-408d-a39c-b069ffe922cf\") "
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.516645 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09d63032-6b0e-408d-a39c-b069ffe922cf-config-data\") pod \"09d63032-6b0e-408d-a39c-b069ffe922cf\" (UID: \"09d63032-6b0e-408d-a39c-b069ffe922cf\") "
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.517933 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09d63032-6b0e-408d-a39c-b069ffe922cf-logs" (OuterVolumeSpecName: "logs") pod "09d63032-6b0e-408d-a39c-b069ffe922cf" (UID: "09d63032-6b0e-408d-a39c-b069ffe922cf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.522852 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09d63032-6b0e-408d-a39c-b069ffe922cf-kube-api-access-5mwln" (OuterVolumeSpecName: "kube-api-access-5mwln") pod "09d63032-6b0e-408d-a39c-b069ffe922cf" (UID: "09d63032-6b0e-408d-a39c-b069ffe922cf"). InnerVolumeSpecName "kube-api-access-5mwln". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.542990 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09d63032-6b0e-408d-a39c-b069ffe922cf-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "09d63032-6b0e-408d-a39c-b069ffe922cf" (UID: "09d63032-6b0e-408d-a39c-b069ffe922cf"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.544869 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09d63032-6b0e-408d-a39c-b069ffe922cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09d63032-6b0e-408d-a39c-b069ffe922cf" (UID: "09d63032-6b0e-408d-a39c-b069ffe922cf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.564911 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09d63032-6b0e-408d-a39c-b069ffe922cf-config-data" (OuterVolumeSpecName: "config-data") pod "09d63032-6b0e-408d-a39c-b069ffe922cf" (UID: "09d63032-6b0e-408d-a39c-b069ffe922cf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.620590 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09d63032-6b0e-408d-a39c-b069ffe922cf-config-data\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.620632 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09d63032-6b0e-408d-a39c-b069ffe922cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.620651 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09d63032-6b0e-408d-a39c-b069ffe922cf-logs\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.620663 4821 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/09d63032-6b0e-408d-a39c-b069ffe922cf-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.620676 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mwln\" (UniqueName: \"kubernetes.io/projected/09d63032-6b0e-408d-a39c-b069ffe922cf-kube-api-access-5mwln\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.747805 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-jndn9"]
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.761174 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-jndn9"]
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.770631 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watchera975-account-delete-gk2g5"]
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.777203 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-a975-account-create-update-hdfhw"]
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.785810 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watchera975-account-delete-gk2g5"]
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.793201 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-a975-account-create-update-hdfhw"]
Mar 09 19:02:46 crc kubenswrapper[4821]: I0309 19:02:46.934019 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.031001 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-sg-core-conf-yaml\") pod \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\" (UID: \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\") "
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.031050 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-scripts\") pod \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\" (UID: \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\") "
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.031126 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-log-httpd\") pod \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\" (UID: \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\") "
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.031175 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-run-httpd\") pod \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\" (UID: \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\") "
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.031227 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ck7cz\" (UniqueName: \"kubernetes.io/projected/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-kube-api-access-ck7cz\") pod \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\" (UID: \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\") "
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.031253 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-config-data\") pod \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\" (UID: \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\") "
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.031315 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-ceilometer-tls-certs\") pod \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\" (UID: \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\") "
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.031362 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-combined-ca-bundle\") pod \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\" (UID: \"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30\") "
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.031541 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30" (UID: "bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.031557 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30" (UID: "bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.032017 4821 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.032041 4821 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.034717 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-kube-api-access-ck7cz" (OuterVolumeSpecName: "kube-api-access-ck7cz") pod "bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30" (UID: "bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30"). InnerVolumeSpecName "kube-api-access-ck7cz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.035226 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-scripts" (OuterVolumeSpecName: "scripts") pod "bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30" (UID: "bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.052460 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30" (UID: "bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.084981 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30" (UID: "bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.107914 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30" (UID: "bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.120431 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-config-data" (OuterVolumeSpecName: "config-data") pod "bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30" (UID: "bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.133860 4821 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.133920 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.133947 4821 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.133972 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-scripts\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.133995 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ck7cz\" (UniqueName: \"kubernetes.io/projected/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-kube-api-access-ck7cz\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.134021 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30-config-data\") on node \"crc\" DevicePath \"\""
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.184707 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"09d63032-6b0e-408d-a39c-b069ffe922cf","Type":"ContainerDied","Data":"7f261eed932325cb5c06c80c574a6efe593e03152c2d77578402f7b6c01617df"}
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.184764 4821 scope.go:117] "RemoveContainer" containerID="43204d3f9fdd1bb93d4a963762384498396bc9f304d5e1f0e28ebacc45b5ed1d"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.185726 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.190192 4821 generic.go:334] "Generic (PLEG): container finished" podID="bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30" containerID="9cf451782596856559f13711196993c20252aabb16ee52ba04c1e10b562dfa60" exitCode=0
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.190220 4821 generic.go:334] "Generic (PLEG): container finished" podID="bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30" containerID="b2573175d64875334a69d16af065bba3ddac4749407c634d2e3743bdea7ec5d5" exitCode=2
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.190221 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30","Type":"ContainerDied","Data":"9cf451782596856559f13711196993c20252aabb16ee52ba04c1e10b562dfa60"}
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.190246 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.190252 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30","Type":"ContainerDied","Data":"b2573175d64875334a69d16af065bba3ddac4749407c634d2e3743bdea7ec5d5"}
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.190364 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30","Type":"ContainerDied","Data":"e0b94e16af85d7a4d1e1ece1ef2efccfe3d7a58588a6d2664cf08762418ee40c"}
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.190231 4821 generic.go:334] "Generic (PLEG): container finished" podID="bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30" containerID="e0b94e16af85d7a4d1e1ece1ef2efccfe3d7a58588a6d2664cf08762418ee40c" exitCode=0
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.190392 4821 generic.go:334] "Generic (PLEG): container finished" podID="bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30" containerID="0e4f3a36c218c60c1b1113f636f9c49ba66a52ea092899b272aa07a31fe0bc64" exitCode=0
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.190409 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30","Type":"ContainerDied","Data":"0e4f3a36c218c60c1b1113f636f9c49ba66a52ea092899b272aa07a31fe0bc64"}
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.190425 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30","Type":"ContainerDied","Data":"721552c171be09be84ad4c6f89b6a7862c624be13dea47d483575f6baee2c676"}
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.222767 4821 scope.go:117] "RemoveContainer" containerID="9cf451782596856559f13711196993c20252aabb16ee52ba04c1e10b562dfa60"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.231993 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.249789 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.260376 4821 scope.go:117] "RemoveContainer" containerID="b2573175d64875334a69d16af065bba3ddac4749407c634d2e3743bdea7ec5d5"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.264866 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.274016 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.299787 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:02:47 crc kubenswrapper[4821]: E0309 19:02:47.301275 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30" containerName="sg-core"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.301295 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30" containerName="sg-core"
Mar 09 19:02:47 crc kubenswrapper[4821]: E0309 19:02:47.301307 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06420d04-f54d-43c6-b0bb-b1f375758d54" containerName="watcher-applier"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.301313 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="06420d04-f54d-43c6-b0bb-b1f375758d54" containerName="watcher-applier"
Mar 09 19:02:47 crc kubenswrapper[4821]: E0309 19:02:47.301335 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30" containerName="proxy-httpd"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.301341 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30" containerName="proxy-httpd"
Mar 09 19:02:47 crc kubenswrapper[4821]: E0309 19:02:47.301354 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30" containerName="ceilometer-notification-agent"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.301360 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30" containerName="ceilometer-notification-agent"
Mar 09 19:02:47 crc kubenswrapper[4821]: E0309 19:02:47.301373 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97f343f1-8b12-4818-8083-5ef8a01e75df" containerName="watcher-kuttl-api-log"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.301380 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="97f343f1-8b12-4818-8083-5ef8a01e75df" containerName="watcher-kuttl-api-log"
Mar 09 19:02:47 crc kubenswrapper[4821]: E0309 19:02:47.301392 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30" containerName="ceilometer-central-agent"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.301408 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30" containerName="ceilometer-central-agent"
Mar 09 19:02:47 crc kubenswrapper[4821]: E0309 19:02:47.301417 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09d63032-6b0e-408d-a39c-b069ffe922cf" containerName="watcher-decision-engine"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.301423 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="09d63032-6b0e-408d-a39c-b069ffe922cf" containerName="watcher-decision-engine"
Mar 09 19:02:47 crc kubenswrapper[4821]: E0309 19:02:47.301435 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97f343f1-8b12-4818-8083-5ef8a01e75df" containerName="watcher-api"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.301441 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="97f343f1-8b12-4818-8083-5ef8a01e75df" containerName="watcher-api"
Mar 09 19:02:47 crc kubenswrapper[4821]: E0309 19:02:47.301457 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="385c0389-6ba4-4b5e-a571-b5fa39c50036" containerName="mariadb-account-delete"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.301465 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="385c0389-6ba4-4b5e-a571-b5fa39c50036" containerName="mariadb-account-delete"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.301601 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="97f343f1-8b12-4818-8083-5ef8a01e75df" containerName="watcher-kuttl-api-log"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.301617 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30" containerName="proxy-httpd"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.301632 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="97f343f1-8b12-4818-8083-5ef8a01e75df" containerName="watcher-api"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.301642 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30" containerName="ceilometer-notification-agent"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.301649 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30" containerName="sg-core"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.301658 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30" containerName="ceilometer-central-agent"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.301665 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="385c0389-6ba4-4b5e-a571-b5fa39c50036" containerName="mariadb-account-delete"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.301673 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="09d63032-6b0e-408d-a39c-b069ffe922cf" containerName="watcher-decision-engine"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.301684 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="06420d04-f54d-43c6-b0bb-b1f375758d54" containerName="watcher-applier"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.305643 4821 scope.go:117] "RemoveContainer" containerID="e0b94e16af85d7a4d1e1ece1ef2efccfe3d7a58588a6d2664cf08762418ee40c"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.306664 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.309157 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.309391 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.309308 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.310894 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.358497 4821 scope.go:117] "RemoveContainer" containerID="0e4f3a36c218c60c1b1113f636f9c49ba66a52ea092899b272aa07a31fe0bc64"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.391300 4821 scope.go:117] "RemoveContainer" containerID="9cf451782596856559f13711196993c20252aabb16ee52ba04c1e10b562dfa60"
Mar 09 19:02:47 crc kubenswrapper[4821]: E0309 19:02:47.394697 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cf451782596856559f13711196993c20252aabb16ee52ba04c1e10b562dfa60\": container with ID starting with 9cf451782596856559f13711196993c20252aabb16ee52ba04c1e10b562dfa60 not found: ID does not exist" containerID="9cf451782596856559f13711196993c20252aabb16ee52ba04c1e10b562dfa60"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.394735 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cf451782596856559f13711196993c20252aabb16ee52ba04c1e10b562dfa60"} err="failed to get container status \"9cf451782596856559f13711196993c20252aabb16ee52ba04c1e10b562dfa60\": rpc error: code = NotFound desc = could not find container \"9cf451782596856559f13711196993c20252aabb16ee52ba04c1e10b562dfa60\": container with ID starting with 9cf451782596856559f13711196993c20252aabb16ee52ba04c1e10b562dfa60 not found: ID does not exist"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.394757 4821 scope.go:117] "RemoveContainer" containerID="b2573175d64875334a69d16af065bba3ddac4749407c634d2e3743bdea7ec5d5"
Mar 09 19:02:47 crc kubenswrapper[4821]: E0309 19:02:47.399904 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2573175d64875334a69d16af065bba3ddac4749407c634d2e3743bdea7ec5d5\": container with ID starting with b2573175d64875334a69d16af065bba3ddac4749407c634d2e3743bdea7ec5d5 not found: ID does not exist" containerID="b2573175d64875334a69d16af065bba3ddac4749407c634d2e3743bdea7ec5d5"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.399944 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2573175d64875334a69d16af065bba3ddac4749407c634d2e3743bdea7ec5d5"} err="failed to get container status \"b2573175d64875334a69d16af065bba3ddac4749407c634d2e3743bdea7ec5d5\": rpc error: code = NotFound desc = could not find container \"b2573175d64875334a69d16af065bba3ddac4749407c634d2e3743bdea7ec5d5\": container with ID starting with b2573175d64875334a69d16af065bba3ddac4749407c634d2e3743bdea7ec5d5 not found: ID does not exist"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.399975 4821 scope.go:117] "RemoveContainer" containerID="e0b94e16af85d7a4d1e1ece1ef2efccfe3d7a58588a6d2664cf08762418ee40c"
Mar 09 19:02:47 crc kubenswrapper[4821]: E0309 19:02:47.405676 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0b94e16af85d7a4d1e1ece1ef2efccfe3d7a58588a6d2664cf08762418ee40c\": container with ID starting with e0b94e16af85d7a4d1e1ece1ef2efccfe3d7a58588a6d2664cf08762418ee40c not found: ID does not exist" containerID="e0b94e16af85d7a4d1e1ece1ef2efccfe3d7a58588a6d2664cf08762418ee40c"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.405719 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0b94e16af85d7a4d1e1ece1ef2efccfe3d7a58588a6d2664cf08762418ee40c"} err="failed to get container status \"e0b94e16af85d7a4d1e1ece1ef2efccfe3d7a58588a6d2664cf08762418ee40c\": rpc error: code = NotFound desc = could not find container \"e0b94e16af85d7a4d1e1ece1ef2efccfe3d7a58588a6d2664cf08762418ee40c\": container with ID starting with e0b94e16af85d7a4d1e1ece1ef2efccfe3d7a58588a6d2664cf08762418ee40c not found: ID does not exist"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.405746 4821 scope.go:117] "RemoveContainer" containerID="0e4f3a36c218c60c1b1113f636f9c49ba66a52ea092899b272aa07a31fe0bc64"
Mar 09 19:02:47 crc kubenswrapper[4821]: E0309 19:02:47.409361 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e4f3a36c218c60c1b1113f636f9c49ba66a52ea092899b272aa07a31fe0bc64\": container with ID starting with 0e4f3a36c218c60c1b1113f636f9c49ba66a52ea092899b272aa07a31fe0bc64 not found: ID does not exist" containerID="0e4f3a36c218c60c1b1113f636f9c49ba66a52ea092899b272aa07a31fe0bc64"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.409440 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e4f3a36c218c60c1b1113f636f9c49ba66a52ea092899b272aa07a31fe0bc64"} err="failed to get container status \"0e4f3a36c218c60c1b1113f636f9c49ba66a52ea092899b272aa07a31fe0bc64\": rpc error: code = NotFound desc = could not find container \"0e4f3a36c218c60c1b1113f636f9c49ba66a52ea092899b272aa07a31fe0bc64\": container with ID starting with 0e4f3a36c218c60c1b1113f636f9c49ba66a52ea092899b272aa07a31fe0bc64 not found: ID does not exist"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.409472 4821 scope.go:117] "RemoveContainer" containerID="9cf451782596856559f13711196993c20252aabb16ee52ba04c1e10b562dfa60"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.412587 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cf451782596856559f13711196993c20252aabb16ee52ba04c1e10b562dfa60"} err="failed to get container status \"9cf451782596856559f13711196993c20252aabb16ee52ba04c1e10b562dfa60\": rpc error: code = NotFound desc = could not find container \"9cf451782596856559f13711196993c20252aabb16ee52ba04c1e10b562dfa60\": container with ID starting with 9cf451782596856559f13711196993c20252aabb16ee52ba04c1e10b562dfa60 not found: ID does not exist"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.412628 4821 scope.go:117] "RemoveContainer" containerID="b2573175d64875334a69d16af065bba3ddac4749407c634d2e3743bdea7ec5d5"
Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.416730 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2573175d64875334a69d16af065bba3ddac4749407c634d2e3743bdea7ec5d5"} err="failed to get container status
\"b2573175d64875334a69d16af065bba3ddac4749407c634d2e3743bdea7ec5d5\": rpc error: code = NotFound desc = could not find container \"b2573175d64875334a69d16af065bba3ddac4749407c634d2e3743bdea7ec5d5\": container with ID starting with b2573175d64875334a69d16af065bba3ddac4749407c634d2e3743bdea7ec5d5 not found: ID does not exist" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.416797 4821 scope.go:117] "RemoveContainer" containerID="e0b94e16af85d7a4d1e1ece1ef2efccfe3d7a58588a6d2664cf08762418ee40c" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.420568 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0b94e16af85d7a4d1e1ece1ef2efccfe3d7a58588a6d2664cf08762418ee40c"} err="failed to get container status \"e0b94e16af85d7a4d1e1ece1ef2efccfe3d7a58588a6d2664cf08762418ee40c\": rpc error: code = NotFound desc = could not find container \"e0b94e16af85d7a4d1e1ece1ef2efccfe3d7a58588a6d2664cf08762418ee40c\": container with ID starting with e0b94e16af85d7a4d1e1ece1ef2efccfe3d7a58588a6d2664cf08762418ee40c not found: ID does not exist" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.420602 4821 scope.go:117] "RemoveContainer" containerID="0e4f3a36c218c60c1b1113f636f9c49ba66a52ea092899b272aa07a31fe0bc64" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.424605 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e4f3a36c218c60c1b1113f636f9c49ba66a52ea092899b272aa07a31fe0bc64"} err="failed to get container status \"0e4f3a36c218c60c1b1113f636f9c49ba66a52ea092899b272aa07a31fe0bc64\": rpc error: code = NotFound desc = could not find container \"0e4f3a36c218c60c1b1113f636f9c49ba66a52ea092899b272aa07a31fe0bc64\": container with ID starting with 0e4f3a36c218c60c1b1113f636f9c49ba66a52ea092899b272aa07a31fe0bc64 not found: ID does not exist" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.424649 4821 scope.go:117] "RemoveContainer" 
containerID="9cf451782596856559f13711196993c20252aabb16ee52ba04c1e10b562dfa60" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.428652 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cf451782596856559f13711196993c20252aabb16ee52ba04c1e10b562dfa60"} err="failed to get container status \"9cf451782596856559f13711196993c20252aabb16ee52ba04c1e10b562dfa60\": rpc error: code = NotFound desc = could not find container \"9cf451782596856559f13711196993c20252aabb16ee52ba04c1e10b562dfa60\": container with ID starting with 9cf451782596856559f13711196993c20252aabb16ee52ba04c1e10b562dfa60 not found: ID does not exist" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.428725 4821 scope.go:117] "RemoveContainer" containerID="b2573175d64875334a69d16af065bba3ddac4749407c634d2e3743bdea7ec5d5" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.432619 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2573175d64875334a69d16af065bba3ddac4749407c634d2e3743bdea7ec5d5"} err="failed to get container status \"b2573175d64875334a69d16af065bba3ddac4749407c634d2e3743bdea7ec5d5\": rpc error: code = NotFound desc = could not find container \"b2573175d64875334a69d16af065bba3ddac4749407c634d2e3743bdea7ec5d5\": container with ID starting with b2573175d64875334a69d16af065bba3ddac4749407c634d2e3743bdea7ec5d5 not found: ID does not exist" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.432663 4821 scope.go:117] "RemoveContainer" containerID="e0b94e16af85d7a4d1e1ece1ef2efccfe3d7a58588a6d2664cf08762418ee40c" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.433033 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0b94e16af85d7a4d1e1ece1ef2efccfe3d7a58588a6d2664cf08762418ee40c"} err="failed to get container status \"e0b94e16af85d7a4d1e1ece1ef2efccfe3d7a58588a6d2664cf08762418ee40c\": rpc error: code = NotFound desc = could 
not find container \"e0b94e16af85d7a4d1e1ece1ef2efccfe3d7a58588a6d2664cf08762418ee40c\": container with ID starting with e0b94e16af85d7a4d1e1ece1ef2efccfe3d7a58588a6d2664cf08762418ee40c not found: ID does not exist" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.433054 4821 scope.go:117] "RemoveContainer" containerID="0e4f3a36c218c60c1b1113f636f9c49ba66a52ea092899b272aa07a31fe0bc64" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.433236 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e4f3a36c218c60c1b1113f636f9c49ba66a52ea092899b272aa07a31fe0bc64"} err="failed to get container status \"0e4f3a36c218c60c1b1113f636f9c49ba66a52ea092899b272aa07a31fe0bc64\": rpc error: code = NotFound desc = could not find container \"0e4f3a36c218c60c1b1113f636f9c49ba66a52ea092899b272aa07a31fe0bc64\": container with ID starting with 0e4f3a36c218c60c1b1113f636f9c49ba66a52ea092899b272aa07a31fe0bc64 not found: ID does not exist" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.433260 4821 scope.go:117] "RemoveContainer" containerID="9cf451782596856559f13711196993c20252aabb16ee52ba04c1e10b562dfa60" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.433503 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cf451782596856559f13711196993c20252aabb16ee52ba04c1e10b562dfa60"} err="failed to get container status \"9cf451782596856559f13711196993c20252aabb16ee52ba04c1e10b562dfa60\": rpc error: code = NotFound desc = could not find container \"9cf451782596856559f13711196993c20252aabb16ee52ba04c1e10b562dfa60\": container with ID starting with 9cf451782596856559f13711196993c20252aabb16ee52ba04c1e10b562dfa60 not found: ID does not exist" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.433533 4821 scope.go:117] "RemoveContainer" containerID="b2573175d64875334a69d16af065bba3ddac4749407c634d2e3743bdea7ec5d5" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 
19:02:47.433743 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2573175d64875334a69d16af065bba3ddac4749407c634d2e3743bdea7ec5d5"} err="failed to get container status \"b2573175d64875334a69d16af065bba3ddac4749407c634d2e3743bdea7ec5d5\": rpc error: code = NotFound desc = could not find container \"b2573175d64875334a69d16af065bba3ddac4749407c634d2e3743bdea7ec5d5\": container with ID starting with b2573175d64875334a69d16af065bba3ddac4749407c634d2e3743bdea7ec5d5 not found: ID does not exist" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.433762 4821 scope.go:117] "RemoveContainer" containerID="e0b94e16af85d7a4d1e1ece1ef2efccfe3d7a58588a6d2664cf08762418ee40c" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.433946 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0b94e16af85d7a4d1e1ece1ef2efccfe3d7a58588a6d2664cf08762418ee40c"} err="failed to get container status \"e0b94e16af85d7a4d1e1ece1ef2efccfe3d7a58588a6d2664cf08762418ee40c\": rpc error: code = NotFound desc = could not find container \"e0b94e16af85d7a4d1e1ece1ef2efccfe3d7a58588a6d2664cf08762418ee40c\": container with ID starting with e0b94e16af85d7a4d1e1ece1ef2efccfe3d7a58588a6d2664cf08762418ee40c not found: ID does not exist" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.433966 4821 scope.go:117] "RemoveContainer" containerID="0e4f3a36c218c60c1b1113f636f9c49ba66a52ea092899b272aa07a31fe0bc64" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.434174 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e4f3a36c218c60c1b1113f636f9c49ba66a52ea092899b272aa07a31fe0bc64"} err="failed to get container status \"0e4f3a36c218c60c1b1113f636f9c49ba66a52ea092899b272aa07a31fe0bc64\": rpc error: code = NotFound desc = could not find container \"0e4f3a36c218c60c1b1113f636f9c49ba66a52ea092899b272aa07a31fe0bc64\": container with ID starting with 
0e4f3a36c218c60c1b1113f636f9c49ba66a52ea092899b272aa07a31fe0bc64 not found: ID does not exist" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.443611 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd9d8599-271a-4e48-bf76-a80f0b416949-scripts\") pod \"ceilometer-0\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.443865 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd9d8599-271a-4e48-bf76-a80f0b416949-log-httpd\") pod \"ceilometer-0\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.444019 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd9d8599-271a-4e48-bf76-a80f0b416949-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.444113 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd9d8599-271a-4e48-bf76-a80f0b416949-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.444142 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd9d8599-271a-4e48-bf76-a80f0b416949-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " 
pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.444161 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxxks\" (UniqueName: \"kubernetes.io/projected/bd9d8599-271a-4e48-bf76-a80f0b416949-kube-api-access-qxxks\") pod \"ceilometer-0\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.444209 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd9d8599-271a-4e48-bf76-a80f0b416949-run-httpd\") pod \"ceilometer-0\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.444224 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd9d8599-271a-4e48-bf76-a80f0b416949-config-data\") pod \"ceilometer-0\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.545863 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd9d8599-271a-4e48-bf76-a80f0b416949-scripts\") pod \"ceilometer-0\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.545927 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd9d8599-271a-4e48-bf76-a80f0b416949-log-httpd\") pod \"ceilometer-0\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.545988 4821 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd9d8599-271a-4e48-bf76-a80f0b416949-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.546012 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd9d8599-271a-4e48-bf76-a80f0b416949-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.546027 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd9d8599-271a-4e48-bf76-a80f0b416949-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.546045 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxxks\" (UniqueName: \"kubernetes.io/projected/bd9d8599-271a-4e48-bf76-a80f0b416949-kube-api-access-qxxks\") pod \"ceilometer-0\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.546087 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd9d8599-271a-4e48-bf76-a80f0b416949-run-httpd\") pod \"ceilometer-0\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.546101 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bd9d8599-271a-4e48-bf76-a80f0b416949-config-data\") pod \"ceilometer-0\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.546471 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd9d8599-271a-4e48-bf76-a80f0b416949-log-httpd\") pod \"ceilometer-0\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.548484 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd9d8599-271a-4e48-bf76-a80f0b416949-run-httpd\") pod \"ceilometer-0\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.550205 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd9d8599-271a-4e48-bf76-a80f0b416949-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.553104 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd9d8599-271a-4e48-bf76-a80f0b416949-scripts\") pod \"ceilometer-0\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.553130 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd9d8599-271a-4e48-bf76-a80f0b416949-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:02:47 crc 
kubenswrapper[4821]: I0309 19:02:47.554130 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd9d8599-271a-4e48-bf76-a80f0b416949-config-data\") pod \"ceilometer-0\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.554564 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd9d8599-271a-4e48-bf76-a80f0b416949-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.576777 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxxks\" (UniqueName: \"kubernetes.io/projected/bd9d8599-271a-4e48-bf76-a80f0b416949-kube-api-access-qxxks\") pod \"ceilometer-0\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.579699 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06420d04-f54d-43c6-b0bb-b1f375758d54" path="/var/lib/kubelet/pods/06420d04-f54d-43c6-b0bb-b1f375758d54/volumes" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.580840 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09d63032-6b0e-408d-a39c-b069ffe922cf" path="/var/lib/kubelet/pods/09d63032-6b0e-408d-a39c-b069ffe922cf/volumes" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.581389 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="281d4c83-a6b3-4a94-b7eb-d200497f1a9a" path="/var/lib/kubelet/pods/281d4c83-a6b3-4a94-b7eb-d200497f1a9a/volumes" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.582379 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="385c0389-6ba4-4b5e-a571-b5fa39c50036" 
path="/var/lib/kubelet/pods/385c0389-6ba4-4b5e-a571-b5fa39c50036/volumes" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.582971 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97f343f1-8b12-4818-8083-5ef8a01e75df" path="/var/lib/kubelet/pods/97f343f1-8b12-4818-8083-5ef8a01e75df/volumes" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.583940 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30" path="/var/lib/kubelet/pods/bf9dd5a3-bef8-4cdf-9e76-a5a9f4da1e30/volumes" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.585746 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0641355-eafb-410b-ad92-26836542589f" path="/var/lib/kubelet/pods/f0641355-eafb-410b-ad92-26836542589f/volumes" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.637708 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.864252 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-4jlhz"] Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.865454 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-4jlhz" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.872014 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-e081-account-create-update-l9lgx"] Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.874129 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-e081-account-create-update-l9lgx" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.876991 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.878865 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-e081-account-create-update-l9lgx"] Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.890120 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-4jlhz"] Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.952545 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0-operator-scripts\") pod \"watcher-e081-account-create-update-l9lgx\" (UID: \"74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0\") " pod="watcher-kuttl-default/watcher-e081-account-create-update-l9lgx" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.952703 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knvqx\" (UniqueName: \"kubernetes.io/projected/74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0-kube-api-access-knvqx\") pod \"watcher-e081-account-create-update-l9lgx\" (UID: \"74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0\") " pod="watcher-kuttl-default/watcher-e081-account-create-update-l9lgx" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.952747 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90b56daf-1bad-435a-83d1-b7eea7444b00-operator-scripts\") pod \"watcher-db-create-4jlhz\" (UID: \"90b56daf-1bad-435a-83d1-b7eea7444b00\") " pod="watcher-kuttl-default/watcher-db-create-4jlhz" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 
19:02:47.952776 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq4ln\" (UniqueName: \"kubernetes.io/projected/90b56daf-1bad-435a-83d1-b7eea7444b00-kube-api-access-kq4ln\") pod \"watcher-db-create-4jlhz\" (UID: \"90b56daf-1bad-435a-83d1-b7eea7444b00\") " pod="watcher-kuttl-default/watcher-db-create-4jlhz" Mar 09 19:02:47 crc kubenswrapper[4821]: I0309 19:02:47.954362 4821 scope.go:117] "RemoveContainer" containerID="86d6ffb67b91499d7e98a6f5a064323b49e2a69542b7cff1ce38c9028a374ea5" Mar 09 19:02:48 crc kubenswrapper[4821]: I0309 19:02:48.054390 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knvqx\" (UniqueName: \"kubernetes.io/projected/74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0-kube-api-access-knvqx\") pod \"watcher-e081-account-create-update-l9lgx\" (UID: \"74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0\") " pod="watcher-kuttl-default/watcher-e081-account-create-update-l9lgx" Mar 09 19:02:48 crc kubenswrapper[4821]: I0309 19:02:48.054442 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90b56daf-1bad-435a-83d1-b7eea7444b00-operator-scripts\") pod \"watcher-db-create-4jlhz\" (UID: \"90b56daf-1bad-435a-83d1-b7eea7444b00\") " pod="watcher-kuttl-default/watcher-db-create-4jlhz" Mar 09 19:02:48 crc kubenswrapper[4821]: I0309 19:02:48.054473 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq4ln\" (UniqueName: \"kubernetes.io/projected/90b56daf-1bad-435a-83d1-b7eea7444b00-kube-api-access-kq4ln\") pod \"watcher-db-create-4jlhz\" (UID: \"90b56daf-1bad-435a-83d1-b7eea7444b00\") " pod="watcher-kuttl-default/watcher-db-create-4jlhz" Mar 09 19:02:48 crc kubenswrapper[4821]: I0309 19:02:48.054526 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0-operator-scripts\") pod \"watcher-e081-account-create-update-l9lgx\" (UID: \"74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0\") " pod="watcher-kuttl-default/watcher-e081-account-create-update-l9lgx" Mar 09 19:02:48 crc kubenswrapper[4821]: I0309 19:02:48.055167 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0-operator-scripts\") pod \"watcher-e081-account-create-update-l9lgx\" (UID: \"74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0\") " pod="watcher-kuttl-default/watcher-e081-account-create-update-l9lgx" Mar 09 19:02:48 crc kubenswrapper[4821]: I0309 19:02:48.055170 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90b56daf-1bad-435a-83d1-b7eea7444b00-operator-scripts\") pod \"watcher-db-create-4jlhz\" (UID: \"90b56daf-1bad-435a-83d1-b7eea7444b00\") " pod="watcher-kuttl-default/watcher-db-create-4jlhz" Mar 09 19:02:48 crc kubenswrapper[4821]: I0309 19:02:48.072520 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq4ln\" (UniqueName: \"kubernetes.io/projected/90b56daf-1bad-435a-83d1-b7eea7444b00-kube-api-access-kq4ln\") pod \"watcher-db-create-4jlhz\" (UID: \"90b56daf-1bad-435a-83d1-b7eea7444b00\") " pod="watcher-kuttl-default/watcher-db-create-4jlhz" Mar 09 19:02:48 crc kubenswrapper[4821]: I0309 19:02:48.073823 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knvqx\" (UniqueName: \"kubernetes.io/projected/74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0-kube-api-access-knvqx\") pod \"watcher-e081-account-create-update-l9lgx\" (UID: \"74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0\") " pod="watcher-kuttl-default/watcher-e081-account-create-update-l9lgx" Mar 09 19:02:48 crc kubenswrapper[4821]: I0309 19:02:48.138932 4821 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:02:48 crc kubenswrapper[4821]: I0309 19:02:48.190277 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-4jlhz" Mar 09 19:02:48 crc kubenswrapper[4821]: I0309 19:02:48.196838 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bd9d8599-271a-4e48-bf76-a80f0b416949","Type":"ContainerStarted","Data":"3df6625abeb04fd52711f649e3cad550ef9342f75783248251e8695865e27818"} Mar 09 19:02:48 crc kubenswrapper[4821]: I0309 19:02:48.209733 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-e081-account-create-update-l9lgx" Mar 09 19:02:48 crc kubenswrapper[4821]: I0309 19:02:48.717965 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-4jlhz"] Mar 09 19:02:48 crc kubenswrapper[4821]: I0309 19:02:48.781776 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-e081-account-create-update-l9lgx"] Mar 09 19:02:49 crc kubenswrapper[4821]: I0309 19:02:49.205723 4821 generic.go:334] "Generic (PLEG): container finished" podID="74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0" containerID="a3de3fa5b0abd0880578873ab6b250d3b791523ac72a82a68f13e1852d71dca6" exitCode=0 Mar 09 19:02:49 crc kubenswrapper[4821]: I0309 19:02:49.205771 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-e081-account-create-update-l9lgx" event={"ID":"74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0","Type":"ContainerDied","Data":"a3de3fa5b0abd0880578873ab6b250d3b791523ac72a82a68f13e1852d71dca6"} Mar 09 19:02:49 crc kubenswrapper[4821]: I0309 19:02:49.206045 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-e081-account-create-update-l9lgx" 
event={"ID":"74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0","Type":"ContainerStarted","Data":"403095dd71e0bd55fd67d62c574ba8a03460f66f83046c9c1122b0396b3c279c"} Mar 09 19:02:49 crc kubenswrapper[4821]: I0309 19:02:49.208848 4821 generic.go:334] "Generic (PLEG): container finished" podID="90b56daf-1bad-435a-83d1-b7eea7444b00" containerID="8fffa671f45df4eb87ecc5da2cf25342ee1a6379c1ad1ede7b8730ead3567da2" exitCode=0 Mar 09 19:02:49 crc kubenswrapper[4821]: I0309 19:02:49.208900 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-4jlhz" event={"ID":"90b56daf-1bad-435a-83d1-b7eea7444b00","Type":"ContainerDied","Data":"8fffa671f45df4eb87ecc5da2cf25342ee1a6379c1ad1ede7b8730ead3567da2"} Mar 09 19:02:49 crc kubenswrapper[4821]: I0309 19:02:49.208931 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-4jlhz" event={"ID":"90b56daf-1bad-435a-83d1-b7eea7444b00","Type":"ContainerStarted","Data":"83930039acab0bdefd4042165618f78da834a01aa10a7531f69803f05ffac205"} Mar 09 19:02:50 crc kubenswrapper[4821]: I0309 19:02:50.218059 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bd9d8599-271a-4e48-bf76-a80f0b416949","Type":"ContainerStarted","Data":"1e2d4f30c2ee0f56954078d46590027087e308014b74b16bb59a27278095b233"} Mar 09 19:02:50 crc kubenswrapper[4821]: I0309 19:02:50.701601 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-4jlhz" Mar 09 19:02:50 crc kubenswrapper[4821]: I0309 19:02:50.771999 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-e081-account-create-update-l9lgx" Mar 09 19:02:50 crc kubenswrapper[4821]: I0309 19:02:50.818363 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90b56daf-1bad-435a-83d1-b7eea7444b00-operator-scripts\") pod \"90b56daf-1bad-435a-83d1-b7eea7444b00\" (UID: \"90b56daf-1bad-435a-83d1-b7eea7444b00\") " Mar 09 19:02:50 crc kubenswrapper[4821]: I0309 19:02:50.818440 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq4ln\" (UniqueName: \"kubernetes.io/projected/90b56daf-1bad-435a-83d1-b7eea7444b00-kube-api-access-kq4ln\") pod \"90b56daf-1bad-435a-83d1-b7eea7444b00\" (UID: \"90b56daf-1bad-435a-83d1-b7eea7444b00\") " Mar 09 19:02:50 crc kubenswrapper[4821]: I0309 19:02:50.819665 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90b56daf-1bad-435a-83d1-b7eea7444b00-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "90b56daf-1bad-435a-83d1-b7eea7444b00" (UID: "90b56daf-1bad-435a-83d1-b7eea7444b00"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 19:02:50 crc kubenswrapper[4821]: I0309 19:02:50.824089 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90b56daf-1bad-435a-83d1-b7eea7444b00-kube-api-access-kq4ln" (OuterVolumeSpecName: "kube-api-access-kq4ln") pod "90b56daf-1bad-435a-83d1-b7eea7444b00" (UID: "90b56daf-1bad-435a-83d1-b7eea7444b00"). InnerVolumeSpecName "kube-api-access-kq4ln". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:02:50 crc kubenswrapper[4821]: I0309 19:02:50.919477 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knvqx\" (UniqueName: \"kubernetes.io/projected/74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0-kube-api-access-knvqx\") pod \"74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0\" (UID: \"74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0\") " Mar 09 19:02:50 crc kubenswrapper[4821]: I0309 19:02:50.919707 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0-operator-scripts\") pod \"74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0\" (UID: \"74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0\") " Mar 09 19:02:50 crc kubenswrapper[4821]: I0309 19:02:50.920178 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90b56daf-1bad-435a-83d1-b7eea7444b00-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:50 crc kubenswrapper[4821]: I0309 19:02:50.920308 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq4ln\" (UniqueName: \"kubernetes.io/projected/90b56daf-1bad-435a-83d1-b7eea7444b00-kube-api-access-kq4ln\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:50 crc kubenswrapper[4821]: I0309 19:02:50.920601 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0" (UID: "74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 19:02:50 crc kubenswrapper[4821]: I0309 19:02:50.922614 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0-kube-api-access-knvqx" (OuterVolumeSpecName: "kube-api-access-knvqx") pod "74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0" (UID: "74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0"). InnerVolumeSpecName "kube-api-access-knvqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:02:51 crc kubenswrapper[4821]: I0309 19:02:51.054567 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knvqx\" (UniqueName: \"kubernetes.io/projected/74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0-kube-api-access-knvqx\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:51 crc kubenswrapper[4821]: I0309 19:02:51.054817 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:51 crc kubenswrapper[4821]: I0309 19:02:51.225629 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-4jlhz" event={"ID":"90b56daf-1bad-435a-83d1-b7eea7444b00","Type":"ContainerDied","Data":"83930039acab0bdefd4042165618f78da834a01aa10a7531f69803f05ffac205"} Mar 09 19:02:51 crc kubenswrapper[4821]: I0309 19:02:51.225677 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83930039acab0bdefd4042165618f78da834a01aa10a7531f69803f05ffac205" Mar 09 19:02:51 crc kubenswrapper[4821]: I0309 19:02:51.225658 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-4jlhz" Mar 09 19:02:51 crc kubenswrapper[4821]: I0309 19:02:51.227102 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-e081-account-create-update-l9lgx" event={"ID":"74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0","Type":"ContainerDied","Data":"403095dd71e0bd55fd67d62c574ba8a03460f66f83046c9c1122b0396b3c279c"} Mar 09 19:02:51 crc kubenswrapper[4821]: I0309 19:02:51.227140 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="403095dd71e0bd55fd67d62c574ba8a03460f66f83046c9c1122b0396b3c279c" Mar 09 19:02:51 crc kubenswrapper[4821]: I0309 19:02:51.227193 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-e081-account-create-update-l9lgx" Mar 09 19:02:51 crc kubenswrapper[4821]: I0309 19:02:51.230341 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bd9d8599-271a-4e48-bf76-a80f0b416949","Type":"ContainerStarted","Data":"350d9810b84b15f86944ce20cc010800452e84fd2f1fe9f2222a1b41e470374d"} Mar 09 19:02:51 crc kubenswrapper[4821]: I0309 19:02:51.230384 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bd9d8599-271a-4e48-bf76-a80f0b416949","Type":"ContainerStarted","Data":"dbe587374c89fec4bcaceb823481255234bc269c44d605917010f08affbec076"} Mar 09 19:02:53 crc kubenswrapper[4821]: I0309 19:02:53.196328 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-b4xcg"] Mar 09 19:02:53 crc kubenswrapper[4821]: E0309 19:02:53.197032 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90b56daf-1bad-435a-83d1-b7eea7444b00" containerName="mariadb-database-create" Mar 09 19:02:53 crc kubenswrapper[4821]: I0309 19:02:53.197043 4821 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="90b56daf-1bad-435a-83d1-b7eea7444b00" containerName="mariadb-database-create" Mar 09 19:02:53 crc kubenswrapper[4821]: E0309 19:02:53.197056 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0" containerName="mariadb-account-create-update" Mar 09 19:02:53 crc kubenswrapper[4821]: I0309 19:02:53.197063 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0" containerName="mariadb-account-create-update" Mar 09 19:02:53 crc kubenswrapper[4821]: I0309 19:02:53.197228 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="90b56daf-1bad-435a-83d1-b7eea7444b00" containerName="mariadb-database-create" Mar 09 19:02:53 crc kubenswrapper[4821]: I0309 19:02:53.197237 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0" containerName="mariadb-account-create-update" Mar 09 19:02:53 crc kubenswrapper[4821]: I0309 19:02:53.197770 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-b4xcg" Mar 09 19:02:53 crc kubenswrapper[4821]: I0309 19:02:53.199214 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Mar 09 19:02:53 crc kubenswrapper[4821]: I0309 19:02:53.199539 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-mcsg7" Mar 09 19:02:53 crc kubenswrapper[4821]: I0309 19:02:53.207261 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-b4xcg"] Mar 09 19:02:53 crc kubenswrapper[4821]: I0309 19:02:53.250591 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bd9d8599-271a-4e48-bf76-a80f0b416949","Type":"ContainerStarted","Data":"6c25b3f3a01eba4fd2db6046d4565ce4e66bb99450579f48b4f5c0b31b6d870d"} Mar 09 19:02:53 crc kubenswrapper[4821]: I0309 19:02:53.250794 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:02:53 crc kubenswrapper[4821]: I0309 19:02:53.276644 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.4399310650000001 podStartE2EDuration="6.276628048s" podCreationTimestamp="2026-03-09 19:02:47 +0000 UTC" firstStartedPulling="2026-03-09 19:02:48.142609637 +0000 UTC m=+2305.303985493" lastFinishedPulling="2026-03-09 19:02:52.97930662 +0000 UTC m=+2310.140682476" observedRunningTime="2026-03-09 19:02:53.270092151 +0000 UTC m=+2310.431468007" watchObservedRunningTime="2026-03-09 19:02:53.276628048 +0000 UTC m=+2310.438003904" Mar 09 19:02:53 crc kubenswrapper[4821]: I0309 19:02:53.391411 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/f9dcd828-4752-4369-8404-2baa9d1d28e1-db-sync-config-data\") pod \"watcher-kuttl-db-sync-b4xcg\" (UID: \"f9dcd828-4752-4369-8404-2baa9d1d28e1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b4xcg" Mar 09 19:02:53 crc kubenswrapper[4821]: I0309 19:02:53.391503 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9dcd828-4752-4369-8404-2baa9d1d28e1-config-data\") pod \"watcher-kuttl-db-sync-b4xcg\" (UID: \"f9dcd828-4752-4369-8404-2baa9d1d28e1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b4xcg" Mar 09 19:02:53 crc kubenswrapper[4821]: I0309 19:02:53.391589 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9dcd828-4752-4369-8404-2baa9d1d28e1-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-b4xcg\" (UID: \"f9dcd828-4752-4369-8404-2baa9d1d28e1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b4xcg" Mar 09 19:02:53 crc kubenswrapper[4821]: I0309 19:02:53.391622 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc57v\" (UniqueName: \"kubernetes.io/projected/f9dcd828-4752-4369-8404-2baa9d1d28e1-kube-api-access-xc57v\") pod \"watcher-kuttl-db-sync-b4xcg\" (UID: \"f9dcd828-4752-4369-8404-2baa9d1d28e1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b4xcg" Mar 09 19:02:53 crc kubenswrapper[4821]: I0309 19:02:53.493430 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9dcd828-4752-4369-8404-2baa9d1d28e1-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-b4xcg\" (UID: \"f9dcd828-4752-4369-8404-2baa9d1d28e1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b4xcg" Mar 09 19:02:53 crc kubenswrapper[4821]: I0309 19:02:53.493464 4821 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xc57v\" (UniqueName: \"kubernetes.io/projected/f9dcd828-4752-4369-8404-2baa9d1d28e1-kube-api-access-xc57v\") pod \"watcher-kuttl-db-sync-b4xcg\" (UID: \"f9dcd828-4752-4369-8404-2baa9d1d28e1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b4xcg" Mar 09 19:02:53 crc kubenswrapper[4821]: I0309 19:02:53.493544 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f9dcd828-4752-4369-8404-2baa9d1d28e1-db-sync-config-data\") pod \"watcher-kuttl-db-sync-b4xcg\" (UID: \"f9dcd828-4752-4369-8404-2baa9d1d28e1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b4xcg" Mar 09 19:02:53 crc kubenswrapper[4821]: I0309 19:02:53.493577 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9dcd828-4752-4369-8404-2baa9d1d28e1-config-data\") pod \"watcher-kuttl-db-sync-b4xcg\" (UID: \"f9dcd828-4752-4369-8404-2baa9d1d28e1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b4xcg" Mar 09 19:02:53 crc kubenswrapper[4821]: I0309 19:02:53.498966 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f9dcd828-4752-4369-8404-2baa9d1d28e1-db-sync-config-data\") pod \"watcher-kuttl-db-sync-b4xcg\" (UID: \"f9dcd828-4752-4369-8404-2baa9d1d28e1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b4xcg" Mar 09 19:02:53 crc kubenswrapper[4821]: I0309 19:02:53.499056 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9dcd828-4752-4369-8404-2baa9d1d28e1-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-b4xcg\" (UID: \"f9dcd828-4752-4369-8404-2baa9d1d28e1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b4xcg" Mar 09 19:02:53 crc kubenswrapper[4821]: I0309 19:02:53.504961 4821 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9dcd828-4752-4369-8404-2baa9d1d28e1-config-data\") pod \"watcher-kuttl-db-sync-b4xcg\" (UID: \"f9dcd828-4752-4369-8404-2baa9d1d28e1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b4xcg" Mar 09 19:02:53 crc kubenswrapper[4821]: I0309 19:02:53.517884 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc57v\" (UniqueName: \"kubernetes.io/projected/f9dcd828-4752-4369-8404-2baa9d1d28e1-kube-api-access-xc57v\") pod \"watcher-kuttl-db-sync-b4xcg\" (UID: \"f9dcd828-4752-4369-8404-2baa9d1d28e1\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b4xcg" Mar 09 19:02:53 crc kubenswrapper[4821]: I0309 19:02:53.811347 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-b4xcg" Mar 09 19:02:54 crc kubenswrapper[4821]: I0309 19:02:54.318548 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-b4xcg"] Mar 09 19:02:54 crc kubenswrapper[4821]: W0309 19:02:54.336388 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9dcd828_4752_4369_8404_2baa9d1d28e1.slice/crio-2c5b4c63035b5d1c243ecee65b0f93e113d852e00683818eb357291b9d3fe7f2 WatchSource:0}: Error finding container 2c5b4c63035b5d1c243ecee65b0f93e113d852e00683818eb357291b9d3fe7f2: Status 404 returned error can't find the container with id 2c5b4c63035b5d1c243ecee65b0f93e113d852e00683818eb357291b9d3fe7f2 Mar 09 19:02:55 crc kubenswrapper[4821]: I0309 19:02:55.275278 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-b4xcg" event={"ID":"f9dcd828-4752-4369-8404-2baa9d1d28e1","Type":"ContainerStarted","Data":"6930d9264f7daf08da5ebe160cbeba30c0e39badcfa9a0070e5975bd1d936f96"} Mar 09 19:02:55 crc kubenswrapper[4821]: 
I0309 19:02:55.275575 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-b4xcg" event={"ID":"f9dcd828-4752-4369-8404-2baa9d1d28e1","Type":"ContainerStarted","Data":"2c5b4c63035b5d1c243ecee65b0f93e113d852e00683818eb357291b9d3fe7f2"} Mar 09 19:02:55 crc kubenswrapper[4821]: I0309 19:02:55.296344 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-b4xcg" podStartSLOduration=2.296309009 podStartE2EDuration="2.296309009s" podCreationTimestamp="2026-03-09 19:02:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:02:55.291963712 +0000 UTC m=+2312.453339568" watchObservedRunningTime="2026-03-09 19:02:55.296309009 +0000 UTC m=+2312.457684865" Mar 09 19:02:57 crc kubenswrapper[4821]: I0309 19:02:57.296367 4821 generic.go:334] "Generic (PLEG): container finished" podID="f9dcd828-4752-4369-8404-2baa9d1d28e1" containerID="6930d9264f7daf08da5ebe160cbeba30c0e39badcfa9a0070e5975bd1d936f96" exitCode=0 Mar 09 19:02:57 crc kubenswrapper[4821]: I0309 19:02:57.296450 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-b4xcg" event={"ID":"f9dcd828-4752-4369-8404-2baa9d1d28e1","Type":"ContainerDied","Data":"6930d9264f7daf08da5ebe160cbeba30c0e39badcfa9a0070e5975bd1d936f96"} Mar 09 19:02:58 crc kubenswrapper[4821]: I0309 19:02:58.713825 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-b4xcg" Mar 09 19:02:58 crc kubenswrapper[4821]: I0309 19:02:58.872067 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xc57v\" (UniqueName: \"kubernetes.io/projected/f9dcd828-4752-4369-8404-2baa9d1d28e1-kube-api-access-xc57v\") pod \"f9dcd828-4752-4369-8404-2baa9d1d28e1\" (UID: \"f9dcd828-4752-4369-8404-2baa9d1d28e1\") " Mar 09 19:02:58 crc kubenswrapper[4821]: I0309 19:02:58.872114 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f9dcd828-4752-4369-8404-2baa9d1d28e1-db-sync-config-data\") pod \"f9dcd828-4752-4369-8404-2baa9d1d28e1\" (UID: \"f9dcd828-4752-4369-8404-2baa9d1d28e1\") " Mar 09 19:02:58 crc kubenswrapper[4821]: I0309 19:02:58.872139 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9dcd828-4752-4369-8404-2baa9d1d28e1-config-data\") pod \"f9dcd828-4752-4369-8404-2baa9d1d28e1\" (UID: \"f9dcd828-4752-4369-8404-2baa9d1d28e1\") " Mar 09 19:02:58 crc kubenswrapper[4821]: I0309 19:02:58.872390 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9dcd828-4752-4369-8404-2baa9d1d28e1-combined-ca-bundle\") pod \"f9dcd828-4752-4369-8404-2baa9d1d28e1\" (UID: \"f9dcd828-4752-4369-8404-2baa9d1d28e1\") " Mar 09 19:02:58 crc kubenswrapper[4821]: I0309 19:02:58.891387 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9dcd828-4752-4369-8404-2baa9d1d28e1-kube-api-access-xc57v" (OuterVolumeSpecName: "kube-api-access-xc57v") pod "f9dcd828-4752-4369-8404-2baa9d1d28e1" (UID: "f9dcd828-4752-4369-8404-2baa9d1d28e1"). InnerVolumeSpecName "kube-api-access-xc57v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:02:58 crc kubenswrapper[4821]: I0309 19:02:58.907829 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9dcd828-4752-4369-8404-2baa9d1d28e1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f9dcd828-4752-4369-8404-2baa9d1d28e1" (UID: "f9dcd828-4752-4369-8404-2baa9d1d28e1"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:02:58 crc kubenswrapper[4821]: I0309 19:02:58.948819 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9dcd828-4752-4369-8404-2baa9d1d28e1-config-data" (OuterVolumeSpecName: "config-data") pod "f9dcd828-4752-4369-8404-2baa9d1d28e1" (UID: "f9dcd828-4752-4369-8404-2baa9d1d28e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:02:58 crc kubenswrapper[4821]: I0309 19:02:58.953582 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9dcd828-4752-4369-8404-2baa9d1d28e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9dcd828-4752-4369-8404-2baa9d1d28e1" (UID: "f9dcd828-4752-4369-8404-2baa9d1d28e1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:02:58 crc kubenswrapper[4821]: I0309 19:02:58.974577 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9dcd828-4752-4369-8404-2baa9d1d28e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:58 crc kubenswrapper[4821]: I0309 19:02:58.974625 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xc57v\" (UniqueName: \"kubernetes.io/projected/f9dcd828-4752-4369-8404-2baa9d1d28e1-kube-api-access-xc57v\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:58 crc kubenswrapper[4821]: I0309 19:02:58.974637 4821 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f9dcd828-4752-4369-8404-2baa9d1d28e1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:58 crc kubenswrapper[4821]: I0309 19:02:58.974647 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9dcd828-4752-4369-8404-2baa9d1d28e1-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.316903 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-b4xcg" event={"ID":"f9dcd828-4752-4369-8404-2baa9d1d28e1","Type":"ContainerDied","Data":"2c5b4c63035b5d1c243ecee65b0f93e113d852e00683818eb357291b9d3fe7f2"} Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.317358 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c5b4c63035b5d1c243ecee65b0f93e113d852e00683818eb357291b9d3fe7f2" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.317181 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-b4xcg" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.625797 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:02:59 crc kubenswrapper[4821]: E0309 19:02:59.626099 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9dcd828-4752-4369-8404-2baa9d1d28e1" containerName="watcher-kuttl-db-sync" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.626116 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9dcd828-4752-4369-8404-2baa9d1d28e1" containerName="watcher-kuttl-db-sync" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.626252 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9dcd828-4752-4369-8404-2baa9d1d28e1" containerName="watcher-kuttl-db-sync" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.626771 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.629168 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.636663 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.637920 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.643484 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.645727 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-mcsg7" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.646345 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.646601 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.646830 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.656888 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.740369 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.741473 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.744077 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.750117 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.799349 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/218746cc-be8f-46c9-8c0d-c2256ad6b705-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"218746cc-be8f-46c9-8c0d-c2256ad6b705\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.799390 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl8v5\" (UniqueName: \"kubernetes.io/projected/532b607d-3792-4170-9eb9-626dd085ef32-kube-api-access-cl8v5\") pod \"watcher-kuttl-applier-0\" (UID: \"532b607d-3792-4170-9eb9-626dd085ef32\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.799461 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f738eca5-2be8-455a-9c85-f2cae97d41e5-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f738eca5-2be8-455a-9c85-f2cae97d41e5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.799522 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f738eca5-2be8-455a-9c85-f2cae97d41e5-config-data\") pod \"watcher-kuttl-api-0\" (UID: 
\"f738eca5-2be8-455a-9c85-f2cae97d41e5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.799568 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/218746cc-be8f-46c9-8c0d-c2256ad6b705-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"218746cc-be8f-46c9-8c0d-c2256ad6b705\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.799608 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f738eca5-2be8-455a-9c85-f2cae97d41e5-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f738eca5-2be8-455a-9c85-f2cae97d41e5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.799637 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/532b607d-3792-4170-9eb9-626dd085ef32-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"532b607d-3792-4170-9eb9-626dd085ef32\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.799721 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f738eca5-2be8-455a-9c85-f2cae97d41e5-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"f738eca5-2be8-455a-9c85-f2cae97d41e5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.799758 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/532b607d-3792-4170-9eb9-626dd085ef32-config-data\") pod 
\"watcher-kuttl-applier-0\" (UID: \"532b607d-3792-4170-9eb9-626dd085ef32\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.799775 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/532b607d-3792-4170-9eb9-626dd085ef32-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"532b607d-3792-4170-9eb9-626dd085ef32\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.799817 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/218746cc-be8f-46c9-8c0d-c2256ad6b705-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"218746cc-be8f-46c9-8c0d-c2256ad6b705\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.799840 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f738eca5-2be8-455a-9c85-f2cae97d41e5-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f738eca5-2be8-455a-9c85-f2cae97d41e5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.799860 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/218746cc-be8f-46c9-8c0d-c2256ad6b705-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"218746cc-be8f-46c9-8c0d-c2256ad6b705\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.799917 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f738eca5-2be8-455a-9c85-f2cae97d41e5-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"f738eca5-2be8-455a-9c85-f2cae97d41e5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.799952 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdvqb\" (UniqueName: \"kubernetes.io/projected/f738eca5-2be8-455a-9c85-f2cae97d41e5-kube-api-access-bdvqb\") pod \"watcher-kuttl-api-0\" (UID: \"f738eca5-2be8-455a-9c85-f2cae97d41e5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.799973 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55vjp\" (UniqueName: \"kubernetes.io/projected/218746cc-be8f-46c9-8c0d-c2256ad6b705-kube-api-access-55vjp\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"218746cc-be8f-46c9-8c0d-c2256ad6b705\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.900564 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f738eca5-2be8-455a-9c85-f2cae97d41e5-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"f738eca5-2be8-455a-9c85-f2cae97d41e5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.900619 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/532b607d-3792-4170-9eb9-626dd085ef32-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"532b607d-3792-4170-9eb9-626dd085ef32\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.900637 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/532b607d-3792-4170-9eb9-626dd085ef32-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"532b607d-3792-4170-9eb9-626dd085ef32\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.900660 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/218746cc-be8f-46c9-8c0d-c2256ad6b705-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"218746cc-be8f-46c9-8c0d-c2256ad6b705\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.900682 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f738eca5-2be8-455a-9c85-f2cae97d41e5-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f738eca5-2be8-455a-9c85-f2cae97d41e5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.900700 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/218746cc-be8f-46c9-8c0d-c2256ad6b705-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"218746cc-be8f-46c9-8c0d-c2256ad6b705\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.900720 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f738eca5-2be8-455a-9c85-f2cae97d41e5-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"f738eca5-2be8-455a-9c85-f2cae97d41e5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.900739 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdvqb\" (UniqueName: 
\"kubernetes.io/projected/f738eca5-2be8-455a-9c85-f2cae97d41e5-kube-api-access-bdvqb\") pod \"watcher-kuttl-api-0\" (UID: \"f738eca5-2be8-455a-9c85-f2cae97d41e5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.900755 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55vjp\" (UniqueName: \"kubernetes.io/projected/218746cc-be8f-46c9-8c0d-c2256ad6b705-kube-api-access-55vjp\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"218746cc-be8f-46c9-8c0d-c2256ad6b705\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.900803 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/218746cc-be8f-46c9-8c0d-c2256ad6b705-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"218746cc-be8f-46c9-8c0d-c2256ad6b705\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.900821 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl8v5\" (UniqueName: \"kubernetes.io/projected/532b607d-3792-4170-9eb9-626dd085ef32-kube-api-access-cl8v5\") pod \"watcher-kuttl-applier-0\" (UID: \"532b607d-3792-4170-9eb9-626dd085ef32\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.900844 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f738eca5-2be8-455a-9c85-f2cae97d41e5-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f738eca5-2be8-455a-9c85-f2cae97d41e5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.900871 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f738eca5-2be8-455a-9c85-f2cae97d41e5-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f738eca5-2be8-455a-9c85-f2cae97d41e5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.900889 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/218746cc-be8f-46c9-8c0d-c2256ad6b705-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"218746cc-be8f-46c9-8c0d-c2256ad6b705\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.900909 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f738eca5-2be8-455a-9c85-f2cae97d41e5-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f738eca5-2be8-455a-9c85-f2cae97d41e5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.900924 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/532b607d-3792-4170-9eb9-626dd085ef32-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"532b607d-3792-4170-9eb9-626dd085ef32\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.901224 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/532b607d-3792-4170-9eb9-626dd085ef32-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"532b607d-3792-4170-9eb9-626dd085ef32\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.902423 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f738eca5-2be8-455a-9c85-f2cae97d41e5-logs\") pod \"watcher-kuttl-api-0\" (UID: 
\"f738eca5-2be8-455a-9c85-f2cae97d41e5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.905225 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f738eca5-2be8-455a-9c85-f2cae97d41e5-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"f738eca5-2be8-455a-9c85-f2cae97d41e5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.905558 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/218746cc-be8f-46c9-8c0d-c2256ad6b705-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"218746cc-be8f-46c9-8c0d-c2256ad6b705\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.905709 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/218746cc-be8f-46c9-8c0d-c2256ad6b705-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"218746cc-be8f-46c9-8c0d-c2256ad6b705\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.906576 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/532b607d-3792-4170-9eb9-626dd085ef32-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"532b607d-3792-4170-9eb9-626dd085ef32\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.908719 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f738eca5-2be8-455a-9c85-f2cae97d41e5-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f738eca5-2be8-455a-9c85-f2cae97d41e5\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.921525 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.921585 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.921640 4821 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.921938 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/218746cc-be8f-46c9-8c0d-c2256ad6b705-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"218746cc-be8f-46c9-8c0d-c2256ad6b705\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.922180 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f738eca5-2be8-455a-9c85-f2cae97d41e5-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"f738eca5-2be8-455a-9c85-f2cae97d41e5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.922309 4821 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"f6f924e73c0d96463d23d74c00c469a04a44dafcfce63f7df228acf99a8d74b6"} pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.922393 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" containerID="cri-o://f6f924e73c0d96463d23d74c00c469a04a44dafcfce63f7df228acf99a8d74b6" gracePeriod=600 Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.922540 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f738eca5-2be8-455a-9c85-f2cae97d41e5-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f738eca5-2be8-455a-9c85-f2cae97d41e5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.922807 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f738eca5-2be8-455a-9c85-f2cae97d41e5-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f738eca5-2be8-455a-9c85-f2cae97d41e5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.923598 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/218746cc-be8f-46c9-8c0d-c2256ad6b705-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"218746cc-be8f-46c9-8c0d-c2256ad6b705\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.924608 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/532b607d-3792-4170-9eb9-626dd085ef32-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"532b607d-3792-4170-9eb9-626dd085ef32\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.926975 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl8v5\" (UniqueName: \"kubernetes.io/projected/532b607d-3792-4170-9eb9-626dd085ef32-kube-api-access-cl8v5\") pod \"watcher-kuttl-applier-0\" (UID: \"532b607d-3792-4170-9eb9-626dd085ef32\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.928286 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdvqb\" (UniqueName: \"kubernetes.io/projected/f738eca5-2be8-455a-9c85-f2cae97d41e5-kube-api-access-bdvqb\") pod \"watcher-kuttl-api-0\" (UID: \"f738eca5-2be8-455a-9c85-f2cae97d41e5\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.929731 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55vjp\" (UniqueName: \"kubernetes.io/projected/218746cc-be8f-46c9-8c0d-c2256ad6b705-kube-api-access-55vjp\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"218746cc-be8f-46c9-8c0d-c2256ad6b705\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.941113 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:02:59 crc kubenswrapper[4821]: I0309 19:02:59.958482 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:00 crc kubenswrapper[4821]: I0309 19:03:00.064767 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:03:00 crc kubenswrapper[4821]: I0309 19:03:00.330094 4821 generic.go:334] "Generic (PLEG): container finished" podID="3270571a-a484-4e66-8035-f43509b58add" containerID="f6f924e73c0d96463d23d74c00c469a04a44dafcfce63f7df228acf99a8d74b6" exitCode=0 Mar 09 19:03:00 crc kubenswrapper[4821]: I0309 19:03:00.330123 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" event={"ID":"3270571a-a484-4e66-8035-f43509b58add","Type":"ContainerDied","Data":"f6f924e73c0d96463d23d74c00c469a04a44dafcfce63f7df228acf99a8d74b6"} Mar 09 19:03:00 crc kubenswrapper[4821]: I0309 19:03:00.330363 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" event={"ID":"3270571a-a484-4e66-8035-f43509b58add","Type":"ContainerStarted","Data":"2e97656a0c5c8ccdf249fb34ee3c3f4cd8501de8fced67c6294f9be90f6444d8"} Mar 09 19:03:00 crc kubenswrapper[4821]: I0309 19:03:00.330380 4821 scope.go:117] "RemoveContainer" containerID="26f65c4ab4c3d28a2350762ba3a27988c93777a5c666f215bce4b187591f0d4c" Mar 09 19:03:00 crc kubenswrapper[4821]: I0309 19:03:00.457339 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:03:00 crc kubenswrapper[4821]: W0309 19:03:00.461436 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod218746cc_be8f_46c9_8c0d_c2256ad6b705.slice/crio-9dfc79e37fd8bf9d972fac6927b1f0b2467e237032be1d221daa768e7e3d70d6 WatchSource:0}: Error finding container 9dfc79e37fd8bf9d972fac6927b1f0b2467e237032be1d221daa768e7e3d70d6: Status 404 returned error can't find the container with id 9dfc79e37fd8bf9d972fac6927b1f0b2467e237032be1d221daa768e7e3d70d6 Mar 09 19:03:00 crc kubenswrapper[4821]: I0309 19:03:00.549137 4821 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:03:00 crc kubenswrapper[4821]: I0309 19:03:00.653944 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:03:00 crc kubenswrapper[4821]: W0309 19:03:00.669371 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod532b607d_3792_4170_9eb9_626dd085ef32.slice/crio-9829e1f92ef69a6394d63a88c69e304cdc92c99e78aad1deb005d6c5b1a7b937 WatchSource:0}: Error finding container 9829e1f92ef69a6394d63a88c69e304cdc92c99e78aad1deb005d6c5b1a7b937: Status 404 returned error can't find the container with id 9829e1f92ef69a6394d63a88c69e304cdc92c99e78aad1deb005d6c5b1a7b937 Mar 09 19:03:01 crc kubenswrapper[4821]: I0309 19:03:01.341402 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f738eca5-2be8-455a-9c85-f2cae97d41e5","Type":"ContainerStarted","Data":"61f08aaf697ccbbb9f3e5dc690589adf4221dd5e42d49a83aecdc8defd4804b5"} Mar 09 19:03:01 crc kubenswrapper[4821]: I0309 19:03:01.341690 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f738eca5-2be8-455a-9c85-f2cae97d41e5","Type":"ContainerStarted","Data":"7b4236b8e98f597693f03d2dd469761df4a418cb397aa653562c2a5d3188d78a"} Mar 09 19:03:01 crc kubenswrapper[4821]: I0309 19:03:01.341862 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:01 crc kubenswrapper[4821]: I0309 19:03:01.341891 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f738eca5-2be8-455a-9c85-f2cae97d41e5","Type":"ContainerStarted","Data":"05452d7b27bb12e30afd8953b50197e960812ad38aef35ec75642b8721ac26cd"} Mar 09 19:03:01 crc kubenswrapper[4821]: I0309 19:03:01.343803 4821 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"218746cc-be8f-46c9-8c0d-c2256ad6b705","Type":"ContainerStarted","Data":"f54ab9c7bf5c668a0722d331e3557917d0d7d2ef0099a9667411a0a5dc912a99"} Mar 09 19:03:01 crc kubenswrapper[4821]: I0309 19:03:01.343827 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"218746cc-be8f-46c9-8c0d-c2256ad6b705","Type":"ContainerStarted","Data":"9dfc79e37fd8bf9d972fac6927b1f0b2467e237032be1d221daa768e7e3d70d6"} Mar 09 19:03:01 crc kubenswrapper[4821]: I0309 19:03:01.345625 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"532b607d-3792-4170-9eb9-626dd085ef32","Type":"ContainerStarted","Data":"d0fe8d23fd6725d4a6c9454636b1296c147e3f7110f395da18e7df8d840d8c9d"} Mar 09 19:03:01 crc kubenswrapper[4821]: I0309 19:03:01.345676 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"532b607d-3792-4170-9eb9-626dd085ef32","Type":"ContainerStarted","Data":"9829e1f92ef69a6394d63a88c69e304cdc92c99e78aad1deb005d6c5b1a7b937"} Mar 09 19:03:01 crc kubenswrapper[4821]: I0309 19:03:01.369929 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.369907038 podStartE2EDuration="2.369907038s" podCreationTimestamp="2026-03-09 19:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:03:01.364058659 +0000 UTC m=+2318.525434515" watchObservedRunningTime="2026-03-09 19:03:01.369907038 +0000 UTC m=+2318.531282884" Mar 09 19:03:01 crc kubenswrapper[4821]: I0309 19:03:01.382385 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.382361586 podStartE2EDuration="2.382361586s" podCreationTimestamp="2026-03-09 19:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:03:01.380157836 +0000 UTC m=+2318.541533692" watchObservedRunningTime="2026-03-09 19:03:01.382361586 +0000 UTC m=+2318.543737452" Mar 09 19:03:01 crc kubenswrapper[4821]: I0309 19:03:01.415592 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.415573916 podStartE2EDuration="2.415573916s" podCreationTimestamp="2026-03-09 19:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:03:01.395905573 +0000 UTC m=+2318.557281429" watchObservedRunningTime="2026-03-09 19:03:01.415573916 +0000 UTC m=+2318.576949772" Mar 09 19:03:03 crc kubenswrapper[4821]: I0309 19:03:03.413912 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:04 crc kubenswrapper[4821]: I0309 19:03:04.958912 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:05 crc kubenswrapper[4821]: I0309 19:03:05.065563 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:03:09 crc kubenswrapper[4821]: I0309 19:03:09.942377 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:03:09 crc kubenswrapper[4821]: I0309 19:03:09.959285 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:09 crc 
kubenswrapper[4821]: I0309 19:03:09.970279 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:09 crc kubenswrapper[4821]: I0309 19:03:09.973241 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:03:10 crc kubenswrapper[4821]: I0309 19:03:10.065535 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:03:10 crc kubenswrapper[4821]: I0309 19:03:10.092120 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:03:10 crc kubenswrapper[4821]: I0309 19:03:10.431566 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:03:10 crc kubenswrapper[4821]: I0309 19:03:10.445894 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:10 crc kubenswrapper[4821]: I0309 19:03:10.454692 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:03:10 crc kubenswrapper[4821]: I0309 19:03:10.470403 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:03:12 crc kubenswrapper[4821]: I0309 19:03:12.646863 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:03:12 crc kubenswrapper[4821]: I0309 19:03:12.647702 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="bd9d8599-271a-4e48-bf76-a80f0b416949" containerName="ceilometer-central-agent" 
containerID="cri-o://1e2d4f30c2ee0f56954078d46590027087e308014b74b16bb59a27278095b233" gracePeriod=30 Mar 09 19:03:12 crc kubenswrapper[4821]: I0309 19:03:12.648450 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="bd9d8599-271a-4e48-bf76-a80f0b416949" containerName="proxy-httpd" containerID="cri-o://6c25b3f3a01eba4fd2db6046d4565ce4e66bb99450579f48b4f5c0b31b6d870d" gracePeriod=30 Mar 09 19:03:12 crc kubenswrapper[4821]: I0309 19:03:12.648546 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="bd9d8599-271a-4e48-bf76-a80f0b416949" containerName="sg-core" containerID="cri-o://350d9810b84b15f86944ce20cc010800452e84fd2f1fe9f2222a1b41e470374d" gracePeriod=30 Mar 09 19:03:12 crc kubenswrapper[4821]: I0309 19:03:12.648598 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="bd9d8599-271a-4e48-bf76-a80f0b416949" containerName="ceilometer-notification-agent" containerID="cri-o://dbe587374c89fec4bcaceb823481255234bc269c44d605917010f08affbec076" gracePeriod=30 Mar 09 19:03:12 crc kubenswrapper[4821]: I0309 19:03:12.662543 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="bd9d8599-271a-4e48-bf76-a80f0b416949" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.191:3000/\": EOF" Mar 09 19:03:13 crc kubenswrapper[4821]: I0309 19:03:13.453151 4821 generic.go:334] "Generic (PLEG): container finished" podID="bd9d8599-271a-4e48-bf76-a80f0b416949" containerID="6c25b3f3a01eba4fd2db6046d4565ce4e66bb99450579f48b4f5c0b31b6d870d" exitCode=0 Mar 09 19:03:13 crc kubenswrapper[4821]: I0309 19:03:13.453464 4821 generic.go:334] "Generic (PLEG): container finished" podID="bd9d8599-271a-4e48-bf76-a80f0b416949" containerID="350d9810b84b15f86944ce20cc010800452e84fd2f1fe9f2222a1b41e470374d" exitCode=2 
Mar 09 19:03:13 crc kubenswrapper[4821]: I0309 19:03:13.453477 4821 generic.go:334] "Generic (PLEG): container finished" podID="bd9d8599-271a-4e48-bf76-a80f0b416949" containerID="1e2d4f30c2ee0f56954078d46590027087e308014b74b16bb59a27278095b233" exitCode=0 Mar 09 19:03:13 crc kubenswrapper[4821]: I0309 19:03:13.453190 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bd9d8599-271a-4e48-bf76-a80f0b416949","Type":"ContainerDied","Data":"6c25b3f3a01eba4fd2db6046d4565ce4e66bb99450579f48b4f5c0b31b6d870d"} Mar 09 19:03:13 crc kubenswrapper[4821]: I0309 19:03:13.453512 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bd9d8599-271a-4e48-bf76-a80f0b416949","Type":"ContainerDied","Data":"350d9810b84b15f86944ce20cc010800452e84fd2f1fe9f2222a1b41e470374d"} Mar 09 19:03:13 crc kubenswrapper[4821]: I0309 19:03:13.453529 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bd9d8599-271a-4e48-bf76-a80f0b416949","Type":"ContainerDied","Data":"1e2d4f30c2ee0f56954078d46590027087e308014b74b16bb59a27278095b233"} Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.460497 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.462687 4821 generic.go:334] "Generic (PLEG): container finished" podID="bd9d8599-271a-4e48-bf76-a80f0b416949" containerID="dbe587374c89fec4bcaceb823481255234bc269c44d605917010f08affbec076" exitCode=0 Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.462724 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bd9d8599-271a-4e48-bf76-a80f0b416949","Type":"ContainerDied","Data":"dbe587374c89fec4bcaceb823481255234bc269c44d605917010f08affbec076"} Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.462747 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bd9d8599-271a-4e48-bf76-a80f0b416949","Type":"ContainerDied","Data":"3df6625abeb04fd52711f649e3cad550ef9342f75783248251e8695865e27818"} Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.462763 4821 scope.go:117] "RemoveContainer" containerID="6c25b3f3a01eba4fd2db6046d4565ce4e66bb99450579f48b4f5c0b31b6d870d" Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.482274 4821 scope.go:117] "RemoveContainer" containerID="350d9810b84b15f86944ce20cc010800452e84fd2f1fe9f2222a1b41e470374d" Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.485249 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd9d8599-271a-4e48-bf76-a80f0b416949-log-httpd\") pod \"bd9d8599-271a-4e48-bf76-a80f0b416949\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.485288 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd9d8599-271a-4e48-bf76-a80f0b416949-combined-ca-bundle\") pod \"bd9d8599-271a-4e48-bf76-a80f0b416949\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " 
Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.485306 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd9d8599-271a-4e48-bf76-a80f0b416949-config-data\") pod \"bd9d8599-271a-4e48-bf76-a80f0b416949\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.485400 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd9d8599-271a-4e48-bf76-a80f0b416949-sg-core-conf-yaml\") pod \"bd9d8599-271a-4e48-bf76-a80f0b416949\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.485450 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd9d8599-271a-4e48-bf76-a80f0b416949-scripts\") pod \"bd9d8599-271a-4e48-bf76-a80f0b416949\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.485497 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxxks\" (UniqueName: \"kubernetes.io/projected/bd9d8599-271a-4e48-bf76-a80f0b416949-kube-api-access-qxxks\") pod \"bd9d8599-271a-4e48-bf76-a80f0b416949\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.485514 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd9d8599-271a-4e48-bf76-a80f0b416949-ceilometer-tls-certs\") pod \"bd9d8599-271a-4e48-bf76-a80f0b416949\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.485578 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/bd9d8599-271a-4e48-bf76-a80f0b416949-run-httpd\") pod \"bd9d8599-271a-4e48-bf76-a80f0b416949\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.485878 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd9d8599-271a-4e48-bf76-a80f0b416949-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bd9d8599-271a-4e48-bf76-a80f0b416949" (UID: "bd9d8599-271a-4e48-bf76-a80f0b416949"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.486089 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd9d8599-271a-4e48-bf76-a80f0b416949-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bd9d8599-271a-4e48-bf76-a80f0b416949" (UID: "bd9d8599-271a-4e48-bf76-a80f0b416949"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.491692 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd9d8599-271a-4e48-bf76-a80f0b416949-kube-api-access-qxxks" (OuterVolumeSpecName: "kube-api-access-qxxks") pod "bd9d8599-271a-4e48-bf76-a80f0b416949" (UID: "bd9d8599-271a-4e48-bf76-a80f0b416949"). InnerVolumeSpecName "kube-api-access-qxxks". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.520811 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd9d8599-271a-4e48-bf76-a80f0b416949-scripts" (OuterVolumeSpecName: "scripts") pod "bd9d8599-271a-4e48-bf76-a80f0b416949" (UID: "bd9d8599-271a-4e48-bf76-a80f0b416949"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.537722 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd9d8599-271a-4e48-bf76-a80f0b416949-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "bd9d8599-271a-4e48-bf76-a80f0b416949" (UID: "bd9d8599-271a-4e48-bf76-a80f0b416949"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.538494 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd9d8599-271a-4e48-bf76-a80f0b416949-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bd9d8599-271a-4e48-bf76-a80f0b416949" (UID: "bd9d8599-271a-4e48-bf76-a80f0b416949"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.563041 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd9d8599-271a-4e48-bf76-a80f0b416949-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd9d8599-271a-4e48-bf76-a80f0b416949" (UID: "bd9d8599-271a-4e48-bf76-a80f0b416949"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.586564 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd9d8599-271a-4e48-bf76-a80f0b416949-config-data" (OuterVolumeSpecName: "config-data") pod "bd9d8599-271a-4e48-bf76-a80f0b416949" (UID: "bd9d8599-271a-4e48-bf76-a80f0b416949"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.586876 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd9d8599-271a-4e48-bf76-a80f0b416949-config-data\") pod \"bd9d8599-271a-4e48-bf76-a80f0b416949\" (UID: \"bd9d8599-271a-4e48-bf76-a80f0b416949\") " Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.587280 4821 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd9d8599-271a-4e48-bf76-a80f0b416949-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.587297 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd9d8599-271a-4e48-bf76-a80f0b416949-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.587307 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxxks\" (UniqueName: \"kubernetes.io/projected/bd9d8599-271a-4e48-bf76-a80f0b416949-kube-api-access-qxxks\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.587337 4821 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd9d8599-271a-4e48-bf76-a80f0b416949-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.587348 4821 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd9d8599-271a-4e48-bf76-a80f0b416949-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.587359 4821 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd9d8599-271a-4e48-bf76-a80f0b416949-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 
09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.587369 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd9d8599-271a-4e48-bf76-a80f0b416949-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:14 crc kubenswrapper[4821]: W0309 19:03:14.587623 4821 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/bd9d8599-271a-4e48-bf76-a80f0b416949/volumes/kubernetes.io~secret/config-data Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.587641 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd9d8599-271a-4e48-bf76-a80f0b416949-config-data" (OuterVolumeSpecName: "config-data") pod "bd9d8599-271a-4e48-bf76-a80f0b416949" (UID: "bd9d8599-271a-4e48-bf76-a80f0b416949"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.592611 4821 scope.go:117] "RemoveContainer" containerID="dbe587374c89fec4bcaceb823481255234bc269c44d605917010f08affbec076" Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.609856 4821 scope.go:117] "RemoveContainer" containerID="1e2d4f30c2ee0f56954078d46590027087e308014b74b16bb59a27278095b233" Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.628707 4821 scope.go:117] "RemoveContainer" containerID="6c25b3f3a01eba4fd2db6046d4565ce4e66bb99450579f48b4f5c0b31b6d870d" Mar 09 19:03:14 crc kubenswrapper[4821]: E0309 19:03:14.629102 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c25b3f3a01eba4fd2db6046d4565ce4e66bb99450579f48b4f5c0b31b6d870d\": container with ID starting with 6c25b3f3a01eba4fd2db6046d4565ce4e66bb99450579f48b4f5c0b31b6d870d not found: ID does not exist" containerID="6c25b3f3a01eba4fd2db6046d4565ce4e66bb99450579f48b4f5c0b31b6d870d" Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 
19:03:14.629155 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c25b3f3a01eba4fd2db6046d4565ce4e66bb99450579f48b4f5c0b31b6d870d"} err="failed to get container status \"6c25b3f3a01eba4fd2db6046d4565ce4e66bb99450579f48b4f5c0b31b6d870d\": rpc error: code = NotFound desc = could not find container \"6c25b3f3a01eba4fd2db6046d4565ce4e66bb99450579f48b4f5c0b31b6d870d\": container with ID starting with 6c25b3f3a01eba4fd2db6046d4565ce4e66bb99450579f48b4f5c0b31b6d870d not found: ID does not exist" Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.629190 4821 scope.go:117] "RemoveContainer" containerID="350d9810b84b15f86944ce20cc010800452e84fd2f1fe9f2222a1b41e470374d" Mar 09 19:03:14 crc kubenswrapper[4821]: E0309 19:03:14.629567 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"350d9810b84b15f86944ce20cc010800452e84fd2f1fe9f2222a1b41e470374d\": container with ID starting with 350d9810b84b15f86944ce20cc010800452e84fd2f1fe9f2222a1b41e470374d not found: ID does not exist" containerID="350d9810b84b15f86944ce20cc010800452e84fd2f1fe9f2222a1b41e470374d" Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.629612 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"350d9810b84b15f86944ce20cc010800452e84fd2f1fe9f2222a1b41e470374d"} err="failed to get container status \"350d9810b84b15f86944ce20cc010800452e84fd2f1fe9f2222a1b41e470374d\": rpc error: code = NotFound desc = could not find container \"350d9810b84b15f86944ce20cc010800452e84fd2f1fe9f2222a1b41e470374d\": container with ID starting with 350d9810b84b15f86944ce20cc010800452e84fd2f1fe9f2222a1b41e470374d not found: ID does not exist" Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.629639 4821 scope.go:117] "RemoveContainer" containerID="dbe587374c89fec4bcaceb823481255234bc269c44d605917010f08affbec076" Mar 09 19:03:14 crc 
kubenswrapper[4821]: E0309 19:03:14.629910 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbe587374c89fec4bcaceb823481255234bc269c44d605917010f08affbec076\": container with ID starting with dbe587374c89fec4bcaceb823481255234bc269c44d605917010f08affbec076 not found: ID does not exist" containerID="dbe587374c89fec4bcaceb823481255234bc269c44d605917010f08affbec076" Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.629950 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbe587374c89fec4bcaceb823481255234bc269c44d605917010f08affbec076"} err="failed to get container status \"dbe587374c89fec4bcaceb823481255234bc269c44d605917010f08affbec076\": rpc error: code = NotFound desc = could not find container \"dbe587374c89fec4bcaceb823481255234bc269c44d605917010f08affbec076\": container with ID starting with dbe587374c89fec4bcaceb823481255234bc269c44d605917010f08affbec076 not found: ID does not exist" Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.629973 4821 scope.go:117] "RemoveContainer" containerID="1e2d4f30c2ee0f56954078d46590027087e308014b74b16bb59a27278095b233" Mar 09 19:03:14 crc kubenswrapper[4821]: E0309 19:03:14.630796 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e2d4f30c2ee0f56954078d46590027087e308014b74b16bb59a27278095b233\": container with ID starting with 1e2d4f30c2ee0f56954078d46590027087e308014b74b16bb59a27278095b233 not found: ID does not exist" containerID="1e2d4f30c2ee0f56954078d46590027087e308014b74b16bb59a27278095b233" Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.630823 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e2d4f30c2ee0f56954078d46590027087e308014b74b16bb59a27278095b233"} err="failed to get container status 
\"1e2d4f30c2ee0f56954078d46590027087e308014b74b16bb59a27278095b233\": rpc error: code = NotFound desc = could not find container \"1e2d4f30c2ee0f56954078d46590027087e308014b74b16bb59a27278095b233\": container with ID starting with 1e2d4f30c2ee0f56954078d46590027087e308014b74b16bb59a27278095b233 not found: ID does not exist" Mar 09 19:03:14 crc kubenswrapper[4821]: I0309 19:03:14.688446 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd9d8599-271a-4e48-bf76-a80f0b416949-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.471996 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.505536 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.514877 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.530361 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:03:15 crc kubenswrapper[4821]: E0309 19:03:15.530754 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd9d8599-271a-4e48-bf76-a80f0b416949" containerName="sg-core" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.530779 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd9d8599-271a-4e48-bf76-a80f0b416949" containerName="sg-core" Mar 09 19:03:15 crc kubenswrapper[4821]: E0309 19:03:15.530791 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd9d8599-271a-4e48-bf76-a80f0b416949" containerName="proxy-httpd" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.530800 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd9d8599-271a-4e48-bf76-a80f0b416949" 
containerName="proxy-httpd" Mar 09 19:03:15 crc kubenswrapper[4821]: E0309 19:03:15.530816 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd9d8599-271a-4e48-bf76-a80f0b416949" containerName="ceilometer-central-agent" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.530824 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd9d8599-271a-4e48-bf76-a80f0b416949" containerName="ceilometer-central-agent" Mar 09 19:03:15 crc kubenswrapper[4821]: E0309 19:03:15.530842 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd9d8599-271a-4e48-bf76-a80f0b416949" containerName="ceilometer-notification-agent" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.530847 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd9d8599-271a-4e48-bf76-a80f0b416949" containerName="ceilometer-notification-agent" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.531008 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd9d8599-271a-4e48-bf76-a80f0b416949" containerName="sg-core" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.531022 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd9d8599-271a-4e48-bf76-a80f0b416949" containerName="ceilometer-notification-agent" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.531032 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd9d8599-271a-4e48-bf76-a80f0b416949" containerName="proxy-httpd" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.531046 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd9d8599-271a-4e48-bf76-a80f0b416949" containerName="ceilometer-central-agent" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.532641 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.537398 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.537547 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.537726 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.562236 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd9d8599-271a-4e48-bf76-a80f0b416949" path="/var/lib/kubelet/pods/bd9d8599-271a-4e48-bf76-a80f0b416949/volumes" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.564980 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.610306 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe83642f-cf96-4961-97f5-cfa5c0369567-log-httpd\") pod \"ceilometer-0\" (UID: \"fe83642f-cf96-4961-97f5-cfa5c0369567\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.610398 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe83642f-cf96-4961-97f5-cfa5c0369567-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fe83642f-cf96-4961-97f5-cfa5c0369567\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.610443 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmnmt\" (UniqueName: 
\"kubernetes.io/projected/fe83642f-cf96-4961-97f5-cfa5c0369567-kube-api-access-rmnmt\") pod \"ceilometer-0\" (UID: \"fe83642f-cf96-4961-97f5-cfa5c0369567\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.610476 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe83642f-cf96-4961-97f5-cfa5c0369567-config-data\") pod \"ceilometer-0\" (UID: \"fe83642f-cf96-4961-97f5-cfa5c0369567\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.610526 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe83642f-cf96-4961-97f5-cfa5c0369567-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe83642f-cf96-4961-97f5-cfa5c0369567\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.610798 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe83642f-cf96-4961-97f5-cfa5c0369567-scripts\") pod \"ceilometer-0\" (UID: \"fe83642f-cf96-4961-97f5-cfa5c0369567\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.610860 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe83642f-cf96-4961-97f5-cfa5c0369567-run-httpd\") pod \"ceilometer-0\" (UID: \"fe83642f-cf96-4961-97f5-cfa5c0369567\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.611015 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe83642f-cf96-4961-97f5-cfa5c0369567-combined-ca-bundle\") 
pod \"ceilometer-0\" (UID: \"fe83642f-cf96-4961-97f5-cfa5c0369567\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.713180 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe83642f-cf96-4961-97f5-cfa5c0369567-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fe83642f-cf96-4961-97f5-cfa5c0369567\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.713259 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmnmt\" (UniqueName: \"kubernetes.io/projected/fe83642f-cf96-4961-97f5-cfa5c0369567-kube-api-access-rmnmt\") pod \"ceilometer-0\" (UID: \"fe83642f-cf96-4961-97f5-cfa5c0369567\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.713291 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe83642f-cf96-4961-97f5-cfa5c0369567-config-data\") pod \"ceilometer-0\" (UID: \"fe83642f-cf96-4961-97f5-cfa5c0369567\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.713353 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe83642f-cf96-4961-97f5-cfa5c0369567-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe83642f-cf96-4961-97f5-cfa5c0369567\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.713405 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe83642f-cf96-4961-97f5-cfa5c0369567-scripts\") pod \"ceilometer-0\" (UID: \"fe83642f-cf96-4961-97f5-cfa5c0369567\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:03:15 crc kubenswrapper[4821]: 
I0309 19:03:15.713427 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe83642f-cf96-4961-97f5-cfa5c0369567-run-httpd\") pod \"ceilometer-0\" (UID: \"fe83642f-cf96-4961-97f5-cfa5c0369567\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.713482 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe83642f-cf96-4961-97f5-cfa5c0369567-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe83642f-cf96-4961-97f5-cfa5c0369567\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.713525 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe83642f-cf96-4961-97f5-cfa5c0369567-log-httpd\") pod \"ceilometer-0\" (UID: \"fe83642f-cf96-4961-97f5-cfa5c0369567\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.714221 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe83642f-cf96-4961-97f5-cfa5c0369567-log-httpd\") pod \"ceilometer-0\" (UID: \"fe83642f-cf96-4961-97f5-cfa5c0369567\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.715240 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe83642f-cf96-4961-97f5-cfa5c0369567-run-httpd\") pod \"ceilometer-0\" (UID: \"fe83642f-cf96-4961-97f5-cfa5c0369567\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.718517 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fe83642f-cf96-4961-97f5-cfa5c0369567-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe83642f-cf96-4961-97f5-cfa5c0369567\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.719048 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe83642f-cf96-4961-97f5-cfa5c0369567-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe83642f-cf96-4961-97f5-cfa5c0369567\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.719301 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe83642f-cf96-4961-97f5-cfa5c0369567-scripts\") pod \"ceilometer-0\" (UID: \"fe83642f-cf96-4961-97f5-cfa5c0369567\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.722426 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe83642f-cf96-4961-97f5-cfa5c0369567-config-data\") pod \"ceilometer-0\" (UID: \"fe83642f-cf96-4961-97f5-cfa5c0369567\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.723545 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe83642f-cf96-4961-97f5-cfa5c0369567-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fe83642f-cf96-4961-97f5-cfa5c0369567\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.732556 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmnmt\" (UniqueName: \"kubernetes.io/projected/fe83642f-cf96-4961-97f5-cfa5c0369567-kube-api-access-rmnmt\") pod \"ceilometer-0\" (UID: \"fe83642f-cf96-4961-97f5-cfa5c0369567\") " pod="watcher-kuttl-default/ceilometer-0" 
Mar 09 19:03:15 crc kubenswrapper[4821]: I0309 19:03:15.849233 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:03:16 crc kubenswrapper[4821]: I0309 19:03:16.312581 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:03:16 crc kubenswrapper[4821]: I0309 19:03:16.480184 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fe83642f-cf96-4961-97f5-cfa5c0369567","Type":"ContainerStarted","Data":"b8ef6634f73954af9c21ac855f581409b0f5db5fc923de21cb200991618eafee"} Mar 09 19:03:17 crc kubenswrapper[4821]: I0309 19:03:17.490043 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fe83642f-cf96-4961-97f5-cfa5c0369567","Type":"ContainerStarted","Data":"49669753d7f2688227ec8ab04aa492d761bd7ab6e9217c4fd1dd1f50dd058d84"} Mar 09 19:03:18 crc kubenswrapper[4821]: I0309 19:03:18.499987 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fe83642f-cf96-4961-97f5-cfa5c0369567","Type":"ContainerStarted","Data":"f399f2802cfc6b77fbb356f46a6f0bedcbfcbcb9b140754bc8795acfff9265ba"} Mar 09 19:03:18 crc kubenswrapper[4821]: I0309 19:03:18.500560 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fe83642f-cf96-4961-97f5-cfa5c0369567","Type":"ContainerStarted","Data":"b975e1a379eec2150c081cc718515e6ab55b3d741f818078fd4b53e98cb83133"} Mar 09 19:03:21 crc kubenswrapper[4821]: I0309 19:03:21.530570 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fe83642f-cf96-4961-97f5-cfa5c0369567","Type":"ContainerStarted","Data":"68e886d61366bf2a9c736f5bfb127eeab3c6a74d11cdb572e1d7cf8a8869c802"} Mar 09 19:03:21 crc kubenswrapper[4821]: I0309 19:03:21.532197 4821 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:03:21 crc kubenswrapper[4821]: I0309 19:03:21.560148 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.41889866 podStartE2EDuration="6.560128303s" podCreationTimestamp="2026-03-09 19:03:15 +0000 UTC" firstStartedPulling="2026-03-09 19:03:16.314152086 +0000 UTC m=+2333.475527952" lastFinishedPulling="2026-03-09 19:03:20.455381739 +0000 UTC m=+2337.616757595" observedRunningTime="2026-03-09 19:03:21.556067172 +0000 UTC m=+2338.717443038" watchObservedRunningTime="2026-03-09 19:03:21.560128303 +0000 UTC m=+2338.721504159" Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.010062 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/memcached-0"] Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.010569 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/memcached-0" podUID="63f25f4d-2a2d-48af-9764-27a0826495b0" containerName="memcached" containerID="cri-o://24eec4726ae7f56a5ad0de69f8279f1cad1361b22a61142c612e765a006ccf53" gracePeriod=30 Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.043126 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.043400 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="218746cc-be8f-46c9-8c0d-c2256ad6b705" containerName="watcher-decision-engine" containerID="cri-o://f54ab9c7bf5c668a0722d331e3557917d0d7d2ef0099a9667411a0a5dc912a99" gracePeriod=30 Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.055848 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.056112 
4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f738eca5-2be8-455a-9c85-f2cae97d41e5" containerName="watcher-kuttl-api-log" containerID="cri-o://7b4236b8e98f597693f03d2dd469761df4a418cb397aa653562c2a5d3188d78a" gracePeriod=30 Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.056196 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f738eca5-2be8-455a-9c85-f2cae97d41e5" containerName="watcher-api" containerID="cri-o://61f08aaf697ccbbb9f3e5dc690589adf4221dd5e42d49a83aecdc8defd4804b5" gracePeriod=30 Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.068964 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.069176 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="532b607d-3792-4170-9eb9-626dd085ef32" containerName="watcher-applier" containerID="cri-o://d0fe8d23fd6725d4a6c9454636b1296c147e3f7110f395da18e7df8d840d8c9d" gracePeriod=30 Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.197547 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-wcggh"] Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.198882 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-wcggh" Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.202170 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-memcached-mtls" Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.202332 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"osp-secret" Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.233643 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-wcggh"] Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.394988 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m69wm\" (UniqueName: \"kubernetes.io/projected/fac1295f-5189-4137-8365-42fb46ca2803-kube-api-access-m69wm\") pod \"keystone-bootstrap-wcggh\" (UID: \"fac1295f-5189-4137-8365-42fb46ca2803\") " pod="watcher-kuttl-default/keystone-bootstrap-wcggh" Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.395037 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-cert-memcached-mtls\") pod \"keystone-bootstrap-wcggh\" (UID: \"fac1295f-5189-4137-8365-42fb46ca2803\") " pod="watcher-kuttl-default/keystone-bootstrap-wcggh" Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.395189 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-combined-ca-bundle\") pod \"keystone-bootstrap-wcggh\" (UID: \"fac1295f-5189-4137-8365-42fb46ca2803\") " pod="watcher-kuttl-default/keystone-bootstrap-wcggh" Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.395303 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-config-data\") pod \"keystone-bootstrap-wcggh\" (UID: \"fac1295f-5189-4137-8365-42fb46ca2803\") " pod="watcher-kuttl-default/keystone-bootstrap-wcggh" Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.395342 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-scripts\") pod \"keystone-bootstrap-wcggh\" (UID: \"fac1295f-5189-4137-8365-42fb46ca2803\") " pod="watcher-kuttl-default/keystone-bootstrap-wcggh" Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.395360 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-fernet-keys\") pod \"keystone-bootstrap-wcggh\" (UID: \"fac1295f-5189-4137-8365-42fb46ca2803\") " pod="watcher-kuttl-default/keystone-bootstrap-wcggh" Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.395404 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-credential-keys\") pod \"keystone-bootstrap-wcggh\" (UID: \"fac1295f-5189-4137-8365-42fb46ca2803\") " pod="watcher-kuttl-default/keystone-bootstrap-wcggh" Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.497488 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-fernet-keys\") pod \"keystone-bootstrap-wcggh\" (UID: \"fac1295f-5189-4137-8365-42fb46ca2803\") " pod="watcher-kuttl-default/keystone-bootstrap-wcggh" Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.497562 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-credential-keys\") pod \"keystone-bootstrap-wcggh\" (UID: \"fac1295f-5189-4137-8365-42fb46ca2803\") " pod="watcher-kuttl-default/keystone-bootstrap-wcggh" Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.497593 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m69wm\" (UniqueName: \"kubernetes.io/projected/fac1295f-5189-4137-8365-42fb46ca2803-kube-api-access-m69wm\") pod \"keystone-bootstrap-wcggh\" (UID: \"fac1295f-5189-4137-8365-42fb46ca2803\") " pod="watcher-kuttl-default/keystone-bootstrap-wcggh" Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.497609 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-cert-memcached-mtls\") pod \"keystone-bootstrap-wcggh\" (UID: \"fac1295f-5189-4137-8365-42fb46ca2803\") " pod="watcher-kuttl-default/keystone-bootstrap-wcggh" Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.497658 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-combined-ca-bundle\") pod \"keystone-bootstrap-wcggh\" (UID: \"fac1295f-5189-4137-8365-42fb46ca2803\") " pod="watcher-kuttl-default/keystone-bootstrap-wcggh" Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.497707 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-config-data\") pod \"keystone-bootstrap-wcggh\" (UID: \"fac1295f-5189-4137-8365-42fb46ca2803\") " pod="watcher-kuttl-default/keystone-bootstrap-wcggh" Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.497735 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-scripts\") pod \"keystone-bootstrap-wcggh\" (UID: \"fac1295f-5189-4137-8365-42fb46ca2803\") " pod="watcher-kuttl-default/keystone-bootstrap-wcggh" Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.503187 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-scripts\") pod \"keystone-bootstrap-wcggh\" (UID: \"fac1295f-5189-4137-8365-42fb46ca2803\") " pod="watcher-kuttl-default/keystone-bootstrap-wcggh" Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.503546 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-credential-keys\") pod \"keystone-bootstrap-wcggh\" (UID: \"fac1295f-5189-4137-8365-42fb46ca2803\") " pod="watcher-kuttl-default/keystone-bootstrap-wcggh" Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.505148 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-fernet-keys\") pod \"keystone-bootstrap-wcggh\" (UID: \"fac1295f-5189-4137-8365-42fb46ca2803\") " pod="watcher-kuttl-default/keystone-bootstrap-wcggh" Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.507693 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-combined-ca-bundle\") pod \"keystone-bootstrap-wcggh\" (UID: \"fac1295f-5189-4137-8365-42fb46ca2803\") " pod="watcher-kuttl-default/keystone-bootstrap-wcggh" Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.507882 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-config-data\") pod 
\"keystone-bootstrap-wcggh\" (UID: \"fac1295f-5189-4137-8365-42fb46ca2803\") " pod="watcher-kuttl-default/keystone-bootstrap-wcggh" Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.518922 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-cert-memcached-mtls\") pod \"keystone-bootstrap-wcggh\" (UID: \"fac1295f-5189-4137-8365-42fb46ca2803\") " pod="watcher-kuttl-default/keystone-bootstrap-wcggh" Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.519258 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m69wm\" (UniqueName: \"kubernetes.io/projected/fac1295f-5189-4137-8365-42fb46ca2803-kube-api-access-m69wm\") pod \"keystone-bootstrap-wcggh\" (UID: \"fac1295f-5189-4137-8365-42fb46ca2803\") " pod="watcher-kuttl-default/keystone-bootstrap-wcggh" Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.522135 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-wcggh" Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.555906 4821 generic.go:334] "Generic (PLEG): container finished" podID="f738eca5-2be8-455a-9c85-f2cae97d41e5" containerID="7b4236b8e98f597693f03d2dd469761df4a418cb397aa653562c2a5d3188d78a" exitCode=143 Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.555948 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f738eca5-2be8-455a-9c85-f2cae97d41e5","Type":"ContainerDied","Data":"7b4236b8e98f597693f03d2dd469761df4a418cb397aa653562c2a5d3188d78a"} Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.960148 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f738eca5-2be8-455a-9c85-f2cae97d41e5" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.196:9322/\": dial tcp 10.217.0.196:9322: connect: connection refused" Mar 09 19:03:24 crc kubenswrapper[4821]: I0309 19:03:24.960864 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f738eca5-2be8-455a-9c85-f2cae97d41e5" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"https://10.217.0.196:9322/\": dial tcp 10.217.0.196:9322: connect: connection refused" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.010902 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-wcggh"] Mar 09 19:03:25 crc kubenswrapper[4821]: W0309 19:03:25.019857 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfac1295f_5189_4137_8365_42fb46ca2803.slice/crio-5dc6ebf0b29889c05e85d86e7c7e832a1a288c74ce5183c5f79a9b066238be1d WatchSource:0}: Error finding container 5dc6ebf0b29889c05e85d86e7c7e832a1a288c74ce5183c5f79a9b066238be1d: Status 404 returned 
error can't find the container with id 5dc6ebf0b29889c05e85d86e7c7e832a1a288c74ce5183c5f79a9b066238be1d Mar 09 19:03:25 crc kubenswrapper[4821]: E0309 19:03:25.067823 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d0fe8d23fd6725d4a6c9454636b1296c147e3f7110f395da18e7df8d840d8c9d" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 09 19:03:25 crc kubenswrapper[4821]: E0309 19:03:25.069237 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d0fe8d23fd6725d4a6c9454636b1296c147e3f7110f395da18e7df8d840d8c9d" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 09 19:03:25 crc kubenswrapper[4821]: E0309 19:03:25.071703 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d0fe8d23fd6725d4a6c9454636b1296c147e3f7110f395da18e7df8d840d8c9d" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 09 19:03:25 crc kubenswrapper[4821]: E0309 19:03:25.071775 4821 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="532b607d-3792-4170-9eb9-626dd085ef32" containerName="watcher-applier" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.371619 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.524397 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f738eca5-2be8-455a-9c85-f2cae97d41e5-combined-ca-bundle\") pod \"f738eca5-2be8-455a-9c85-f2cae97d41e5\" (UID: \"f738eca5-2be8-455a-9c85-f2cae97d41e5\") " Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.524444 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f738eca5-2be8-455a-9c85-f2cae97d41e5-custom-prometheus-ca\") pod \"f738eca5-2be8-455a-9c85-f2cae97d41e5\" (UID: \"f738eca5-2be8-455a-9c85-f2cae97d41e5\") " Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.524507 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f738eca5-2be8-455a-9c85-f2cae97d41e5-public-tls-certs\") pod \"f738eca5-2be8-455a-9c85-f2cae97d41e5\" (UID: \"f738eca5-2be8-455a-9c85-f2cae97d41e5\") " Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.524540 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f738eca5-2be8-455a-9c85-f2cae97d41e5-config-data\") pod \"f738eca5-2be8-455a-9c85-f2cae97d41e5\" (UID: \"f738eca5-2be8-455a-9c85-f2cae97d41e5\") " Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.524554 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f738eca5-2be8-455a-9c85-f2cae97d41e5-internal-tls-certs\") pod \"f738eca5-2be8-455a-9c85-f2cae97d41e5\" (UID: \"f738eca5-2be8-455a-9c85-f2cae97d41e5\") " Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.524651 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/f738eca5-2be8-455a-9c85-f2cae97d41e5-logs\") pod \"f738eca5-2be8-455a-9c85-f2cae97d41e5\" (UID: \"f738eca5-2be8-455a-9c85-f2cae97d41e5\") " Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.524707 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdvqb\" (UniqueName: \"kubernetes.io/projected/f738eca5-2be8-455a-9c85-f2cae97d41e5-kube-api-access-bdvqb\") pod \"f738eca5-2be8-455a-9c85-f2cae97d41e5\" (UID: \"f738eca5-2be8-455a-9c85-f2cae97d41e5\") " Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.530749 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f738eca5-2be8-455a-9c85-f2cae97d41e5-logs" (OuterVolumeSpecName: "logs") pod "f738eca5-2be8-455a-9c85-f2cae97d41e5" (UID: "f738eca5-2be8-455a-9c85-f2cae97d41e5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.535175 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f738eca5-2be8-455a-9c85-f2cae97d41e5-kube-api-access-bdvqb" (OuterVolumeSpecName: "kube-api-access-bdvqb") pod "f738eca5-2be8-455a-9c85-f2cae97d41e5" (UID: "f738eca5-2be8-455a-9c85-f2cae97d41e5"). InnerVolumeSpecName "kube-api-access-bdvqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.575593 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f738eca5-2be8-455a-9c85-f2cae97d41e5-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "f738eca5-2be8-455a-9c85-f2cae97d41e5" (UID: "f738eca5-2be8-455a-9c85-f2cae97d41e5"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.576513 4821 generic.go:334] "Generic (PLEG): container finished" podID="63f25f4d-2a2d-48af-9764-27a0826495b0" containerID="24eec4726ae7f56a5ad0de69f8279f1cad1361b22a61142c612e765a006ccf53" exitCode=0 Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.576568 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"63f25f4d-2a2d-48af-9764-27a0826495b0","Type":"ContainerDied","Data":"24eec4726ae7f56a5ad0de69f8279f1cad1361b22a61142c612e765a006ccf53"} Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.576589 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"63f25f4d-2a2d-48af-9764-27a0826495b0","Type":"ContainerDied","Data":"f130b3f49389381e40dba4562632a8a2a998127c54ca91d55a6a3a04dc156108"} Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.576599 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f130b3f49389381e40dba4562632a8a2a998127c54ca91d55a6a3a04dc156108" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.576687 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/memcached-0" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.589179 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-wcggh" event={"ID":"fac1295f-5189-4137-8365-42fb46ca2803","Type":"ContainerStarted","Data":"68a4dbfad2b24d0a3afbcf0c2604cf90bb491673543e78df185b34e511252db6"} Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.589219 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-wcggh" event={"ID":"fac1295f-5189-4137-8365-42fb46ca2803","Type":"ContainerStarted","Data":"5dc6ebf0b29889c05e85d86e7c7e832a1a288c74ce5183c5f79a9b066238be1d"} Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.603257 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f738eca5-2be8-455a-9c85-f2cae97d41e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f738eca5-2be8-455a-9c85-f2cae97d41e5" (UID: "f738eca5-2be8-455a-9c85-f2cae97d41e5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.611948 4821 generic.go:334] "Generic (PLEG): container finished" podID="f738eca5-2be8-455a-9c85-f2cae97d41e5" containerID="61f08aaf697ccbbb9f3e5dc690589adf4221dd5e42d49a83aecdc8defd4804b5" exitCode=0 Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.612096 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f738eca5-2be8-455a-9c85-f2cae97d41e5","Type":"ContainerDied","Data":"61f08aaf697ccbbb9f3e5dc690589adf4221dd5e42d49a83aecdc8defd4804b5"} Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.612210 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f738eca5-2be8-455a-9c85-f2cae97d41e5","Type":"ContainerDied","Data":"05452d7b27bb12e30afd8953b50197e960812ad38aef35ec75642b8721ac26cd"} Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.612229 4821 scope.go:117] "RemoveContainer" containerID="61f08aaf697ccbbb9f3e5dc690589adf4221dd5e42d49a83aecdc8defd4804b5" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.612455 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.619587 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f738eca5-2be8-455a-9c85-f2cae97d41e5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f738eca5-2be8-455a-9c85-f2cae97d41e5" (UID: "f738eca5-2be8-455a-9c85-f2cae97d41e5"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.626230 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdvqb\" (UniqueName: \"kubernetes.io/projected/f738eca5-2be8-455a-9c85-f2cae97d41e5-kube-api-access-bdvqb\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.626434 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f738eca5-2be8-455a-9c85-f2cae97d41e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.626501 4821 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f738eca5-2be8-455a-9c85-f2cae97d41e5-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.626554 4821 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f738eca5-2be8-455a-9c85-f2cae97d41e5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.626605 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f738eca5-2be8-455a-9c85-f2cae97d41e5-logs\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.628704 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f738eca5-2be8-455a-9c85-f2cae97d41e5-config-data" (OuterVolumeSpecName: "config-data") pod "f738eca5-2be8-455a-9c85-f2cae97d41e5" (UID: "f738eca5-2be8-455a-9c85-f2cae97d41e5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.631295 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f738eca5-2be8-455a-9c85-f2cae97d41e5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f738eca5-2be8-455a-9c85-f2cae97d41e5" (UID: "f738eca5-2be8-455a-9c85-f2cae97d41e5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.641984 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-bootstrap-wcggh" podStartSLOduration=1.6419657669999999 podStartE2EDuration="1.641965767s" podCreationTimestamp="2026-03-09 19:03:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:03:25.614512962 +0000 UTC m=+2342.775888818" watchObservedRunningTime="2026-03-09 19:03:25.641965767 +0000 UTC m=+2342.803341623" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.647875 4821 scope.go:117] "RemoveContainer" containerID="7b4236b8e98f597693f03d2dd469761df4a418cb397aa653562c2a5d3188d78a" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.663719 4821 scope.go:117] "RemoveContainer" containerID="61f08aaf697ccbbb9f3e5dc690589adf4221dd5e42d49a83aecdc8defd4804b5" Mar 09 19:03:25 crc kubenswrapper[4821]: E0309 19:03:25.664392 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61f08aaf697ccbbb9f3e5dc690589adf4221dd5e42d49a83aecdc8defd4804b5\": container with ID starting with 61f08aaf697ccbbb9f3e5dc690589adf4221dd5e42d49a83aecdc8defd4804b5 not found: ID does not exist" containerID="61f08aaf697ccbbb9f3e5dc690589adf4221dd5e42d49a83aecdc8defd4804b5" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.664515 4821 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61f08aaf697ccbbb9f3e5dc690589adf4221dd5e42d49a83aecdc8defd4804b5"} err="failed to get container status \"61f08aaf697ccbbb9f3e5dc690589adf4221dd5e42d49a83aecdc8defd4804b5\": rpc error: code = NotFound desc = could not find container \"61f08aaf697ccbbb9f3e5dc690589adf4221dd5e42d49a83aecdc8defd4804b5\": container with ID starting with 61f08aaf697ccbbb9f3e5dc690589adf4221dd5e42d49a83aecdc8defd4804b5 not found: ID does not exist" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.664610 4821 scope.go:117] "RemoveContainer" containerID="7b4236b8e98f597693f03d2dd469761df4a418cb397aa653562c2a5d3188d78a" Mar 09 19:03:25 crc kubenswrapper[4821]: E0309 19:03:25.665098 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b4236b8e98f597693f03d2dd469761df4a418cb397aa653562c2a5d3188d78a\": container with ID starting with 7b4236b8e98f597693f03d2dd469761df4a418cb397aa653562c2a5d3188d78a not found: ID does not exist" containerID="7b4236b8e98f597693f03d2dd469761df4a418cb397aa653562c2a5d3188d78a" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.665125 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b4236b8e98f597693f03d2dd469761df4a418cb397aa653562c2a5d3188d78a"} err="failed to get container status \"7b4236b8e98f597693f03d2dd469761df4a418cb397aa653562c2a5d3188d78a\": rpc error: code = NotFound desc = could not find container \"7b4236b8e98f597693f03d2dd469761df4a418cb397aa653562c2a5d3188d78a\": container with ID starting with 7b4236b8e98f597693f03d2dd469761df4a418cb397aa653562c2a5d3188d78a not found: ID does not exist" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.727429 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqqbt\" (UniqueName: 
\"kubernetes.io/projected/63f25f4d-2a2d-48af-9764-27a0826495b0-kube-api-access-rqqbt\") pod \"63f25f4d-2a2d-48af-9764-27a0826495b0\" (UID: \"63f25f4d-2a2d-48af-9764-27a0826495b0\") " Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.727476 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63f25f4d-2a2d-48af-9764-27a0826495b0-config-data\") pod \"63f25f4d-2a2d-48af-9764-27a0826495b0\" (UID: \"63f25f4d-2a2d-48af-9764-27a0826495b0\") " Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.727501 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/63f25f4d-2a2d-48af-9764-27a0826495b0-kolla-config\") pod \"63f25f4d-2a2d-48af-9764-27a0826495b0\" (UID: \"63f25f4d-2a2d-48af-9764-27a0826495b0\") " Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.727549 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f25f4d-2a2d-48af-9764-27a0826495b0-combined-ca-bundle\") pod \"63f25f4d-2a2d-48af-9764-27a0826495b0\" (UID: \"63f25f4d-2a2d-48af-9764-27a0826495b0\") " Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.727623 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/63f25f4d-2a2d-48af-9764-27a0826495b0-memcached-tls-certs\") pod \"63f25f4d-2a2d-48af-9764-27a0826495b0\" (UID: \"63f25f4d-2a2d-48af-9764-27a0826495b0\") " Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.728032 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f738eca5-2be8-455a-9c85-f2cae97d41e5-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.728050 4821 reconciler_common.go:293] "Volume detached for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f738eca5-2be8-455a-9c85-f2cae97d41e5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.728658 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63f25f4d-2a2d-48af-9764-27a0826495b0-config-data" (OuterVolumeSpecName: "config-data") pod "63f25f4d-2a2d-48af-9764-27a0826495b0" (UID: "63f25f4d-2a2d-48af-9764-27a0826495b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.729512 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63f25f4d-2a2d-48af-9764-27a0826495b0-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "63f25f4d-2a2d-48af-9764-27a0826495b0" (UID: "63f25f4d-2a2d-48af-9764-27a0826495b0"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.731439 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63f25f4d-2a2d-48af-9764-27a0826495b0-kube-api-access-rqqbt" (OuterVolumeSpecName: "kube-api-access-rqqbt") pod "63f25f4d-2a2d-48af-9764-27a0826495b0" (UID: "63f25f4d-2a2d-48af-9764-27a0826495b0"). InnerVolumeSpecName "kube-api-access-rqqbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.756881 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63f25f4d-2a2d-48af-9764-27a0826495b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63f25f4d-2a2d-48af-9764-27a0826495b0" (UID: "63f25f4d-2a2d-48af-9764-27a0826495b0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.778056 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63f25f4d-2a2d-48af-9764-27a0826495b0-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "63f25f4d-2a2d-48af-9764-27a0826495b0" (UID: "63f25f4d-2a2d-48af-9764-27a0826495b0"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.829914 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f25f4d-2a2d-48af-9764-27a0826495b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.829958 4821 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/63f25f4d-2a2d-48af-9764-27a0826495b0-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.829975 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqqbt\" (UniqueName: \"kubernetes.io/projected/63f25f4d-2a2d-48af-9764-27a0826495b0-kube-api-access-rqqbt\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.829991 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63f25f4d-2a2d-48af-9764-27a0826495b0-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.830004 4821 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/63f25f4d-2a2d-48af-9764-27a0826495b0-kolla-config\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.947267 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.962367 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.970839 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 19:03:25 crc kubenswrapper[4821]: E0309 19:03:25.971167 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63f25f4d-2a2d-48af-9764-27a0826495b0" containerName="memcached"
Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.971182 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="63f25f4d-2a2d-48af-9764-27a0826495b0" containerName="memcached"
Mar 09 19:03:25 crc kubenswrapper[4821]: E0309 19:03:25.971199 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f738eca5-2be8-455a-9c85-f2cae97d41e5" containerName="watcher-api"
Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.971206 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="f738eca5-2be8-455a-9c85-f2cae97d41e5" containerName="watcher-api"
Mar 09 19:03:25 crc kubenswrapper[4821]: E0309 19:03:25.971224 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f738eca5-2be8-455a-9c85-f2cae97d41e5" containerName="watcher-kuttl-api-log"
Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.971230 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="f738eca5-2be8-455a-9c85-f2cae97d41e5" containerName="watcher-kuttl-api-log"
Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.971387 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="f738eca5-2be8-455a-9c85-f2cae97d41e5" containerName="watcher-api"
Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.971407 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="63f25f4d-2a2d-48af-9764-27a0826495b0" containerName="memcached"
Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.971416 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="f738eca5-2be8-455a-9c85-f2cae97d41e5" containerName="watcher-kuttl-api-log"
Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.972285 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.974776 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data"
Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.975102 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc"
Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.977114 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc"
Mar 09 19:03:25 crc kubenswrapper[4821]: I0309 19:03:25.988975 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.134219 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"bbbddb2a-58f6-4096-b141-94c344dbc50a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.135106 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"bbbddb2a-58f6-4096-b141-94c344dbc50a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.135256 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"bbbddb2a-58f6-4096-b141-94c344dbc50a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.135406 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"bbbddb2a-58f6-4096-b141-94c344dbc50a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.135538 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"bbbddb2a-58f6-4096-b141-94c344dbc50a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.135657 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6nkg\" (UniqueName: \"kubernetes.io/projected/bbbddb2a-58f6-4096-b141-94c344dbc50a-kube-api-access-t6nkg\") pod \"watcher-kuttl-api-0\" (UID: \"bbbddb2a-58f6-4096-b141-94c344dbc50a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.135760 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbbddb2a-58f6-4096-b141-94c344dbc50a-logs\") pod \"watcher-kuttl-api-0\" (UID: \"bbbddb2a-58f6-4096-b141-94c344dbc50a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.135878 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"bbbddb2a-58f6-4096-b141-94c344dbc50a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.237692 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"bbbddb2a-58f6-4096-b141-94c344dbc50a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.237791 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"bbbddb2a-58f6-4096-b141-94c344dbc50a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.237834 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"bbbddb2a-58f6-4096-b141-94c344dbc50a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.237883 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"bbbddb2a-58f6-4096-b141-94c344dbc50a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.237919 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"bbbddb2a-58f6-4096-b141-94c344dbc50a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.237943 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"bbbddb2a-58f6-4096-b141-94c344dbc50a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.237981 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6nkg\" (UniqueName: \"kubernetes.io/projected/bbbddb2a-58f6-4096-b141-94c344dbc50a-kube-api-access-t6nkg\") pod \"watcher-kuttl-api-0\" (UID: \"bbbddb2a-58f6-4096-b141-94c344dbc50a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.238023 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbbddb2a-58f6-4096-b141-94c344dbc50a-logs\") pod \"watcher-kuttl-api-0\" (UID: \"bbbddb2a-58f6-4096-b141-94c344dbc50a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.239659 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbbddb2a-58f6-4096-b141-94c344dbc50a-logs\") pod \"watcher-kuttl-api-0\" (UID: \"bbbddb2a-58f6-4096-b141-94c344dbc50a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.243210 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"bbbddb2a-58f6-4096-b141-94c344dbc50a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.243289 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"bbbddb2a-58f6-4096-b141-94c344dbc50a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.243826 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"bbbddb2a-58f6-4096-b141-94c344dbc50a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.247056 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"bbbddb2a-58f6-4096-b141-94c344dbc50a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.255305 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"bbbddb2a-58f6-4096-b141-94c344dbc50a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.257204 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6nkg\" (UniqueName: \"kubernetes.io/projected/bbbddb2a-58f6-4096-b141-94c344dbc50a-kube-api-access-t6nkg\") pod \"watcher-kuttl-api-0\" (UID: \"bbbddb2a-58f6-4096-b141-94c344dbc50a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.265166 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"bbbddb2a-58f6-4096-b141-94c344dbc50a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.295816 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.621568 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.671539 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/memcached-0"]
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.688396 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/memcached-0"]
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.701088 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/memcached-0"]
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.702906 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.704831 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"memcached-memcached-dockercfg-pcmms"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.705117 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-memcached-svc"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.705613 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"memcached-config-data"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.713760 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"]
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.768836 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.846731 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0277ce9b-9597-40cc-9339-51cf5dc9d98d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0277ce9b-9597-40cc-9339-51cf5dc9d98d\") " pod="watcher-kuttl-default/memcached-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.846809 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0277ce9b-9597-40cc-9339-51cf5dc9d98d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0277ce9b-9597-40cc-9339-51cf5dc9d98d\") " pod="watcher-kuttl-default/memcached-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.846843 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0277ce9b-9597-40cc-9339-51cf5dc9d98d-config-data\") pod \"memcached-0\" (UID: \"0277ce9b-9597-40cc-9339-51cf5dc9d98d\") " pod="watcher-kuttl-default/memcached-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.846887 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0277ce9b-9597-40cc-9339-51cf5dc9d98d-kolla-config\") pod \"memcached-0\" (UID: \"0277ce9b-9597-40cc-9339-51cf5dc9d98d\") " pod="watcher-kuttl-default/memcached-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.846991 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56zdn\" (UniqueName: \"kubernetes.io/projected/0277ce9b-9597-40cc-9339-51cf5dc9d98d-kube-api-access-56zdn\") pod \"memcached-0\" (UID: \"0277ce9b-9597-40cc-9339-51cf5dc9d98d\") " pod="watcher-kuttl-default/memcached-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.948287 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56zdn\" (UniqueName: \"kubernetes.io/projected/0277ce9b-9597-40cc-9339-51cf5dc9d98d-kube-api-access-56zdn\") pod \"memcached-0\" (UID: \"0277ce9b-9597-40cc-9339-51cf5dc9d98d\") " pod="watcher-kuttl-default/memcached-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.948367 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0277ce9b-9597-40cc-9339-51cf5dc9d98d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0277ce9b-9597-40cc-9339-51cf5dc9d98d\") " pod="watcher-kuttl-default/memcached-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.948420 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0277ce9b-9597-40cc-9339-51cf5dc9d98d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0277ce9b-9597-40cc-9339-51cf5dc9d98d\") " pod="watcher-kuttl-default/memcached-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.948450 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0277ce9b-9597-40cc-9339-51cf5dc9d98d-config-data\") pod \"memcached-0\" (UID: \"0277ce9b-9597-40cc-9339-51cf5dc9d98d\") " pod="watcher-kuttl-default/memcached-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.948492 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0277ce9b-9597-40cc-9339-51cf5dc9d98d-kolla-config\") pod \"memcached-0\" (UID: \"0277ce9b-9597-40cc-9339-51cf5dc9d98d\") " pod="watcher-kuttl-default/memcached-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.949725 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0277ce9b-9597-40cc-9339-51cf5dc9d98d-kolla-config\") pod \"memcached-0\" (UID: \"0277ce9b-9597-40cc-9339-51cf5dc9d98d\") " pod="watcher-kuttl-default/memcached-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.949926 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0277ce9b-9597-40cc-9339-51cf5dc9d98d-config-data\") pod \"memcached-0\" (UID: \"0277ce9b-9597-40cc-9339-51cf5dc9d98d\") " pod="watcher-kuttl-default/memcached-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.953184 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0277ce9b-9597-40cc-9339-51cf5dc9d98d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0277ce9b-9597-40cc-9339-51cf5dc9d98d\") " pod="watcher-kuttl-default/memcached-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.953389 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0277ce9b-9597-40cc-9339-51cf5dc9d98d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0277ce9b-9597-40cc-9339-51cf5dc9d98d\") " pod="watcher-kuttl-default/memcached-0"
Mar 09 19:03:26 crc kubenswrapper[4821]: I0309 19:03:26.965191 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56zdn\" (UniqueName: \"kubernetes.io/projected/0277ce9b-9597-40cc-9339-51cf5dc9d98d-kube-api-access-56zdn\") pod \"memcached-0\" (UID: \"0277ce9b-9597-40cc-9339-51cf5dc9d98d\") " pod="watcher-kuttl-default/memcached-0"
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.023519 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0"
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.484875 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.561737 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63f25f4d-2a2d-48af-9764-27a0826495b0" path="/var/lib/kubelet/pods/63f25f4d-2a2d-48af-9764-27a0826495b0/volumes"
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.562283 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f738eca5-2be8-455a-9c85-f2cae97d41e5" path="/var/lib/kubelet/pods/f738eca5-2be8-455a-9c85-f2cae97d41e5/volumes"
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.631716 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"bbbddb2a-58f6-4096-b141-94c344dbc50a","Type":"ContainerStarted","Data":"adbb43d5dd40436e7431bbd0cc68b2b28f249dadfa98f6903fe1bbea7672eb66"}
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.631786 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.631803 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"bbbddb2a-58f6-4096-b141-94c344dbc50a","Type":"ContainerStarted","Data":"10de1f65322e342235c84c094444d0dc199058e5d189fac737bf8227963e3e08"}
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.631816 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"bbbddb2a-58f6-4096-b141-94c344dbc50a","Type":"ContainerStarted","Data":"d68bed5d98553295c0ead780a690e2832fc9ab84372f36160e01d1a29608b258"}
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.633054 4821 generic.go:334] "Generic (PLEG): container finished" podID="532b607d-3792-4170-9eb9-626dd085ef32" containerID="d0fe8d23fd6725d4a6c9454636b1296c147e3f7110f395da18e7df8d840d8c9d" exitCode=0
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.633102 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"532b607d-3792-4170-9eb9-626dd085ef32","Type":"ContainerDied","Data":"d0fe8d23fd6725d4a6c9454636b1296c147e3f7110f395da18e7df8d840d8c9d"}
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.633125 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"532b607d-3792-4170-9eb9-626dd085ef32","Type":"ContainerDied","Data":"9829e1f92ef69a6394d63a88c69e304cdc92c99e78aad1deb005d6c5b1a7b937"}
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.633143 4821 scope.go:117] "RemoveContainer" containerID="d0fe8d23fd6725d4a6c9454636b1296c147e3f7110f395da18e7df8d840d8c9d"
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.633221 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.657139 4821 scope.go:117] "RemoveContainer" containerID="d0fe8d23fd6725d4a6c9454636b1296c147e3f7110f395da18e7df8d840d8c9d"
Mar 09 19:03:27 crc kubenswrapper[4821]: E0309 19:03:27.657585 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0fe8d23fd6725d4a6c9454636b1296c147e3f7110f395da18e7df8d840d8c9d\": container with ID starting with d0fe8d23fd6725d4a6c9454636b1296c147e3f7110f395da18e7df8d840d8c9d not found: ID does not exist" containerID="d0fe8d23fd6725d4a6c9454636b1296c147e3f7110f395da18e7df8d840d8c9d"
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.657625 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0fe8d23fd6725d4a6c9454636b1296c147e3f7110f395da18e7df8d840d8c9d"} err="failed to get container status \"d0fe8d23fd6725d4a6c9454636b1296c147e3f7110f395da18e7df8d840d8c9d\": rpc error: code = NotFound desc = could not find container \"d0fe8d23fd6725d4a6c9454636b1296c147e3f7110f395da18e7df8d840d8c9d\": container with ID starting with d0fe8d23fd6725d4a6c9454636b1296c147e3f7110f395da18e7df8d840d8c9d not found: ID does not exist"
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.659331 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.6593087840000003 podStartE2EDuration="2.659308784s" podCreationTimestamp="2026-03-09 19:03:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:03:27.651610006 +0000 UTC m=+2344.812985852" watchObservedRunningTime="2026-03-09 19:03:27.659308784 +0000 UTC m=+2344.820684640"
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.660929 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/532b607d-3792-4170-9eb9-626dd085ef32-combined-ca-bundle\") pod \"532b607d-3792-4170-9eb9-626dd085ef32\" (UID: \"532b607d-3792-4170-9eb9-626dd085ef32\") "
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.660990 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl8v5\" (UniqueName: \"kubernetes.io/projected/532b607d-3792-4170-9eb9-626dd085ef32-kube-api-access-cl8v5\") pod \"532b607d-3792-4170-9eb9-626dd085ef32\" (UID: \"532b607d-3792-4170-9eb9-626dd085ef32\") "
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.661011 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/532b607d-3792-4170-9eb9-626dd085ef32-logs\") pod \"532b607d-3792-4170-9eb9-626dd085ef32\" (UID: \"532b607d-3792-4170-9eb9-626dd085ef32\") "
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.661087 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/532b607d-3792-4170-9eb9-626dd085ef32-config-data\") pod \"532b607d-3792-4170-9eb9-626dd085ef32\" (UID: \"532b607d-3792-4170-9eb9-626dd085ef32\") "
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.662632 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/532b607d-3792-4170-9eb9-626dd085ef32-logs" (OuterVolumeSpecName: "logs") pod "532b607d-3792-4170-9eb9-626dd085ef32" (UID: "532b607d-3792-4170-9eb9-626dd085ef32"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.668424 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/532b607d-3792-4170-9eb9-626dd085ef32-kube-api-access-cl8v5" (OuterVolumeSpecName: "kube-api-access-cl8v5") pod "532b607d-3792-4170-9eb9-626dd085ef32" (UID: "532b607d-3792-4170-9eb9-626dd085ef32"). InnerVolumeSpecName "kube-api-access-cl8v5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.702547 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/532b607d-3792-4170-9eb9-626dd085ef32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "532b607d-3792-4170-9eb9-626dd085ef32" (UID: "532b607d-3792-4170-9eb9-626dd085ef32"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.713525 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"]
Mar 09 19:03:27 crc kubenswrapper[4821]: W0309 19:03:27.716676 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0277ce9b_9597_40cc_9339_51cf5dc9d98d.slice/crio-11ed0c4907af19ccc57152ef68b83d307fcd2463465588ca7b98988865cde579 WatchSource:0}: Error finding container 11ed0c4907af19ccc57152ef68b83d307fcd2463465588ca7b98988865cde579: Status 404 returned error can't find the container with id 11ed0c4907af19ccc57152ef68b83d307fcd2463465588ca7b98988865cde579
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.727088 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/532b607d-3792-4170-9eb9-626dd085ef32-config-data" (OuterVolumeSpecName: "config-data") pod "532b607d-3792-4170-9eb9-626dd085ef32" (UID: "532b607d-3792-4170-9eb9-626dd085ef32"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.763717 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/532b607d-3792-4170-9eb9-626dd085ef32-config-data\") on node \"crc\" DevicePath \"\""
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.763743 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/532b607d-3792-4170-9eb9-626dd085ef32-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.763774 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl8v5\" (UniqueName: \"kubernetes.io/projected/532b607d-3792-4170-9eb9-626dd085ef32-kube-api-access-cl8v5\") on node \"crc\" DevicePath \"\""
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.763798 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/532b607d-3792-4170-9eb9-626dd085ef32-logs\") on node \"crc\" DevicePath \"\""
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.966911 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.978463 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.989681 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Mar 09 19:03:27 crc kubenswrapper[4821]: E0309 19:03:27.990051 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="532b607d-3792-4170-9eb9-626dd085ef32" containerName="watcher-applier"
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.990071 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="532b607d-3792-4170-9eb9-626dd085ef32" containerName="watcher-applier"
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.990254 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="532b607d-3792-4170-9eb9-626dd085ef32" containerName="watcher-applier"
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.991027 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:03:27 crc kubenswrapper[4821]: I0309 19:03:27.992851 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data"
Mar 09 19:03:28 crc kubenswrapper[4821]: I0309 19:03:28.003270 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Mar 09 19:03:28 crc kubenswrapper[4821]: I0309 19:03:28.068602 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c56b60-91b1-4c38-add2-fe40d7fa8d90-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"95c56b60-91b1-4c38-add2-fe40d7fa8d90\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:03:28 crc kubenswrapper[4821]: I0309 19:03:28.068729 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-765hz\" (UniqueName: \"kubernetes.io/projected/95c56b60-91b1-4c38-add2-fe40d7fa8d90-kube-api-access-765hz\") pod \"watcher-kuttl-applier-0\" (UID: \"95c56b60-91b1-4c38-add2-fe40d7fa8d90\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:03:28 crc kubenswrapper[4821]: I0309 19:03:28.068759 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c56b60-91b1-4c38-add2-fe40d7fa8d90-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"95c56b60-91b1-4c38-add2-fe40d7fa8d90\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:03:28 crc kubenswrapper[4821]: I0309 19:03:28.068805 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95c56b60-91b1-4c38-add2-fe40d7fa8d90-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"95c56b60-91b1-4c38-add2-fe40d7fa8d90\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:03:28 crc kubenswrapper[4821]: I0309 19:03:28.068827 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/95c56b60-91b1-4c38-add2-fe40d7fa8d90-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"95c56b60-91b1-4c38-add2-fe40d7fa8d90\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:03:28 crc kubenswrapper[4821]: I0309 19:03:28.169849 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95c56b60-91b1-4c38-add2-fe40d7fa8d90-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"95c56b60-91b1-4c38-add2-fe40d7fa8d90\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:03:28 crc kubenswrapper[4821]: I0309 19:03:28.169892 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/95c56b60-91b1-4c38-add2-fe40d7fa8d90-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"95c56b60-91b1-4c38-add2-fe40d7fa8d90\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:03:28 crc kubenswrapper[4821]: I0309 19:03:28.169965 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c56b60-91b1-4c38-add2-fe40d7fa8d90-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"95c56b60-91b1-4c38-add2-fe40d7fa8d90\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:03:28 crc kubenswrapper[4821]: I0309 19:03:28.170006 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-765hz\" (UniqueName: \"kubernetes.io/projected/95c56b60-91b1-4c38-add2-fe40d7fa8d90-kube-api-access-765hz\") pod \"watcher-kuttl-applier-0\" (UID: \"95c56b60-91b1-4c38-add2-fe40d7fa8d90\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:03:28 crc kubenswrapper[4821]: I0309 19:03:28.170028 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c56b60-91b1-4c38-add2-fe40d7fa8d90-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"95c56b60-91b1-4c38-add2-fe40d7fa8d90\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:03:28 crc kubenswrapper[4821]: I0309 19:03:28.170379 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95c56b60-91b1-4c38-add2-fe40d7fa8d90-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"95c56b60-91b1-4c38-add2-fe40d7fa8d90\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:03:28 crc kubenswrapper[4821]: I0309 19:03:28.173661 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/95c56b60-91b1-4c38-add2-fe40d7fa8d90-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"95c56b60-91b1-4c38-add2-fe40d7fa8d90\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:03:28 crc kubenswrapper[4821]: I0309 19:03:28.174040 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c56b60-91b1-4c38-add2-fe40d7fa8d90-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"95c56b60-91b1-4c38-add2-fe40d7fa8d90\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:03:28 crc kubenswrapper[4821]: I0309 19:03:28.177916 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c56b60-91b1-4c38-add2-fe40d7fa8d90-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"95c56b60-91b1-4c38-add2-fe40d7fa8d90\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:03:28 crc kubenswrapper[4821]: I0309 19:03:28.188815 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-765hz\" (UniqueName: \"kubernetes.io/projected/95c56b60-91b1-4c38-add2-fe40d7fa8d90-kube-api-access-765hz\") pod \"watcher-kuttl-applier-0\" (UID: \"95c56b60-91b1-4c38-add2-fe40d7fa8d90\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:03:28 crc kubenswrapper[4821]: I0309 19:03:28.308983 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:03:28 crc kubenswrapper[4821]: I0309 19:03:28.652878 4821 generic.go:334] "Generic (PLEG): container finished" podID="fac1295f-5189-4137-8365-42fb46ca2803" containerID="68a4dbfad2b24d0a3afbcf0c2604cf90bb491673543e78df185b34e511252db6" exitCode=0
Mar 09 19:03:28 crc kubenswrapper[4821]: I0309 19:03:28.653215 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-wcggh" event={"ID":"fac1295f-5189-4137-8365-42fb46ca2803","Type":"ContainerDied","Data":"68a4dbfad2b24d0a3afbcf0c2604cf90bb491673543e78df185b34e511252db6"}
Mar 09 19:03:28 crc kubenswrapper[4821]: I0309 19:03:28.656371 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"0277ce9b-9597-40cc-9339-51cf5dc9d98d","Type":"ContainerStarted","Data":"073e829cc3bb5e0ed29923cc76ad08477cb45ff6c33339ddec821b3cfa421c02"}
Mar 09 19:03:28 crc kubenswrapper[4821]: I0309 19:03:28.656429 4821 kubelet.go:2453]
"SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"0277ce9b-9597-40cc-9339-51cf5dc9d98d","Type":"ContainerStarted","Data":"11ed0c4907af19ccc57152ef68b83d307fcd2463465588ca7b98988865cde579"} Mar 09 19:03:28 crc kubenswrapper[4821]: I0309 19:03:28.657160 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/memcached-0" Mar 09 19:03:28 crc kubenswrapper[4821]: I0309 19:03:28.698792 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/memcached-0" podStartSLOduration=2.698770058 podStartE2EDuration="2.698770058s" podCreationTimestamp="2026-03-09 19:03:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:03:28.696156057 +0000 UTC m=+2345.857531923" watchObservedRunningTime="2026-03-09 19:03:28.698770058 +0000 UTC m=+2345.860145914" Mar 09 19:03:28 crc kubenswrapper[4821]: I0309 19:03:28.773652 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:03:29 crc kubenswrapper[4821]: I0309 19:03:29.560949 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="532b607d-3792-4170-9eb9-626dd085ef32" path="/var/lib/kubelet/pods/532b607d-3792-4170-9eb9-626dd085ef32/volumes" Mar 09 19:03:29 crc kubenswrapper[4821]: I0309 19:03:29.665535 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"95c56b60-91b1-4c38-add2-fe40d7fa8d90","Type":"ContainerStarted","Data":"87e00a92e38cd5a9c9e83130e5f5dc161a60686397c3eedefe7772ce5ab5618e"} Mar 09 19:03:29 crc kubenswrapper[4821]: I0309 19:03:29.665596 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
event={"ID":"95c56b60-91b1-4c38-add2-fe40d7fa8d90","Type":"ContainerStarted","Data":"b2cf09dce0882ea5b2d4f249a75d83a6fccbd76853004bc7d9944912bbb41ef8"} Mar 09 19:03:29 crc kubenswrapper[4821]: I0309 19:03:29.913626 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:29 crc kubenswrapper[4821]: E0309 19:03:29.948479 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f54ab9c7bf5c668a0722d331e3557917d0d7d2ef0099a9667411a0a5dc912a99" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Mar 09 19:03:29 crc kubenswrapper[4821]: E0309 19:03:29.954971 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f54ab9c7bf5c668a0722d331e3557917d0d7d2ef0099a9667411a0a5dc912a99" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Mar 09 19:03:29 crc kubenswrapper[4821]: E0309 19:03:29.956625 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f54ab9c7bf5c668a0722d331e3557917d0d7d2ef0099a9667411a0a5dc912a99" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Mar 09 19:03:29 crc kubenswrapper[4821]: E0309 19:03:29.956654 4821 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="218746cc-be8f-46c9-8c0d-c2256ad6b705" containerName="watcher-decision-engine" Mar 09 19:03:29 crc 
kubenswrapper[4821]: I0309 19:03:29.964817 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.964798532 podStartE2EDuration="2.964798532s" podCreationTimestamp="2026-03-09 19:03:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:03:29.688846652 +0000 UTC m=+2346.850222508" watchObservedRunningTime="2026-03-09 19:03:29.964798532 +0000 UTC m=+2347.126174388" Mar 09 19:03:29 crc kubenswrapper[4821]: I0309 19:03:29.973604 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-wcggh" Mar 09 19:03:30 crc kubenswrapper[4821]: I0309 19:03:30.106261 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-cert-memcached-mtls\") pod \"fac1295f-5189-4137-8365-42fb46ca2803\" (UID: \"fac1295f-5189-4137-8365-42fb46ca2803\") " Mar 09 19:03:30 crc kubenswrapper[4821]: I0309 19:03:30.106413 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m69wm\" (UniqueName: \"kubernetes.io/projected/fac1295f-5189-4137-8365-42fb46ca2803-kube-api-access-m69wm\") pod \"fac1295f-5189-4137-8365-42fb46ca2803\" (UID: \"fac1295f-5189-4137-8365-42fb46ca2803\") " Mar 09 19:03:30 crc kubenswrapper[4821]: I0309 19:03:30.106492 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-config-data\") pod \"fac1295f-5189-4137-8365-42fb46ca2803\" (UID: \"fac1295f-5189-4137-8365-42fb46ca2803\") " Mar 09 19:03:30 crc kubenswrapper[4821]: I0309 19:03:30.106538 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-scripts\") pod \"fac1295f-5189-4137-8365-42fb46ca2803\" (UID: \"fac1295f-5189-4137-8365-42fb46ca2803\") " Mar 09 19:03:30 crc kubenswrapper[4821]: I0309 19:03:30.106557 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-credential-keys\") pod \"fac1295f-5189-4137-8365-42fb46ca2803\" (UID: \"fac1295f-5189-4137-8365-42fb46ca2803\") " Mar 09 19:03:30 crc kubenswrapper[4821]: I0309 19:03:30.106606 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-combined-ca-bundle\") pod \"fac1295f-5189-4137-8365-42fb46ca2803\" (UID: \"fac1295f-5189-4137-8365-42fb46ca2803\") " Mar 09 19:03:30 crc kubenswrapper[4821]: I0309 19:03:30.106662 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-fernet-keys\") pod \"fac1295f-5189-4137-8365-42fb46ca2803\" (UID: \"fac1295f-5189-4137-8365-42fb46ca2803\") " Mar 09 19:03:30 crc kubenswrapper[4821]: I0309 19:03:30.111917 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "fac1295f-5189-4137-8365-42fb46ca2803" (UID: "fac1295f-5189-4137-8365-42fb46ca2803"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:03:30 crc kubenswrapper[4821]: I0309 19:03:30.112023 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fac1295f-5189-4137-8365-42fb46ca2803-kube-api-access-m69wm" (OuterVolumeSpecName: "kube-api-access-m69wm") pod "fac1295f-5189-4137-8365-42fb46ca2803" (UID: "fac1295f-5189-4137-8365-42fb46ca2803"). InnerVolumeSpecName "kube-api-access-m69wm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:03:30 crc kubenswrapper[4821]: I0309 19:03:30.112231 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-scripts" (OuterVolumeSpecName: "scripts") pod "fac1295f-5189-4137-8365-42fb46ca2803" (UID: "fac1295f-5189-4137-8365-42fb46ca2803"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:03:30 crc kubenswrapper[4821]: I0309 19:03:30.124557 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "fac1295f-5189-4137-8365-42fb46ca2803" (UID: "fac1295f-5189-4137-8365-42fb46ca2803"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:03:30 crc kubenswrapper[4821]: I0309 19:03:30.130112 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fac1295f-5189-4137-8365-42fb46ca2803" (UID: "fac1295f-5189-4137-8365-42fb46ca2803"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:03:30 crc kubenswrapper[4821]: I0309 19:03:30.130953 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-config-data" (OuterVolumeSpecName: "config-data") pod "fac1295f-5189-4137-8365-42fb46ca2803" (UID: "fac1295f-5189-4137-8365-42fb46ca2803"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:03:30 crc kubenswrapper[4821]: I0309 19:03:30.195289 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "fac1295f-5189-4137-8365-42fb46ca2803" (UID: "fac1295f-5189-4137-8365-42fb46ca2803"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:03:30 crc kubenswrapper[4821]: I0309 19:03:30.208793 4821 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:30 crc kubenswrapper[4821]: I0309 19:03:30.208831 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m69wm\" (UniqueName: \"kubernetes.io/projected/fac1295f-5189-4137-8365-42fb46ca2803-kube-api-access-m69wm\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:30 crc kubenswrapper[4821]: I0309 19:03:30.208848 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:30 crc kubenswrapper[4821]: I0309 19:03:30.208859 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-scripts\") on node \"crc\" DevicePath \"\"" 
Mar 09 19:03:30 crc kubenswrapper[4821]: I0309 19:03:30.208869 4821 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:30 crc kubenswrapper[4821]: I0309 19:03:30.208880 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:30 crc kubenswrapper[4821]: I0309 19:03:30.208889 4821 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fac1295f-5189-4137-8365-42fb46ca2803-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:30 crc kubenswrapper[4821]: I0309 19:03:30.675701 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-wcggh" Mar 09 19:03:30 crc kubenswrapper[4821]: I0309 19:03:30.675694 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-wcggh" event={"ID":"fac1295f-5189-4137-8365-42fb46ca2803","Type":"ContainerDied","Data":"5dc6ebf0b29889c05e85d86e7c7e832a1a288c74ce5183c5f79a9b066238be1d"} Mar 09 19:03:30 crc kubenswrapper[4821]: I0309 19:03:30.675815 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5dc6ebf0b29889c05e85d86e7c7e832a1a288c74ce5183c5f79a9b066238be1d" Mar 09 19:03:31 crc kubenswrapper[4821]: I0309 19:03:31.296909 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.025469 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/memcached-0" Mar 09 19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.159903 4821 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["watcher-kuttl-default/keystone-6d45c85556-w6k7b"] Mar 09 19:03:32 crc kubenswrapper[4821]: E0309 19:03:32.160418 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fac1295f-5189-4137-8365-42fb46ca2803" containerName="keystone-bootstrap" Mar 09 19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.160438 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="fac1295f-5189-4137-8365-42fb46ca2803" containerName="keystone-bootstrap" Mar 09 19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.160672 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="fac1295f-5189-4137-8365-42fb46ca2803" containerName="keystone-bootstrap" Mar 09 19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.161461 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" Mar 09 19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.177570 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-6d45c85556-w6k7b"] Mar 09 19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.241395 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/eef0c4bd-2bde-490b-872a-eda5cac560eb-cert-memcached-mtls\") pod \"keystone-6d45c85556-w6k7b\" (UID: \"eef0c4bd-2bde-490b-872a-eda5cac560eb\") " pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" Mar 09 19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.241454 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eef0c4bd-2bde-490b-872a-eda5cac560eb-config-data\") pod \"keystone-6d45c85556-w6k7b\" (UID: \"eef0c4bd-2bde-490b-872a-eda5cac560eb\") " pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" Mar 09 19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.241484 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eef0c4bd-2bde-490b-872a-eda5cac560eb-internal-tls-certs\") pod \"keystone-6d45c85556-w6k7b\" (UID: \"eef0c4bd-2bde-490b-872a-eda5cac560eb\") " pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" Mar 09 19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.241504 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fvk2\" (UniqueName: \"kubernetes.io/projected/eef0c4bd-2bde-490b-872a-eda5cac560eb-kube-api-access-8fvk2\") pod \"keystone-6d45c85556-w6k7b\" (UID: \"eef0c4bd-2bde-490b-872a-eda5cac560eb\") " pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" Mar 09 19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.241521 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eef0c4bd-2bde-490b-872a-eda5cac560eb-public-tls-certs\") pod \"keystone-6d45c85556-w6k7b\" (UID: \"eef0c4bd-2bde-490b-872a-eda5cac560eb\") " pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" Mar 09 19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.241630 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eef0c4bd-2bde-490b-872a-eda5cac560eb-fernet-keys\") pod \"keystone-6d45c85556-w6k7b\" (UID: \"eef0c4bd-2bde-490b-872a-eda5cac560eb\") " pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" Mar 09 19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.241815 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eef0c4bd-2bde-490b-872a-eda5cac560eb-credential-keys\") pod \"keystone-6d45c85556-w6k7b\" (UID: \"eef0c4bd-2bde-490b-872a-eda5cac560eb\") " pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" Mar 09 
19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.241853 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef0c4bd-2bde-490b-872a-eda5cac560eb-combined-ca-bundle\") pod \"keystone-6d45c85556-w6k7b\" (UID: \"eef0c4bd-2bde-490b-872a-eda5cac560eb\") " pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" Mar 09 19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.241986 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eef0c4bd-2bde-490b-872a-eda5cac560eb-scripts\") pod \"keystone-6d45c85556-w6k7b\" (UID: \"eef0c4bd-2bde-490b-872a-eda5cac560eb\") " pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" Mar 09 19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.343926 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eef0c4bd-2bde-490b-872a-eda5cac560eb-credential-keys\") pod \"keystone-6d45c85556-w6k7b\" (UID: \"eef0c4bd-2bde-490b-872a-eda5cac560eb\") " pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" Mar 09 19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.343971 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef0c4bd-2bde-490b-872a-eda5cac560eb-combined-ca-bundle\") pod \"keystone-6d45c85556-w6k7b\" (UID: \"eef0c4bd-2bde-490b-872a-eda5cac560eb\") " pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" Mar 09 19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.344003 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eef0c4bd-2bde-490b-872a-eda5cac560eb-scripts\") pod \"keystone-6d45c85556-w6k7b\" (UID: \"eef0c4bd-2bde-490b-872a-eda5cac560eb\") " pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" Mar 09 19:03:32 
crc kubenswrapper[4821]: I0309 19:03:32.344054 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/eef0c4bd-2bde-490b-872a-eda5cac560eb-cert-memcached-mtls\") pod \"keystone-6d45c85556-w6k7b\" (UID: \"eef0c4bd-2bde-490b-872a-eda5cac560eb\") " pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" Mar 09 19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.344088 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eef0c4bd-2bde-490b-872a-eda5cac560eb-config-data\") pod \"keystone-6d45c85556-w6k7b\" (UID: \"eef0c4bd-2bde-490b-872a-eda5cac560eb\") " pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" Mar 09 19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.344114 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eef0c4bd-2bde-490b-872a-eda5cac560eb-internal-tls-certs\") pod \"keystone-6d45c85556-w6k7b\" (UID: \"eef0c4bd-2bde-490b-872a-eda5cac560eb\") " pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" Mar 09 19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.344130 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fvk2\" (UniqueName: \"kubernetes.io/projected/eef0c4bd-2bde-490b-872a-eda5cac560eb-kube-api-access-8fvk2\") pod \"keystone-6d45c85556-w6k7b\" (UID: \"eef0c4bd-2bde-490b-872a-eda5cac560eb\") " pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" Mar 09 19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.344148 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eef0c4bd-2bde-490b-872a-eda5cac560eb-public-tls-certs\") pod \"keystone-6d45c85556-w6k7b\" (UID: \"eef0c4bd-2bde-490b-872a-eda5cac560eb\") " pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" Mar 09 19:03:32 crc 
kubenswrapper[4821]: I0309 19:03:32.344179 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eef0c4bd-2bde-490b-872a-eda5cac560eb-fernet-keys\") pod \"keystone-6d45c85556-w6k7b\" (UID: \"eef0c4bd-2bde-490b-872a-eda5cac560eb\") " pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" Mar 09 19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.351746 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eef0c4bd-2bde-490b-872a-eda5cac560eb-public-tls-certs\") pod \"keystone-6d45c85556-w6k7b\" (UID: \"eef0c4bd-2bde-490b-872a-eda5cac560eb\") " pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" Mar 09 19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.355889 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/eef0c4bd-2bde-490b-872a-eda5cac560eb-cert-memcached-mtls\") pod \"keystone-6d45c85556-w6k7b\" (UID: \"eef0c4bd-2bde-490b-872a-eda5cac560eb\") " pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" Mar 09 19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.359961 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eef0c4bd-2bde-490b-872a-eda5cac560eb-config-data\") pod \"keystone-6d45c85556-w6k7b\" (UID: \"eef0c4bd-2bde-490b-872a-eda5cac560eb\") " pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" Mar 09 19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.363003 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef0c4bd-2bde-490b-872a-eda5cac560eb-combined-ca-bundle\") pod \"keystone-6d45c85556-w6k7b\" (UID: \"eef0c4bd-2bde-490b-872a-eda5cac560eb\") " pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" Mar 09 19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.364847 4821 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eef0c4bd-2bde-490b-872a-eda5cac560eb-internal-tls-certs\") pod \"keystone-6d45c85556-w6k7b\" (UID: \"eef0c4bd-2bde-490b-872a-eda5cac560eb\") " pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" Mar 09 19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.367961 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eef0c4bd-2bde-490b-872a-eda5cac560eb-credential-keys\") pod \"keystone-6d45c85556-w6k7b\" (UID: \"eef0c4bd-2bde-490b-872a-eda5cac560eb\") " pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" Mar 09 19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.377680 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eef0c4bd-2bde-490b-872a-eda5cac560eb-scripts\") pod \"keystone-6d45c85556-w6k7b\" (UID: \"eef0c4bd-2bde-490b-872a-eda5cac560eb\") " pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" Mar 09 19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.379907 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eef0c4bd-2bde-490b-872a-eda5cac560eb-fernet-keys\") pod \"keystone-6d45c85556-w6k7b\" (UID: \"eef0c4bd-2bde-490b-872a-eda5cac560eb\") " pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" Mar 09 19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.382860 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fvk2\" (UniqueName: \"kubernetes.io/projected/eef0c4bd-2bde-490b-872a-eda5cac560eb-kube-api-access-8fvk2\") pod \"keystone-6d45c85556-w6k7b\" (UID: \"eef0c4bd-2bde-490b-872a-eda5cac560eb\") " pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" Mar 09 19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.514761 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" Mar 09 19:03:32 crc kubenswrapper[4821]: I0309 19:03:32.984052 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-6d45c85556-w6k7b"] Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.309937 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.436857 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.575955 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/218746cc-be8f-46c9-8c0d-c2256ad6b705-combined-ca-bundle\") pod \"218746cc-be8f-46c9-8c0d-c2256ad6b705\" (UID: \"218746cc-be8f-46c9-8c0d-c2256ad6b705\") " Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.576113 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/218746cc-be8f-46c9-8c0d-c2256ad6b705-custom-prometheus-ca\") pod \"218746cc-be8f-46c9-8c0d-c2256ad6b705\" (UID: \"218746cc-be8f-46c9-8c0d-c2256ad6b705\") " Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.576175 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/218746cc-be8f-46c9-8c0d-c2256ad6b705-logs\") pod \"218746cc-be8f-46c9-8c0d-c2256ad6b705\" (UID: \"218746cc-be8f-46c9-8c0d-c2256ad6b705\") " Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.576460 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/218746cc-be8f-46c9-8c0d-c2256ad6b705-logs" (OuterVolumeSpecName: "logs") pod "218746cc-be8f-46c9-8c0d-c2256ad6b705" (UID: 
"218746cc-be8f-46c9-8c0d-c2256ad6b705"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.576591 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/218746cc-be8f-46c9-8c0d-c2256ad6b705-config-data\") pod \"218746cc-be8f-46c9-8c0d-c2256ad6b705\" (UID: \"218746cc-be8f-46c9-8c0d-c2256ad6b705\") " Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.576870 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55vjp\" (UniqueName: \"kubernetes.io/projected/218746cc-be8f-46c9-8c0d-c2256ad6b705-kube-api-access-55vjp\") pod \"218746cc-be8f-46c9-8c0d-c2256ad6b705\" (UID: \"218746cc-be8f-46c9-8c0d-c2256ad6b705\") " Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.577260 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/218746cc-be8f-46c9-8c0d-c2256ad6b705-logs\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.599754 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/218746cc-be8f-46c9-8c0d-c2256ad6b705-kube-api-access-55vjp" (OuterVolumeSpecName: "kube-api-access-55vjp") pod "218746cc-be8f-46c9-8c0d-c2256ad6b705" (UID: "218746cc-be8f-46c9-8c0d-c2256ad6b705"). InnerVolumeSpecName "kube-api-access-55vjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.606992 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/218746cc-be8f-46c9-8c0d-c2256ad6b705-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "218746cc-be8f-46c9-8c0d-c2256ad6b705" (UID: "218746cc-be8f-46c9-8c0d-c2256ad6b705"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.610488 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/218746cc-be8f-46c9-8c0d-c2256ad6b705-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "218746cc-be8f-46c9-8c0d-c2256ad6b705" (UID: "218746cc-be8f-46c9-8c0d-c2256ad6b705"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.619501 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/218746cc-be8f-46c9-8c0d-c2256ad6b705-config-data" (OuterVolumeSpecName: "config-data") pod "218746cc-be8f-46c9-8c0d-c2256ad6b705" (UID: "218746cc-be8f-46c9-8c0d-c2256ad6b705"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.680826 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/218746cc-be8f-46c9-8c0d-c2256ad6b705-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.680858 4821 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/218746cc-be8f-46c9-8c0d-c2256ad6b705-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.680867 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/218746cc-be8f-46c9-8c0d-c2256ad6b705-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.680875 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55vjp\" (UniqueName: \"kubernetes.io/projected/218746cc-be8f-46c9-8c0d-c2256ad6b705-kube-api-access-55vjp\") on node 
\"crc\" DevicePath \"\"" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.714217 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" event={"ID":"eef0c4bd-2bde-490b-872a-eda5cac560eb","Type":"ContainerStarted","Data":"77bad1b6d8354c5ee7b3fae48d4968cb2d8499b0e182c26e9a77cca237e8bf50"} Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.714266 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" event={"ID":"eef0c4bd-2bde-490b-872a-eda5cac560eb","Type":"ContainerStarted","Data":"d4cce0d86a067f3c705055ae53526cea9626a19bd7315877261f36b3ac8c5f21"} Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.715473 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.716939 4821 generic.go:334] "Generic (PLEG): container finished" podID="218746cc-be8f-46c9-8c0d-c2256ad6b705" containerID="f54ab9c7bf5c668a0722d331e3557917d0d7d2ef0099a9667411a0a5dc912a99" exitCode=0 Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.716970 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"218746cc-be8f-46c9-8c0d-c2256ad6b705","Type":"ContainerDied","Data":"f54ab9c7bf5c668a0722d331e3557917d0d7d2ef0099a9667411a0a5dc912a99"} Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.716995 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"218746cc-be8f-46c9-8c0d-c2256ad6b705","Type":"ContainerDied","Data":"9dfc79e37fd8bf9d972fac6927b1f0b2467e237032be1d221daa768e7e3d70d6"} Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.717011 4821 scope.go:117] "RemoveContainer" containerID="f54ab9c7bf5c668a0722d331e3557917d0d7d2ef0099a9667411a0a5dc912a99" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 
19:03:33.717048 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.739763 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" podStartSLOduration=1.739710637 podStartE2EDuration="1.739710637s" podCreationTimestamp="2026-03-09 19:03:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:03:33.734472105 +0000 UTC m=+2350.895847981" watchObservedRunningTime="2026-03-09 19:03:33.739710637 +0000 UTC m=+2350.901086523" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.753544 4821 scope.go:117] "RemoveContainer" containerID="f54ab9c7bf5c668a0722d331e3557917d0d7d2ef0099a9667411a0a5dc912a99" Mar 09 19:03:33 crc kubenswrapper[4821]: E0309 19:03:33.754067 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f54ab9c7bf5c668a0722d331e3557917d0d7d2ef0099a9667411a0a5dc912a99\": container with ID starting with f54ab9c7bf5c668a0722d331e3557917d0d7d2ef0099a9667411a0a5dc912a99 not found: ID does not exist" containerID="f54ab9c7bf5c668a0722d331e3557917d0d7d2ef0099a9667411a0a5dc912a99" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.754109 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f54ab9c7bf5c668a0722d331e3557917d0d7d2ef0099a9667411a0a5dc912a99"} err="failed to get container status \"f54ab9c7bf5c668a0722d331e3557917d0d7d2ef0099a9667411a0a5dc912a99\": rpc error: code = NotFound desc = could not find container \"f54ab9c7bf5c668a0722d331e3557917d0d7d2ef0099a9667411a0a5dc912a99\": container with ID starting with f54ab9c7bf5c668a0722d331e3557917d0d7d2ef0099a9667411a0a5dc912a99 not found: ID does not exist" Mar 09 19:03:33 crc 
kubenswrapper[4821]: I0309 19:03:33.762555 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.778390 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.788505 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:03:33 crc kubenswrapper[4821]: E0309 19:03:33.788943 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="218746cc-be8f-46c9-8c0d-c2256ad6b705" containerName="watcher-decision-engine" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.788962 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="218746cc-be8f-46c9-8c0d-c2256ad6b705" containerName="watcher-decision-engine" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.789118 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="218746cc-be8f-46c9-8c0d-c2256ad6b705" containerName="watcher-decision-engine" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.789694 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.791887 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.794786 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.886362 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70230fb9-ea53-49ee-b54e-b6368951899c-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"70230fb9-ea53-49ee-b54e-b6368951899c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.886458 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70230fb9-ea53-49ee-b54e-b6368951899c-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"70230fb9-ea53-49ee-b54e-b6368951899c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.886541 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wslp\" (UniqueName: \"kubernetes.io/projected/70230fb9-ea53-49ee-b54e-b6368951899c-kube-api-access-7wslp\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"70230fb9-ea53-49ee-b54e-b6368951899c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.886586 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/70230fb9-ea53-49ee-b54e-b6368951899c-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"70230fb9-ea53-49ee-b54e-b6368951899c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.886606 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/70230fb9-ea53-49ee-b54e-b6368951899c-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"70230fb9-ea53-49ee-b54e-b6368951899c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.886633 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/70230fb9-ea53-49ee-b54e-b6368951899c-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"70230fb9-ea53-49ee-b54e-b6368951899c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.988466 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70230fb9-ea53-49ee-b54e-b6368951899c-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"70230fb9-ea53-49ee-b54e-b6368951899c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.988525 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/70230fb9-ea53-49ee-b54e-b6368951899c-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"70230fb9-ea53-49ee-b54e-b6368951899c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.988569 4821 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/70230fb9-ea53-49ee-b54e-b6368951899c-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"70230fb9-ea53-49ee-b54e-b6368951899c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.988606 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70230fb9-ea53-49ee-b54e-b6368951899c-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"70230fb9-ea53-49ee-b54e-b6368951899c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.988689 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70230fb9-ea53-49ee-b54e-b6368951899c-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"70230fb9-ea53-49ee-b54e-b6368951899c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.988765 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wslp\" (UniqueName: \"kubernetes.io/projected/70230fb9-ea53-49ee-b54e-b6368951899c-kube-api-access-7wslp\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"70230fb9-ea53-49ee-b54e-b6368951899c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.989473 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70230fb9-ea53-49ee-b54e-b6368951899c-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"70230fb9-ea53-49ee-b54e-b6368951899c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.993245 4821 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/70230fb9-ea53-49ee-b54e-b6368951899c-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"70230fb9-ea53-49ee-b54e-b6368951899c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.994310 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70230fb9-ea53-49ee-b54e-b6368951899c-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"70230fb9-ea53-49ee-b54e-b6368951899c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.996577 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/70230fb9-ea53-49ee-b54e-b6368951899c-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"70230fb9-ea53-49ee-b54e-b6368951899c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:03:33 crc kubenswrapper[4821]: I0309 19:03:33.997608 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70230fb9-ea53-49ee-b54e-b6368951899c-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"70230fb9-ea53-49ee-b54e-b6368951899c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:03:34 crc kubenswrapper[4821]: I0309 19:03:34.007619 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wslp\" (UniqueName: \"kubernetes.io/projected/70230fb9-ea53-49ee-b54e-b6368951899c-kube-api-access-7wslp\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"70230fb9-ea53-49ee-b54e-b6368951899c\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:03:34 crc kubenswrapper[4821]: I0309 
19:03:34.103406 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:03:34 crc kubenswrapper[4821]: I0309 19:03:34.567106 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:03:34 crc kubenswrapper[4821]: I0309 19:03:34.734684 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"70230fb9-ea53-49ee-b54e-b6368951899c","Type":"ContainerStarted","Data":"0e6315f9b2f359a751cc521a983904e68dd09cb1039720a6ee674da09a31fd77"} Mar 09 19:03:35 crc kubenswrapper[4821]: I0309 19:03:35.577592 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="218746cc-be8f-46c9-8c0d-c2256ad6b705" path="/var/lib/kubelet/pods/218746cc-be8f-46c9-8c0d-c2256ad6b705/volumes" Mar 09 19:03:35 crc kubenswrapper[4821]: I0309 19:03:35.747798 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"70230fb9-ea53-49ee-b54e-b6368951899c","Type":"ContainerStarted","Data":"f74db4d97e2f9b1b065ed3ac43749cb499cc1d4dd7e387a3b7384d080c8df447"} Mar 09 19:03:35 crc kubenswrapper[4821]: I0309 19:03:35.771525 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.771479487 podStartE2EDuration="2.771479487s" podCreationTimestamp="2026-03-09 19:03:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:03:35.765676958 +0000 UTC m=+2352.927052834" watchObservedRunningTime="2026-03-09 19:03:35.771479487 +0000 UTC m=+2352.932855363" Mar 09 19:03:36 crc kubenswrapper[4821]: I0309 19:03:36.297660 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:36 crc kubenswrapper[4821]: I0309 19:03:36.315173 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:36 crc kubenswrapper[4821]: I0309 19:03:36.768152 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:38 crc kubenswrapper[4821]: I0309 19:03:38.310418 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:03:38 crc kubenswrapper[4821]: I0309 19:03:38.334064 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:03:38 crc kubenswrapper[4821]: I0309 19:03:38.810728 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:03:39 crc kubenswrapper[4821]: I0309 19:03:39.562197 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:03:39 crc kubenswrapper[4821]: I0309 19:03:39.562440 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="bbbddb2a-58f6-4096-b141-94c344dbc50a" containerName="watcher-kuttl-api-log" containerID="cri-o://10de1f65322e342235c84c094444d0dc199058e5d189fac737bf8227963e3e08" gracePeriod=30 Mar 09 19:03:39 crc kubenswrapper[4821]: I0309 19:03:39.562573 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="bbbddb2a-58f6-4096-b141-94c344dbc50a" containerName="watcher-api" containerID="cri-o://adbb43d5dd40436e7431bbd0cc68b2b28f249dadfa98f6903fe1bbea7672eb66" gracePeriod=30 Mar 09 19:03:39 crc kubenswrapper[4821]: I0309 19:03:39.791081 4821 generic.go:334] "Generic (PLEG): container 
finished" podID="bbbddb2a-58f6-4096-b141-94c344dbc50a" containerID="10de1f65322e342235c84c094444d0dc199058e5d189fac737bf8227963e3e08" exitCode=143 Mar 09 19:03:39 crc kubenswrapper[4821]: I0309 19:03:39.791181 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"bbbddb2a-58f6-4096-b141-94c344dbc50a","Type":"ContainerDied","Data":"10de1f65322e342235c84c094444d0dc199058e5d189fac737bf8227963e3e08"} Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.450780 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.609833 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6nkg\" (UniqueName: \"kubernetes.io/projected/bbbddb2a-58f6-4096-b141-94c344dbc50a-kube-api-access-t6nkg\") pod \"bbbddb2a-58f6-4096-b141-94c344dbc50a\" (UID: \"bbbddb2a-58f6-4096-b141-94c344dbc50a\") " Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.609956 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-custom-prometheus-ca\") pod \"bbbddb2a-58f6-4096-b141-94c344dbc50a\" (UID: \"bbbddb2a-58f6-4096-b141-94c344dbc50a\") " Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.610796 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-cert-memcached-mtls\") pod \"bbbddb2a-58f6-4096-b141-94c344dbc50a\" (UID: \"bbbddb2a-58f6-4096-b141-94c344dbc50a\") " Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.610868 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-config-data\") 
pod \"bbbddb2a-58f6-4096-b141-94c344dbc50a\" (UID: \"bbbddb2a-58f6-4096-b141-94c344dbc50a\") " Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.610980 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-internal-tls-certs\") pod \"bbbddb2a-58f6-4096-b141-94c344dbc50a\" (UID: \"bbbddb2a-58f6-4096-b141-94c344dbc50a\") " Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.611056 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-combined-ca-bundle\") pod \"bbbddb2a-58f6-4096-b141-94c344dbc50a\" (UID: \"bbbddb2a-58f6-4096-b141-94c344dbc50a\") " Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.611088 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbbddb2a-58f6-4096-b141-94c344dbc50a-logs\") pod \"bbbddb2a-58f6-4096-b141-94c344dbc50a\" (UID: \"bbbddb2a-58f6-4096-b141-94c344dbc50a\") " Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.611111 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-public-tls-certs\") pod \"bbbddb2a-58f6-4096-b141-94c344dbc50a\" (UID: \"bbbddb2a-58f6-4096-b141-94c344dbc50a\") " Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.612011 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbbddb2a-58f6-4096-b141-94c344dbc50a-logs" (OuterVolumeSpecName: "logs") pod "bbbddb2a-58f6-4096-b141-94c344dbc50a" (UID: "bbbddb2a-58f6-4096-b141-94c344dbc50a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.624574 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbbddb2a-58f6-4096-b141-94c344dbc50a-kube-api-access-t6nkg" (OuterVolumeSpecName: "kube-api-access-t6nkg") pod "bbbddb2a-58f6-4096-b141-94c344dbc50a" (UID: "bbbddb2a-58f6-4096-b141-94c344dbc50a"). InnerVolumeSpecName "kube-api-access-t6nkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.633314 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbbddb2a-58f6-4096-b141-94c344dbc50a" (UID: "bbbddb2a-58f6-4096-b141-94c344dbc50a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.637037 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "bbbddb2a-58f6-4096-b141-94c344dbc50a" (UID: "bbbddb2a-58f6-4096-b141-94c344dbc50a"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.663563 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bbbddb2a-58f6-4096-b141-94c344dbc50a" (UID: "bbbddb2a-58f6-4096-b141-94c344dbc50a"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.673531 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-config-data" (OuterVolumeSpecName: "config-data") pod "bbbddb2a-58f6-4096-b141-94c344dbc50a" (UID: "bbbddb2a-58f6-4096-b141-94c344dbc50a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.682589 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "bbbddb2a-58f6-4096-b141-94c344dbc50a" (UID: "bbbddb2a-58f6-4096-b141-94c344dbc50a"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.697902 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bbbddb2a-58f6-4096-b141-94c344dbc50a" (UID: "bbbddb2a-58f6-4096-b141-94c344dbc50a"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.712789 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.712835 4821 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.712852 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.712867 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbbddb2a-58f6-4096-b141-94c344dbc50a-logs\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.712879 4821 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.712890 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6nkg\" (UniqueName: \"kubernetes.io/projected/bbbddb2a-58f6-4096-b141-94c344dbc50a-kube-api-access-t6nkg\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.712902 4821 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.712913 
4821 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/bbbddb2a-58f6-4096-b141-94c344dbc50a-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.801613 4821 generic.go:334] "Generic (PLEG): container finished" podID="bbbddb2a-58f6-4096-b141-94c344dbc50a" containerID="adbb43d5dd40436e7431bbd0cc68b2b28f249dadfa98f6903fe1bbea7672eb66" exitCode=0 Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.801651 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"bbbddb2a-58f6-4096-b141-94c344dbc50a","Type":"ContainerDied","Data":"adbb43d5dd40436e7431bbd0cc68b2b28f249dadfa98f6903fe1bbea7672eb66"} Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.801706 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"bbbddb2a-58f6-4096-b141-94c344dbc50a","Type":"ContainerDied","Data":"d68bed5d98553295c0ead780a690e2832fc9ab84372f36160e01d1a29608b258"} Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.801732 4821 scope.go:117] "RemoveContainer" containerID="adbb43d5dd40436e7431bbd0cc68b2b28f249dadfa98f6903fe1bbea7672eb66" Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.801735 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.824203 4821 scope.go:117] "RemoveContainer" containerID="10de1f65322e342235c84c094444d0dc199058e5d189fac737bf8227963e3e08" Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.837473 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.857538 4821 scope.go:117] "RemoveContainer" containerID="adbb43d5dd40436e7431bbd0cc68b2b28f249dadfa98f6903fe1bbea7672eb66" Mar 09 19:03:40 crc kubenswrapper[4821]: E0309 19:03:40.871018 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adbb43d5dd40436e7431bbd0cc68b2b28f249dadfa98f6903fe1bbea7672eb66\": container with ID starting with adbb43d5dd40436e7431bbd0cc68b2b28f249dadfa98f6903fe1bbea7672eb66 not found: ID does not exist" containerID="adbb43d5dd40436e7431bbd0cc68b2b28f249dadfa98f6903fe1bbea7672eb66" Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.871082 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adbb43d5dd40436e7431bbd0cc68b2b28f249dadfa98f6903fe1bbea7672eb66"} err="failed to get container status \"adbb43d5dd40436e7431bbd0cc68b2b28f249dadfa98f6903fe1bbea7672eb66\": rpc error: code = NotFound desc = could not find container \"adbb43d5dd40436e7431bbd0cc68b2b28f249dadfa98f6903fe1bbea7672eb66\": container with ID starting with adbb43d5dd40436e7431bbd0cc68b2b28f249dadfa98f6903fe1bbea7672eb66 not found: ID does not exist" Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.871112 4821 scope.go:117] "RemoveContainer" containerID="10de1f65322e342235c84c094444d0dc199058e5d189fac737bf8227963e3e08" Mar 09 19:03:40 crc kubenswrapper[4821]: E0309 19:03:40.872080 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"10de1f65322e342235c84c094444d0dc199058e5d189fac737bf8227963e3e08\": container with ID starting with 10de1f65322e342235c84c094444d0dc199058e5d189fac737bf8227963e3e08 not found: ID does not exist" containerID="10de1f65322e342235c84c094444d0dc199058e5d189fac737bf8227963e3e08" Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.872100 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10de1f65322e342235c84c094444d0dc199058e5d189fac737bf8227963e3e08"} err="failed to get container status \"10de1f65322e342235c84c094444d0dc199058e5d189fac737bf8227963e3e08\": rpc error: code = NotFound desc = could not find container \"10de1f65322e342235c84c094444d0dc199058e5d189fac737bf8227963e3e08\": container with ID starting with 10de1f65322e342235c84c094444d0dc199058e5d189fac737bf8227963e3e08 not found: ID does not exist" Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.909458 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.917531 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:03:40 crc kubenswrapper[4821]: E0309 19:03:40.918116 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbbddb2a-58f6-4096-b141-94c344dbc50a" containerName="watcher-kuttl-api-log" Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.918138 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbbddb2a-58f6-4096-b141-94c344dbc50a" containerName="watcher-kuttl-api-log" Mar 09 19:03:40 crc kubenswrapper[4821]: E0309 19:03:40.918181 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbbddb2a-58f6-4096-b141-94c344dbc50a" containerName="watcher-api" Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.918191 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbbddb2a-58f6-4096-b141-94c344dbc50a" 
containerName="watcher-api" Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.918469 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbbddb2a-58f6-4096-b141-94c344dbc50a" containerName="watcher-kuttl-api-log" Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.918488 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbbddb2a-58f6-4096-b141-94c344dbc50a" containerName="watcher-api" Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.924668 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.927643 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:03:40 crc kubenswrapper[4821]: I0309 19:03:40.927829 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Mar 09 19:03:41 crc kubenswrapper[4821]: I0309 19:03:41.017429 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c83651-af16-4dd9-97fc-045c73b48650-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"c6c83651-af16-4dd9-97fc-045c73b48650\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:41 crc kubenswrapper[4821]: I0309 19:03:41.017784 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c6c83651-af16-4dd9-97fc-045c73b48650-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"c6c83651-af16-4dd9-97fc-045c73b48650\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:41 crc kubenswrapper[4821]: I0309 19:03:41.017819 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: 
\"kubernetes.io/secret/c6c83651-af16-4dd9-97fc-045c73b48650-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"c6c83651-af16-4dd9-97fc-045c73b48650\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:41 crc kubenswrapper[4821]: I0309 19:03:41.017877 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6c83651-af16-4dd9-97fc-045c73b48650-logs\") pod \"watcher-kuttl-api-0\" (UID: \"c6c83651-af16-4dd9-97fc-045c73b48650\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:41 crc kubenswrapper[4821]: I0309 19:03:41.018113 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c83651-af16-4dd9-97fc-045c73b48650-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"c6c83651-af16-4dd9-97fc-045c73b48650\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:41 crc kubenswrapper[4821]: I0309 19:03:41.018166 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg6n5\" (UniqueName: \"kubernetes.io/projected/c6c83651-af16-4dd9-97fc-045c73b48650-kube-api-access-zg6n5\") pod \"watcher-kuttl-api-0\" (UID: \"c6c83651-af16-4dd9-97fc-045c73b48650\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:41 crc kubenswrapper[4821]: I0309 19:03:41.119672 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c83651-af16-4dd9-97fc-045c73b48650-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"c6c83651-af16-4dd9-97fc-045c73b48650\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:41 crc kubenswrapper[4821]: I0309 19:03:41.119732 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg6n5\" (UniqueName: 
\"kubernetes.io/projected/c6c83651-af16-4dd9-97fc-045c73b48650-kube-api-access-zg6n5\") pod \"watcher-kuttl-api-0\" (UID: \"c6c83651-af16-4dd9-97fc-045c73b48650\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:41 crc kubenswrapper[4821]: I0309 19:03:41.119815 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c83651-af16-4dd9-97fc-045c73b48650-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"c6c83651-af16-4dd9-97fc-045c73b48650\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:41 crc kubenswrapper[4821]: I0309 19:03:41.119865 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c6c83651-af16-4dd9-97fc-045c73b48650-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"c6c83651-af16-4dd9-97fc-045c73b48650\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:41 crc kubenswrapper[4821]: I0309 19:03:41.119903 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c6c83651-af16-4dd9-97fc-045c73b48650-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"c6c83651-af16-4dd9-97fc-045c73b48650\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:41 crc kubenswrapper[4821]: I0309 19:03:41.119957 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6c83651-af16-4dd9-97fc-045c73b48650-logs\") pod \"watcher-kuttl-api-0\" (UID: \"c6c83651-af16-4dd9-97fc-045c73b48650\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:41 crc kubenswrapper[4821]: I0309 19:03:41.120533 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6c83651-af16-4dd9-97fc-045c73b48650-logs\") pod 
\"watcher-kuttl-api-0\" (UID: \"c6c83651-af16-4dd9-97fc-045c73b48650\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:41 crc kubenswrapper[4821]: I0309 19:03:41.124252 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c6c83651-af16-4dd9-97fc-045c73b48650-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"c6c83651-af16-4dd9-97fc-045c73b48650\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:41 crc kubenswrapper[4821]: I0309 19:03:41.124299 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c83651-af16-4dd9-97fc-045c73b48650-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"c6c83651-af16-4dd9-97fc-045c73b48650\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:41 crc kubenswrapper[4821]: I0309 19:03:41.125064 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c83651-af16-4dd9-97fc-045c73b48650-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"c6c83651-af16-4dd9-97fc-045c73b48650\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:41 crc kubenswrapper[4821]: I0309 19:03:41.125568 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c6c83651-af16-4dd9-97fc-045c73b48650-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"c6c83651-af16-4dd9-97fc-045c73b48650\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:41 crc kubenswrapper[4821]: I0309 19:03:41.144860 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg6n5\" (UniqueName: \"kubernetes.io/projected/c6c83651-af16-4dd9-97fc-045c73b48650-kube-api-access-zg6n5\") pod \"watcher-kuttl-api-0\" (UID: \"c6c83651-af16-4dd9-97fc-045c73b48650\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:41 crc kubenswrapper[4821]: I0309 19:03:41.249270 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:41 crc kubenswrapper[4821]: I0309 19:03:41.561577 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbbddb2a-58f6-4096-b141-94c344dbc50a" path="/var/lib/kubelet/pods/bbbddb2a-58f6-4096-b141-94c344dbc50a/volumes" Mar 09 19:03:41 crc kubenswrapper[4821]: W0309 19:03:41.828155 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6c83651_af16_4dd9_97fc_045c73b48650.slice/crio-439eaa303e79c172917b4a94a3f091d6484fd08b6804b5ca1403f6d8de17afab WatchSource:0}: Error finding container 439eaa303e79c172917b4a94a3f091d6484fd08b6804b5ca1403f6d8de17afab: Status 404 returned error can't find the container with id 439eaa303e79c172917b4a94a3f091d6484fd08b6804b5ca1403f6d8de17afab Mar 09 19:03:41 crc kubenswrapper[4821]: I0309 19:03:41.828282 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:03:42 crc kubenswrapper[4821]: I0309 19:03:42.819670 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c6c83651-af16-4dd9-97fc-045c73b48650","Type":"ContainerStarted","Data":"4cc4e1bbf7f9506f68e623a910ca3362fc3eccfa1f59a6216272e42f1f4874ea"} Mar 09 19:03:42 crc kubenswrapper[4821]: I0309 19:03:42.819942 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c6c83651-af16-4dd9-97fc-045c73b48650","Type":"ContainerStarted","Data":"f860501a6ff34e73dc6324d5725d71146a18b1aec72ab6d4a2d5dc14fb586949"} Mar 09 19:03:42 crc kubenswrapper[4821]: I0309 19:03:42.819953 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
event={"ID":"c6c83651-af16-4dd9-97fc-045c73b48650","Type":"ContainerStarted","Data":"439eaa303e79c172917b4a94a3f091d6484fd08b6804b5ca1403f6d8de17afab"} Mar 09 19:03:42 crc kubenswrapper[4821]: I0309 19:03:42.820391 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:42 crc kubenswrapper[4821]: I0309 19:03:42.847224 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.847201976 podStartE2EDuration="2.847201976s" podCreationTimestamp="2026-03-09 19:03:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:03:42.840560166 +0000 UTC m=+2360.001936012" watchObservedRunningTime="2026-03-09 19:03:42.847201976 +0000 UTC m=+2360.008577852" Mar 09 19:03:44 crc kubenswrapper[4821]: I0309 19:03:44.104539 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:03:44 crc kubenswrapper[4821]: I0309 19:03:44.135564 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:03:44 crc kubenswrapper[4821]: I0309 19:03:44.835398 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:03:44 crc kubenswrapper[4821]: I0309 19:03:44.857862 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:03:45 crc kubenswrapper[4821]: I0309 19:03:45.000036 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:45 crc kubenswrapper[4821]: I0309 19:03:45.865103 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:03:46 crc kubenswrapper[4821]: I0309 19:03:46.249729 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:48 crc kubenswrapper[4821]: I0309 19:03:48.272315 4821 scope.go:117] "RemoveContainer" containerID="24eec4726ae7f56a5ad0de69f8279f1cad1361b22a61142c612e765a006ccf53" Mar 09 19:03:51 crc kubenswrapper[4821]: I0309 19:03:51.250285 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:51 crc kubenswrapper[4821]: I0309 19:03:51.254401 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:03:51 crc kubenswrapper[4821]: I0309 19:03:51.898768 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:04:00 crc kubenswrapper[4821]: I0309 19:04:00.146682 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29551384-5cqh2"] Mar 09 19:04:00 crc kubenswrapper[4821]: I0309 19:04:00.149657 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29551384-5cqh2" Mar 09 19:04:00 crc kubenswrapper[4821]: I0309 19:04:00.152064 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 09 19:04:00 crc kubenswrapper[4821]: I0309 19:04:00.152218 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 09 19:04:00 crc kubenswrapper[4821]: I0309 19:04:00.152784 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-mxq7c" Mar 09 19:04:00 crc kubenswrapper[4821]: I0309 19:04:00.159237 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551384-5cqh2"] Mar 09 19:04:00 crc kubenswrapper[4821]: I0309 19:04:00.316806 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p99r7\" (UniqueName: \"kubernetes.io/projected/af908031-ae94-4542-a42f-45e4c17e69ae-kube-api-access-p99r7\") pod \"auto-csr-approver-29551384-5cqh2\" (UID: \"af908031-ae94-4542-a42f-45e4c17e69ae\") " pod="openshift-infra/auto-csr-approver-29551384-5cqh2" Mar 09 19:04:00 crc kubenswrapper[4821]: I0309 19:04:00.418836 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p99r7\" (UniqueName: \"kubernetes.io/projected/af908031-ae94-4542-a42f-45e4c17e69ae-kube-api-access-p99r7\") pod \"auto-csr-approver-29551384-5cqh2\" (UID: \"af908031-ae94-4542-a42f-45e4c17e69ae\") " pod="openshift-infra/auto-csr-approver-29551384-5cqh2" Mar 09 19:04:00 crc kubenswrapper[4821]: I0309 19:04:00.445218 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p99r7\" (UniqueName: \"kubernetes.io/projected/af908031-ae94-4542-a42f-45e4c17e69ae-kube-api-access-p99r7\") pod \"auto-csr-approver-29551384-5cqh2\" (UID: \"af908031-ae94-4542-a42f-45e4c17e69ae\") " 
pod="openshift-infra/auto-csr-approver-29551384-5cqh2" Mar 09 19:04:00 crc kubenswrapper[4821]: I0309 19:04:00.466116 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551384-5cqh2" Mar 09 19:04:00 crc kubenswrapper[4821]: I0309 19:04:00.936346 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551384-5cqh2"] Mar 09 19:04:00 crc kubenswrapper[4821]: I0309 19:04:00.990922 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551384-5cqh2" event={"ID":"af908031-ae94-4542-a42f-45e4c17e69ae","Type":"ContainerStarted","Data":"d78c99d654246aeb01b201860f46b4cdad2339455e32cf0d3a50c2fbc67cb6a9"} Mar 09 19:04:03 crc kubenswrapper[4821]: I0309 19:04:03.009028 4821 generic.go:334] "Generic (PLEG): container finished" podID="af908031-ae94-4542-a42f-45e4c17e69ae" containerID="7db01e802e33cdcaf4c936b120f2050c7ed7d60d7f55dc45397b7fa7fa489cd5" exitCode=0 Mar 09 19:04:03 crc kubenswrapper[4821]: I0309 19:04:03.009117 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551384-5cqh2" event={"ID":"af908031-ae94-4542-a42f-45e4c17e69ae","Type":"ContainerDied","Data":"7db01e802e33cdcaf4c936b120f2050c7ed7d60d7f55dc45397b7fa7fa489cd5"} Mar 09 19:04:03 crc kubenswrapper[4821]: I0309 19:04:03.981335 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/keystone-6d45c85556-w6k7b" Mar 09 19:04:04 crc kubenswrapper[4821]: I0309 19:04:04.057312 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-7774c4794c-f24tn"] Mar 09 19:04:04 crc kubenswrapper[4821]: I0309 19:04:04.057550 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/keystone-7774c4794c-f24tn" podUID="486686dc-8137-45ed-a509-0f5d3ade5ffb" containerName="keystone-api" 
containerID="cri-o://8663a3f5abbc1899a5bfbe491dab9a54584c9dc0644ef4982f41b04720217cea" gracePeriod=30 Mar 09 19:04:04 crc kubenswrapper[4821]: I0309 19:04:04.418820 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551384-5cqh2" Mar 09 19:04:04 crc kubenswrapper[4821]: I0309 19:04:04.589401 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p99r7\" (UniqueName: \"kubernetes.io/projected/af908031-ae94-4542-a42f-45e4c17e69ae-kube-api-access-p99r7\") pod \"af908031-ae94-4542-a42f-45e4c17e69ae\" (UID: \"af908031-ae94-4542-a42f-45e4c17e69ae\") " Mar 09 19:04:04 crc kubenswrapper[4821]: I0309 19:04:04.594338 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af908031-ae94-4542-a42f-45e4c17e69ae-kube-api-access-p99r7" (OuterVolumeSpecName: "kube-api-access-p99r7") pod "af908031-ae94-4542-a42f-45e4c17e69ae" (UID: "af908031-ae94-4542-a42f-45e4c17e69ae"). InnerVolumeSpecName "kube-api-access-p99r7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:04:04 crc kubenswrapper[4821]: I0309 19:04:04.690888 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p99r7\" (UniqueName: \"kubernetes.io/projected/af908031-ae94-4542-a42f-45e4c17e69ae-kube-api-access-p99r7\") on node \"crc\" DevicePath \"\"" Mar 09 19:04:05 crc kubenswrapper[4821]: I0309 19:04:05.029867 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551384-5cqh2" event={"ID":"af908031-ae94-4542-a42f-45e4c17e69ae","Type":"ContainerDied","Data":"d78c99d654246aeb01b201860f46b4cdad2339455e32cf0d3a50c2fbc67cb6a9"} Mar 09 19:04:05 crc kubenswrapper[4821]: I0309 19:04:05.029970 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d78c99d654246aeb01b201860f46b4cdad2339455e32cf0d3a50c2fbc67cb6a9" Mar 09 19:04:05 crc kubenswrapper[4821]: I0309 19:04:05.030074 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551384-5cqh2" Mar 09 19:04:05 crc kubenswrapper[4821]: I0309 19:04:05.516098 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29551378-vhjcn"] Mar 09 19:04:05 crc kubenswrapper[4821]: I0309 19:04:05.525262 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29551378-vhjcn"] Mar 09 19:04:05 crc kubenswrapper[4821]: I0309 19:04:05.561677 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e460f32-c47b-41a4-a5d6-cb5fa14e77bf" path="/var/lib/kubelet/pods/7e460f32-c47b-41a4-a5d6-cb5fa14e77bf/volumes" Mar 09 19:04:07 crc kubenswrapper[4821]: I0309 19:04:07.241638 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/keystone-7774c4794c-f24tn" podUID="486686dc-8137-45ed-a509-0f5d3ade5ffb" containerName="keystone-api" probeResult="failure" output="Get \"https://10.217.0.128:5000/v3\": read 
tcp 10.217.0.2:33930->10.217.0.128:5000: read: connection reset by peer" Mar 09 19:04:07 crc kubenswrapper[4821]: I0309 19:04:07.608743 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-7774c4794c-f24tn" Mar 09 19:04:07 crc kubenswrapper[4821]: I0309 19:04:07.768186 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-fernet-keys\") pod \"486686dc-8137-45ed-a509-0f5d3ade5ffb\" (UID: \"486686dc-8137-45ed-a509-0f5d3ade5ffb\") " Mar 09 19:04:07 crc kubenswrapper[4821]: I0309 19:04:07.768315 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-combined-ca-bundle\") pod \"486686dc-8137-45ed-a509-0f5d3ade5ffb\" (UID: \"486686dc-8137-45ed-a509-0f5d3ade5ffb\") " Mar 09 19:04:07 crc kubenswrapper[4821]: I0309 19:04:07.768417 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-config-data\") pod \"486686dc-8137-45ed-a509-0f5d3ade5ffb\" (UID: \"486686dc-8137-45ed-a509-0f5d3ade5ffb\") " Mar 09 19:04:07 crc kubenswrapper[4821]: I0309 19:04:07.768489 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ssk8\" (UniqueName: \"kubernetes.io/projected/486686dc-8137-45ed-a509-0f5d3ade5ffb-kube-api-access-4ssk8\") pod \"486686dc-8137-45ed-a509-0f5d3ade5ffb\" (UID: \"486686dc-8137-45ed-a509-0f5d3ade5ffb\") " Mar 09 19:04:07 crc kubenswrapper[4821]: I0309 19:04:07.768517 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-internal-tls-certs\") pod \"486686dc-8137-45ed-a509-0f5d3ade5ffb\" 
(UID: \"486686dc-8137-45ed-a509-0f5d3ade5ffb\") " Mar 09 19:04:07 crc kubenswrapper[4821]: I0309 19:04:07.768550 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-scripts\") pod \"486686dc-8137-45ed-a509-0f5d3ade5ffb\" (UID: \"486686dc-8137-45ed-a509-0f5d3ade5ffb\") " Mar 09 19:04:07 crc kubenswrapper[4821]: I0309 19:04:07.768620 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-credential-keys\") pod \"486686dc-8137-45ed-a509-0f5d3ade5ffb\" (UID: \"486686dc-8137-45ed-a509-0f5d3ade5ffb\") " Mar 09 19:04:07 crc kubenswrapper[4821]: I0309 19:04:07.768666 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-public-tls-certs\") pod \"486686dc-8137-45ed-a509-0f5d3ade5ffb\" (UID: \"486686dc-8137-45ed-a509-0f5d3ade5ffb\") " Mar 09 19:04:07 crc kubenswrapper[4821]: I0309 19:04:07.775644 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/486686dc-8137-45ed-a509-0f5d3ade5ffb-kube-api-access-4ssk8" (OuterVolumeSpecName: "kube-api-access-4ssk8") pod "486686dc-8137-45ed-a509-0f5d3ade5ffb" (UID: "486686dc-8137-45ed-a509-0f5d3ade5ffb"). InnerVolumeSpecName "kube-api-access-4ssk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:04:07 crc kubenswrapper[4821]: I0309 19:04:07.776401 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "486686dc-8137-45ed-a509-0f5d3ade5ffb" (UID: "486686dc-8137-45ed-a509-0f5d3ade5ffb"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:04:07 crc kubenswrapper[4821]: I0309 19:04:07.781636 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "486686dc-8137-45ed-a509-0f5d3ade5ffb" (UID: "486686dc-8137-45ed-a509-0f5d3ade5ffb"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:04:07 crc kubenswrapper[4821]: I0309 19:04:07.791336 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-scripts" (OuterVolumeSpecName: "scripts") pod "486686dc-8137-45ed-a509-0f5d3ade5ffb" (UID: "486686dc-8137-45ed-a509-0f5d3ade5ffb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:04:07 crc kubenswrapper[4821]: I0309 19:04:07.803522 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-config-data" (OuterVolumeSpecName: "config-data") pod "486686dc-8137-45ed-a509-0f5d3ade5ffb" (UID: "486686dc-8137-45ed-a509-0f5d3ade5ffb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:04:07 crc kubenswrapper[4821]: I0309 19:04:07.805278 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "486686dc-8137-45ed-a509-0f5d3ade5ffb" (UID: "486686dc-8137-45ed-a509-0f5d3ade5ffb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:04:07 crc kubenswrapper[4821]: I0309 19:04:07.819448 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "486686dc-8137-45ed-a509-0f5d3ade5ffb" (UID: "486686dc-8137-45ed-a509-0f5d3ade5ffb"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:04:07 crc kubenswrapper[4821]: I0309 19:04:07.827048 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "486686dc-8137-45ed-a509-0f5d3ade5ffb" (UID: "486686dc-8137-45ed-a509-0f5d3ade5ffb"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:04:07 crc kubenswrapper[4821]: I0309 19:04:07.870166 4821 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 09 19:04:07 crc kubenswrapper[4821]: I0309 19:04:07.870205 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:04:07 crc kubenswrapper[4821]: I0309 19:04:07.870216 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:04:07 crc kubenswrapper[4821]: I0309 19:04:07.870225 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ssk8\" (UniqueName: \"kubernetes.io/projected/486686dc-8137-45ed-a509-0f5d3ade5ffb-kube-api-access-4ssk8\") on node \"crc\" 
DevicePath \"\"" Mar 09 19:04:07 crc kubenswrapper[4821]: I0309 19:04:07.870236 4821 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 09 19:04:07 crc kubenswrapper[4821]: I0309 19:04:07.870245 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:04:07 crc kubenswrapper[4821]: I0309 19:04:07.870252 4821 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 09 19:04:07 crc kubenswrapper[4821]: I0309 19:04:07.870260 4821 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/486686dc-8137-45ed-a509-0f5d3ade5ffb-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 09 19:04:08 crc kubenswrapper[4821]: I0309 19:04:08.053620 4821 generic.go:334] "Generic (PLEG): container finished" podID="486686dc-8137-45ed-a509-0f5d3ade5ffb" containerID="8663a3f5abbc1899a5bfbe491dab9a54584c9dc0644ef4982f41b04720217cea" exitCode=0 Mar 09 19:04:08 crc kubenswrapper[4821]: I0309 19:04:08.053645 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-7774c4794c-f24tn" Mar 09 19:04:08 crc kubenswrapper[4821]: I0309 19:04:08.053663 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-7774c4794c-f24tn" event={"ID":"486686dc-8137-45ed-a509-0f5d3ade5ffb","Type":"ContainerDied","Data":"8663a3f5abbc1899a5bfbe491dab9a54584c9dc0644ef4982f41b04720217cea"} Mar 09 19:04:08 crc kubenswrapper[4821]: I0309 19:04:08.053691 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-7774c4794c-f24tn" event={"ID":"486686dc-8137-45ed-a509-0f5d3ade5ffb","Type":"ContainerDied","Data":"95933fa783de2f9ce4f19f650eb21adcb07b24a0caadab4853535eae2d7653bd"} Mar 09 19:04:08 crc kubenswrapper[4821]: I0309 19:04:08.053708 4821 scope.go:117] "RemoveContainer" containerID="8663a3f5abbc1899a5bfbe491dab9a54584c9dc0644ef4982f41b04720217cea" Mar 09 19:04:08 crc kubenswrapper[4821]: I0309 19:04:08.084418 4821 scope.go:117] "RemoveContainer" containerID="8663a3f5abbc1899a5bfbe491dab9a54584c9dc0644ef4982f41b04720217cea" Mar 09 19:04:08 crc kubenswrapper[4821]: E0309 19:04:08.085205 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8663a3f5abbc1899a5bfbe491dab9a54584c9dc0644ef4982f41b04720217cea\": container with ID starting with 8663a3f5abbc1899a5bfbe491dab9a54584c9dc0644ef4982f41b04720217cea not found: ID does not exist" containerID="8663a3f5abbc1899a5bfbe491dab9a54584c9dc0644ef4982f41b04720217cea" Mar 09 19:04:08 crc kubenswrapper[4821]: I0309 19:04:08.085301 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8663a3f5abbc1899a5bfbe491dab9a54584c9dc0644ef4982f41b04720217cea"} err="failed to get container status \"8663a3f5abbc1899a5bfbe491dab9a54584c9dc0644ef4982f41b04720217cea\": rpc error: code = NotFound desc = could not find container 
\"8663a3f5abbc1899a5bfbe491dab9a54584c9dc0644ef4982f41b04720217cea\": container with ID starting with 8663a3f5abbc1899a5bfbe491dab9a54584c9dc0644ef4982f41b04720217cea not found: ID does not exist" Mar 09 19:04:08 crc kubenswrapper[4821]: I0309 19:04:08.085922 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-7774c4794c-f24tn"] Mar 09 19:04:08 crc kubenswrapper[4821]: I0309 19:04:08.094233 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-7774c4794c-f24tn"] Mar 09 19:04:08 crc kubenswrapper[4821]: I0309 19:04:08.302014 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:04:08 crc kubenswrapper[4821]: I0309 19:04:08.302334 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="fe83642f-cf96-4961-97f5-cfa5c0369567" containerName="ceilometer-central-agent" containerID="cri-o://49669753d7f2688227ec8ab04aa492d761bd7ab6e9217c4fd1dd1f50dd058d84" gracePeriod=30 Mar 09 19:04:08 crc kubenswrapper[4821]: I0309 19:04:08.302375 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="fe83642f-cf96-4961-97f5-cfa5c0369567" containerName="sg-core" containerID="cri-o://f399f2802cfc6b77fbb356f46a6f0bedcbfcbcb9b140754bc8795acfff9265ba" gracePeriod=30 Mar 09 19:04:08 crc kubenswrapper[4821]: I0309 19:04:08.302386 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="fe83642f-cf96-4961-97f5-cfa5c0369567" containerName="proxy-httpd" containerID="cri-o://68e886d61366bf2a9c736f5bfb127eeab3c6a74d11cdb572e1d7cf8a8869c802" gracePeriod=30 Mar 09 19:04:08 crc kubenswrapper[4821]: I0309 19:04:08.302443 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" 
podUID="fe83642f-cf96-4961-97f5-cfa5c0369567" containerName="ceilometer-notification-agent" containerID="cri-o://b975e1a379eec2150c081cc718515e6ab55b3d741f818078fd4b53e98cb83133" gracePeriod=30
Mar 09 19:04:09 crc kubenswrapper[4821]: I0309 19:04:09.064895 4821 generic.go:334] "Generic (PLEG): container finished" podID="fe83642f-cf96-4961-97f5-cfa5c0369567" containerID="68e886d61366bf2a9c736f5bfb127eeab3c6a74d11cdb572e1d7cf8a8869c802" exitCode=0
Mar 09 19:04:09 crc kubenswrapper[4821]: I0309 19:04:09.065199 4821 generic.go:334] "Generic (PLEG): container finished" podID="fe83642f-cf96-4961-97f5-cfa5c0369567" containerID="f399f2802cfc6b77fbb356f46a6f0bedcbfcbcb9b140754bc8795acfff9265ba" exitCode=2
Mar 09 19:04:09 crc kubenswrapper[4821]: I0309 19:04:09.065207 4821 generic.go:334] "Generic (PLEG): container finished" podID="fe83642f-cf96-4961-97f5-cfa5c0369567" containerID="49669753d7f2688227ec8ab04aa492d761bd7ab6e9217c4fd1dd1f50dd058d84" exitCode=0
Mar 09 19:04:09 crc kubenswrapper[4821]: I0309 19:04:09.064962 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fe83642f-cf96-4961-97f5-cfa5c0369567","Type":"ContainerDied","Data":"68e886d61366bf2a9c736f5bfb127eeab3c6a74d11cdb572e1d7cf8a8869c802"}
Mar 09 19:04:09 crc kubenswrapper[4821]: I0309 19:04:09.065243 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fe83642f-cf96-4961-97f5-cfa5c0369567","Type":"ContainerDied","Data":"f399f2802cfc6b77fbb356f46a6f0bedcbfcbcb9b140754bc8795acfff9265ba"}
Mar 09 19:04:09 crc kubenswrapper[4821]: I0309 19:04:09.065260 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fe83642f-cf96-4961-97f5-cfa5c0369567","Type":"ContainerDied","Data":"49669753d7f2688227ec8ab04aa492d761bd7ab6e9217c4fd1dd1f50dd058d84"}
Mar 09 19:04:09 crc kubenswrapper[4821]: I0309 19:04:09.562512 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="486686dc-8137-45ed-a509-0f5d3ade5ffb" path="/var/lib/kubelet/pods/486686dc-8137-45ed-a509-0f5d3ade5ffb/volumes"
Mar 09 19:04:09 crc kubenswrapper[4821]: I0309 19:04:09.909008 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.014096 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe83642f-cf96-4961-97f5-cfa5c0369567-log-httpd\") pod \"fe83642f-cf96-4961-97f5-cfa5c0369567\" (UID: \"fe83642f-cf96-4961-97f5-cfa5c0369567\") "
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.014168 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe83642f-cf96-4961-97f5-cfa5c0369567-scripts\") pod \"fe83642f-cf96-4961-97f5-cfa5c0369567\" (UID: \"fe83642f-cf96-4961-97f5-cfa5c0369567\") "
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.014246 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe83642f-cf96-4961-97f5-cfa5c0369567-ceilometer-tls-certs\") pod \"fe83642f-cf96-4961-97f5-cfa5c0369567\" (UID: \"fe83642f-cf96-4961-97f5-cfa5c0369567\") "
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.014297 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe83642f-cf96-4961-97f5-cfa5c0369567-config-data\") pod \"fe83642f-cf96-4961-97f5-cfa5c0369567\" (UID: \"fe83642f-cf96-4961-97f5-cfa5c0369567\") "
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.014379 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe83642f-cf96-4961-97f5-cfa5c0369567-combined-ca-bundle\") pod \"fe83642f-cf96-4961-97f5-cfa5c0369567\" (UID: \"fe83642f-cf96-4961-97f5-cfa5c0369567\") "
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.014444 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe83642f-cf96-4961-97f5-cfa5c0369567-run-httpd\") pod \"fe83642f-cf96-4961-97f5-cfa5c0369567\" (UID: \"fe83642f-cf96-4961-97f5-cfa5c0369567\") "
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.014490 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmnmt\" (UniqueName: \"kubernetes.io/projected/fe83642f-cf96-4961-97f5-cfa5c0369567-kube-api-access-rmnmt\") pod \"fe83642f-cf96-4961-97f5-cfa5c0369567\" (UID: \"fe83642f-cf96-4961-97f5-cfa5c0369567\") "
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.014597 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe83642f-cf96-4961-97f5-cfa5c0369567-sg-core-conf-yaml\") pod \"fe83642f-cf96-4961-97f5-cfa5c0369567\" (UID: \"fe83642f-cf96-4961-97f5-cfa5c0369567\") "
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.014775 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe83642f-cf96-4961-97f5-cfa5c0369567-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fe83642f-cf96-4961-97f5-cfa5c0369567" (UID: "fe83642f-cf96-4961-97f5-cfa5c0369567"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.015142 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe83642f-cf96-4961-97f5-cfa5c0369567-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fe83642f-cf96-4961-97f5-cfa5c0369567" (UID: "fe83642f-cf96-4961-97f5-cfa5c0369567"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.015148 4821 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe83642f-cf96-4961-97f5-cfa5c0369567-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.022499 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe83642f-cf96-4961-97f5-cfa5c0369567-kube-api-access-rmnmt" (OuterVolumeSpecName: "kube-api-access-rmnmt") pod "fe83642f-cf96-4961-97f5-cfa5c0369567" (UID: "fe83642f-cf96-4961-97f5-cfa5c0369567"). InnerVolumeSpecName "kube-api-access-rmnmt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.034090 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe83642f-cf96-4961-97f5-cfa5c0369567-scripts" (OuterVolumeSpecName: "scripts") pod "fe83642f-cf96-4961-97f5-cfa5c0369567" (UID: "fe83642f-cf96-4961-97f5-cfa5c0369567"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.042178 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe83642f-cf96-4961-97f5-cfa5c0369567-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fe83642f-cf96-4961-97f5-cfa5c0369567" (UID: "fe83642f-cf96-4961-97f5-cfa5c0369567"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.063201 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe83642f-cf96-4961-97f5-cfa5c0369567-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "fe83642f-cf96-4961-97f5-cfa5c0369567" (UID: "fe83642f-cf96-4961-97f5-cfa5c0369567"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.075045 4821 generic.go:334] "Generic (PLEG): container finished" podID="fe83642f-cf96-4961-97f5-cfa5c0369567" containerID="b975e1a379eec2150c081cc718515e6ab55b3d741f818078fd4b53e98cb83133" exitCode=0
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.075097 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fe83642f-cf96-4961-97f5-cfa5c0369567","Type":"ContainerDied","Data":"b975e1a379eec2150c081cc718515e6ab55b3d741f818078fd4b53e98cb83133"}
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.075130 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"fe83642f-cf96-4961-97f5-cfa5c0369567","Type":"ContainerDied","Data":"b8ef6634f73954af9c21ac855f581409b0f5db5fc923de21cb200991618eafee"}
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.075152 4821 scope.go:117] "RemoveContainer" containerID="68e886d61366bf2a9c736f5bfb127eeab3c6a74d11cdb572e1d7cf8a8869c802"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.075296 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.090630 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe83642f-cf96-4961-97f5-cfa5c0369567-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe83642f-cf96-4961-97f5-cfa5c0369567" (UID: "fe83642f-cf96-4961-97f5-cfa5c0369567"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.116945 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe83642f-cf96-4961-97f5-cfa5c0369567-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.116986 4821 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe83642f-cf96-4961-97f5-cfa5c0369567-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.116999 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmnmt\" (UniqueName: \"kubernetes.io/projected/fe83642f-cf96-4961-97f5-cfa5c0369567-kube-api-access-rmnmt\") on node \"crc\" DevicePath \"\""
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.117012 4821 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe83642f-cf96-4961-97f5-cfa5c0369567-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.117025 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe83642f-cf96-4961-97f5-cfa5c0369567-scripts\") on node \"crc\" DevicePath \"\""
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.117037 4821 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe83642f-cf96-4961-97f5-cfa5c0369567-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.121024 4821 scope.go:117] "RemoveContainer" containerID="f399f2802cfc6b77fbb356f46a6f0bedcbfcbcb9b140754bc8795acfff9265ba"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.126934 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe83642f-cf96-4961-97f5-cfa5c0369567-config-data" (OuterVolumeSpecName: "config-data") pod "fe83642f-cf96-4961-97f5-cfa5c0369567" (UID: "fe83642f-cf96-4961-97f5-cfa5c0369567"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.136240 4821 scope.go:117] "RemoveContainer" containerID="b975e1a379eec2150c081cc718515e6ab55b3d741f818078fd4b53e98cb83133"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.169191 4821 scope.go:117] "RemoveContainer" containerID="49669753d7f2688227ec8ab04aa492d761bd7ab6e9217c4fd1dd1f50dd058d84"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.184505 4821 scope.go:117] "RemoveContainer" containerID="68e886d61366bf2a9c736f5bfb127eeab3c6a74d11cdb572e1d7cf8a8869c802"
Mar 09 19:04:10 crc kubenswrapper[4821]: E0309 19:04:10.185038 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68e886d61366bf2a9c736f5bfb127eeab3c6a74d11cdb572e1d7cf8a8869c802\": container with ID starting with 68e886d61366bf2a9c736f5bfb127eeab3c6a74d11cdb572e1d7cf8a8869c802 not found: ID does not exist" containerID="68e886d61366bf2a9c736f5bfb127eeab3c6a74d11cdb572e1d7cf8a8869c802"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.185079 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68e886d61366bf2a9c736f5bfb127eeab3c6a74d11cdb572e1d7cf8a8869c802"} err="failed to get container status \"68e886d61366bf2a9c736f5bfb127eeab3c6a74d11cdb572e1d7cf8a8869c802\": rpc error: code = NotFound desc = could not find container \"68e886d61366bf2a9c736f5bfb127eeab3c6a74d11cdb572e1d7cf8a8869c802\": container with ID starting with 68e886d61366bf2a9c736f5bfb127eeab3c6a74d11cdb572e1d7cf8a8869c802 not found: ID does not exist"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.185120 4821 scope.go:117] "RemoveContainer" containerID="f399f2802cfc6b77fbb356f46a6f0bedcbfcbcb9b140754bc8795acfff9265ba"
Mar 09 19:04:10 crc kubenswrapper[4821]: E0309 19:04:10.185461 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f399f2802cfc6b77fbb356f46a6f0bedcbfcbcb9b140754bc8795acfff9265ba\": container with ID starting with f399f2802cfc6b77fbb356f46a6f0bedcbfcbcb9b140754bc8795acfff9265ba not found: ID does not exist" containerID="f399f2802cfc6b77fbb356f46a6f0bedcbfcbcb9b140754bc8795acfff9265ba"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.185487 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f399f2802cfc6b77fbb356f46a6f0bedcbfcbcb9b140754bc8795acfff9265ba"} err="failed to get container status \"f399f2802cfc6b77fbb356f46a6f0bedcbfcbcb9b140754bc8795acfff9265ba\": rpc error: code = NotFound desc = could not find container \"f399f2802cfc6b77fbb356f46a6f0bedcbfcbcb9b140754bc8795acfff9265ba\": container with ID starting with f399f2802cfc6b77fbb356f46a6f0bedcbfcbcb9b140754bc8795acfff9265ba not found: ID does not exist"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.185500 4821 scope.go:117] "RemoveContainer" containerID="b975e1a379eec2150c081cc718515e6ab55b3d741f818078fd4b53e98cb83133"
Mar 09 19:04:10 crc kubenswrapper[4821]: E0309 19:04:10.185784 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b975e1a379eec2150c081cc718515e6ab55b3d741f818078fd4b53e98cb83133\": container with ID starting with b975e1a379eec2150c081cc718515e6ab55b3d741f818078fd4b53e98cb83133 not found: ID does not exist" containerID="b975e1a379eec2150c081cc718515e6ab55b3d741f818078fd4b53e98cb83133"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.185825 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b975e1a379eec2150c081cc718515e6ab55b3d741f818078fd4b53e98cb83133"} err="failed to get container status \"b975e1a379eec2150c081cc718515e6ab55b3d741f818078fd4b53e98cb83133\": rpc error: code = NotFound desc = could not find container \"b975e1a379eec2150c081cc718515e6ab55b3d741f818078fd4b53e98cb83133\": container with ID starting with b975e1a379eec2150c081cc718515e6ab55b3d741f818078fd4b53e98cb83133 not found: ID does not exist"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.185851 4821 scope.go:117] "RemoveContainer" containerID="49669753d7f2688227ec8ab04aa492d761bd7ab6e9217c4fd1dd1f50dd058d84"
Mar 09 19:04:10 crc kubenswrapper[4821]: E0309 19:04:10.186138 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49669753d7f2688227ec8ab04aa492d761bd7ab6e9217c4fd1dd1f50dd058d84\": container with ID starting with 49669753d7f2688227ec8ab04aa492d761bd7ab6e9217c4fd1dd1f50dd058d84 not found: ID does not exist" containerID="49669753d7f2688227ec8ab04aa492d761bd7ab6e9217c4fd1dd1f50dd058d84"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.186167 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49669753d7f2688227ec8ab04aa492d761bd7ab6e9217c4fd1dd1f50dd058d84"} err="failed to get container status \"49669753d7f2688227ec8ab04aa492d761bd7ab6e9217c4fd1dd1f50dd058d84\": rpc error: code = NotFound desc = could not find container \"49669753d7f2688227ec8ab04aa492d761bd7ab6e9217c4fd1dd1f50dd058d84\": container with ID starting with 49669753d7f2688227ec8ab04aa492d761bd7ab6e9217c4fd1dd1f50dd058d84 not found: ID does not exist"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.218900 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe83642f-cf96-4961-97f5-cfa5c0369567-config-data\") on node \"crc\" DevicePath \"\""
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.411678 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.419542 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.434724 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:04:10 crc kubenswrapper[4821]: E0309 19:04:10.435083 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe83642f-cf96-4961-97f5-cfa5c0369567" containerName="ceilometer-central-agent"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.435106 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe83642f-cf96-4961-97f5-cfa5c0369567" containerName="ceilometer-central-agent"
Mar 09 19:04:10 crc kubenswrapper[4821]: E0309 19:04:10.435119 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe83642f-cf96-4961-97f5-cfa5c0369567" containerName="ceilometer-notification-agent"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.435127 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe83642f-cf96-4961-97f5-cfa5c0369567" containerName="ceilometer-notification-agent"
Mar 09 19:04:10 crc kubenswrapper[4821]: E0309 19:04:10.435146 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe83642f-cf96-4961-97f5-cfa5c0369567" containerName="sg-core"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.435154 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe83642f-cf96-4961-97f5-cfa5c0369567" containerName="sg-core"
Mar 09 19:04:10 crc kubenswrapper[4821]: E0309 19:04:10.435172 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe83642f-cf96-4961-97f5-cfa5c0369567" containerName="proxy-httpd"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.435182 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe83642f-cf96-4961-97f5-cfa5c0369567" containerName="proxy-httpd"
Mar 09 19:04:10 crc kubenswrapper[4821]: E0309 19:04:10.435202 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="486686dc-8137-45ed-a509-0f5d3ade5ffb" containerName="keystone-api"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.435210 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="486686dc-8137-45ed-a509-0f5d3ade5ffb" containerName="keystone-api"
Mar 09 19:04:10 crc kubenswrapper[4821]: E0309 19:04:10.435223 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af908031-ae94-4542-a42f-45e4c17e69ae" containerName="oc"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.435230 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="af908031-ae94-4542-a42f-45e4c17e69ae" containerName="oc"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.436301 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="486686dc-8137-45ed-a509-0f5d3ade5ffb" containerName="keystone-api"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.436387 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe83642f-cf96-4961-97f5-cfa5c0369567" containerName="ceilometer-central-agent"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.436398 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe83642f-cf96-4961-97f5-cfa5c0369567" containerName="sg-core"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.436412 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe83642f-cf96-4961-97f5-cfa5c0369567" containerName="proxy-httpd"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.436422 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="af908031-ae94-4542-a42f-45e4c17e69ae" containerName="oc"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.436437 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe83642f-cf96-4961-97f5-cfa5c0369567" containerName="ceilometer-notification-agent"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.437855 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.440063 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.440231 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.440305 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.452911 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.524350 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.524412 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxsmp\" (UniqueName: \"kubernetes.io/projected/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-kube-api-access-lxsmp\") pod \"ceilometer-0\" (UID: \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.524458 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-log-httpd\") pod \"ceilometer-0\" (UID: \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.524568 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-scripts\") pod \"ceilometer-0\" (UID: \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.524598 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.524672 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-config-data\") pod \"ceilometer-0\" (UID: \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.524714 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-run-httpd\") pod \"ceilometer-0\" (UID: \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.524768 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.626464 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-scripts\") pod \"ceilometer-0\" (UID: \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.626530 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.626603 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-config-data\") pod \"ceilometer-0\" (UID: \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.626620 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-run-httpd\") pod \"ceilometer-0\" (UID: \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.626783 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.627260 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.627348 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxsmp\" (UniqueName: \"kubernetes.io/projected/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-kube-api-access-lxsmp\") pod \"ceilometer-0\" (UID: \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.627386 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-run-httpd\") pod \"ceilometer-0\" (UID: \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.627462 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-log-httpd\") pod \"ceilometer-0\" (UID: \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.627911 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-log-httpd\") pod \"ceilometer-0\" (UID: \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.631342 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.631406 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-scripts\") pod \"ceilometer-0\" (UID: \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.631713 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.633788 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.633867 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-config-data\") pod \"ceilometer-0\" (UID: \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.646408 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxsmp\" (UniqueName: \"kubernetes.io/projected/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-kube-api-access-lxsmp\") pod \"ceilometer-0\" (UID: \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:10 crc kubenswrapper[4821]: I0309 19:04:10.761265 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:11 crc kubenswrapper[4821]: I0309 19:04:11.237595 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:04:11 crc kubenswrapper[4821]: I0309 19:04:11.563027 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe83642f-cf96-4961-97f5-cfa5c0369567" path="/var/lib/kubelet/pods/fe83642f-cf96-4961-97f5-cfa5c0369567/volumes"
Mar 09 19:04:12 crc kubenswrapper[4821]: I0309 19:04:12.094948 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5cd3845e-5d38-42cc-90b1-35f0cd7ff342","Type":"ContainerStarted","Data":"7321602a90cb8cd8a24b5e8e77b7870c2e4b6c6d995a5d34483fa95c737454ab"}
Mar 09 19:04:12 crc kubenswrapper[4821]: I0309 19:04:12.095362 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5cd3845e-5d38-42cc-90b1-35f0cd7ff342","Type":"ContainerStarted","Data":"f519f811917b0cc20443b277dae12c4e3f5a47c24abaa2ea07b8a26a9013c070"}
Mar 09 19:04:13 crc kubenswrapper[4821]: I0309 19:04:13.103932 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5cd3845e-5d38-42cc-90b1-35f0cd7ff342","Type":"ContainerStarted","Data":"016bbbde1c1d80b83b4261414f7cc21b4b065b9cba9f8ec9ef9a054c52e02409"}
Mar 09 19:04:14 crc kubenswrapper[4821]: I0309 19:04:14.113274 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5cd3845e-5d38-42cc-90b1-35f0cd7ff342","Type":"ContainerStarted","Data":"6344863c0a252327d94bb6cde2a04edeef5abbf870702b3dd14dffc4abe6c174"}
Mar 09 19:04:16 crc kubenswrapper[4821]: I0309 19:04:16.134425 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5cd3845e-5d38-42cc-90b1-35f0cd7ff342","Type":"ContainerStarted","Data":"6c7efd73856c1a6d27ae91f871bad2a5bfd0221df9bffe8a442b9e81289c1b1c"}
Mar 09 19:04:16 crc kubenswrapper[4821]: I0309 19:04:16.135178 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:16 crc kubenswrapper[4821]: I0309 19:04:16.169041 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.9133669850000001 podStartE2EDuration="6.16901785s" podCreationTimestamp="2026-03-09 19:04:10 +0000 UTC" firstStartedPulling="2026-03-09 19:04:11.228009009 +0000 UTC m=+2388.389384905" lastFinishedPulling="2026-03-09 19:04:15.483659884 +0000 UTC m=+2392.645035770" observedRunningTime="2026-03-09 19:04:16.164950801 +0000 UTC m=+2393.326326667" watchObservedRunningTime="2026-03-09 19:04:16.16901785 +0000 UTC m=+2393.330393706"
Mar 09 19:04:40 crc kubenswrapper[4821]: I0309 19:04:40.770681 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:41 crc kubenswrapper[4821]: I0309 19:04:41.096964 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-b4xcg"]
Mar 09 19:04:41 crc kubenswrapper[4821]: I0309 19:04:41.107672 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-b4xcg"]
Mar 09 19:04:41 crc kubenswrapper[4821]: I0309 19:04:41.127653 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watchere081-account-delete-7vgmq"]
Mar 09 19:04:41 crc kubenswrapper[4821]: I0309 19:04:41.128587 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchere081-account-delete-7vgmq"
Mar 09 19:04:41 crc kubenswrapper[4821]: I0309 19:04:41.141394 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watchere081-account-delete-7vgmq"]
Mar 09 19:04:41 crc kubenswrapper[4821]: I0309 19:04:41.164508 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Mar 09 19:04:41 crc kubenswrapper[4821]: I0309 19:04:41.164713 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="95c56b60-91b1-4c38-add2-fe40d7fa8d90" containerName="watcher-applier" containerID="cri-o://87e00a92e38cd5a9c9e83130e5f5dc161a60686397c3eedefe7772ce5ab5618e" gracePeriod=30
Mar 09 19:04:41 crc kubenswrapper[4821]: I0309 19:04:41.179703 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlzv5\" (UniqueName: \"kubernetes.io/projected/f89395dc-4507-4598-b1f2-491b0fbc23fa-kube-api-access-qlzv5\") pod \"watchere081-account-delete-7vgmq\" (UID: \"f89395dc-4507-4598-b1f2-491b0fbc23fa\") " pod="watcher-kuttl-default/watchere081-account-delete-7vgmq"
Mar 09 19:04:41 crc kubenswrapper[4821]: I0309 19:04:41.179775 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f89395dc-4507-4598-b1f2-491b0fbc23fa-operator-scripts\") pod \"watchere081-account-delete-7vgmq\" (UID: \"f89395dc-4507-4598-b1f2-491b0fbc23fa\") " pod="watcher-kuttl-default/watchere081-account-delete-7vgmq"
Mar 09 19:04:41 crc kubenswrapper[4821]: I0309 19:04:41.245672 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Mar 09 19:04:41 crc kubenswrapper[4821]: I0309 19:04:41.245943 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="70230fb9-ea53-49ee-b54e-b6368951899c" containerName="watcher-decision-engine" containerID="cri-o://f74db4d97e2f9b1b065ed3ac43749cb499cc1d4dd7e387a3b7384d080c8df447" gracePeriod=30
Mar 09 19:04:41 crc kubenswrapper[4821]: I0309 19:04:41.270547 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 19:04:41 crc kubenswrapper[4821]: I0309 19:04:41.270836 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="c6c83651-af16-4dd9-97fc-045c73b48650" containerName="watcher-kuttl-api-log" containerID="cri-o://f860501a6ff34e73dc6324d5725d71146a18b1aec72ab6d4a2d5dc14fb586949" gracePeriod=30
Mar 09 19:04:41 crc kubenswrapper[4821]: I0309 19:04:41.271275 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="c6c83651-af16-4dd9-97fc-045c73b48650" containerName="watcher-api" containerID="cri-o://4cc4e1bbf7f9506f68e623a910ca3362fc3eccfa1f59a6216272e42f1f4874ea" gracePeriod=30
Mar 09 19:04:41 crc kubenswrapper[4821]: I0309 19:04:41.281307 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlzv5\" (UniqueName: \"kubernetes.io/projected/f89395dc-4507-4598-b1f2-491b0fbc23fa-kube-api-access-qlzv5\") pod \"watchere081-account-delete-7vgmq\" (UID: \"f89395dc-4507-4598-b1f2-491b0fbc23fa\") " pod="watcher-kuttl-default/watchere081-account-delete-7vgmq"
Mar 09 19:04:41 crc kubenswrapper[4821]: I0309 19:04:41.281386 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f89395dc-4507-4598-b1f2-491b0fbc23fa-operator-scripts\") pod \"watchere081-account-delete-7vgmq\" (UID: \"f89395dc-4507-4598-b1f2-491b0fbc23fa\") " pod="watcher-kuttl-default/watchere081-account-delete-7vgmq"
Mar 09
19:04:41 crc kubenswrapper[4821]: I0309 19:04:41.282614 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f89395dc-4507-4598-b1f2-491b0fbc23fa-operator-scripts\") pod \"watchere081-account-delete-7vgmq\" (UID: \"f89395dc-4507-4598-b1f2-491b0fbc23fa\") " pod="watcher-kuttl-default/watchere081-account-delete-7vgmq" Mar 09 19:04:41 crc kubenswrapper[4821]: I0309 19:04:41.323881 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlzv5\" (UniqueName: \"kubernetes.io/projected/f89395dc-4507-4598-b1f2-491b0fbc23fa-kube-api-access-qlzv5\") pod \"watchere081-account-delete-7vgmq\" (UID: \"f89395dc-4507-4598-b1f2-491b0fbc23fa\") " pod="watcher-kuttl-default/watchere081-account-delete-7vgmq" Mar 09 19:04:41 crc kubenswrapper[4821]: I0309 19:04:41.478589 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchere081-account-delete-7vgmq" Mar 09 19:04:41 crc kubenswrapper[4821]: I0309 19:04:41.563944 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9dcd828-4752-4369-8404-2baa9d1d28e1" path="/var/lib/kubelet/pods/f9dcd828-4752-4369-8404-2baa9d1d28e1/volumes" Mar 09 19:04:41 crc kubenswrapper[4821]: I0309 19:04:41.997391 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watchere081-account-delete-7vgmq"] Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.237147 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.306777 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95c56b60-91b1-4c38-add2-fe40d7fa8d90-logs\") pod \"95c56b60-91b1-4c38-add2-fe40d7fa8d90\" (UID: \"95c56b60-91b1-4c38-add2-fe40d7fa8d90\") " Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.306874 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c56b60-91b1-4c38-add2-fe40d7fa8d90-config-data\") pod \"95c56b60-91b1-4c38-add2-fe40d7fa8d90\" (UID: \"95c56b60-91b1-4c38-add2-fe40d7fa8d90\") " Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.306953 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/95c56b60-91b1-4c38-add2-fe40d7fa8d90-cert-memcached-mtls\") pod \"95c56b60-91b1-4c38-add2-fe40d7fa8d90\" (UID: \"95c56b60-91b1-4c38-add2-fe40d7fa8d90\") " Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.307021 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c56b60-91b1-4c38-add2-fe40d7fa8d90-combined-ca-bundle\") pod \"95c56b60-91b1-4c38-add2-fe40d7fa8d90\" (UID: \"95c56b60-91b1-4c38-add2-fe40d7fa8d90\") " Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.307118 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95c56b60-91b1-4c38-add2-fe40d7fa8d90-logs" (OuterVolumeSpecName: "logs") pod "95c56b60-91b1-4c38-add2-fe40d7fa8d90" (UID: "95c56b60-91b1-4c38-add2-fe40d7fa8d90"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.307138 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-765hz\" (UniqueName: \"kubernetes.io/projected/95c56b60-91b1-4c38-add2-fe40d7fa8d90-kube-api-access-765hz\") pod \"95c56b60-91b1-4c38-add2-fe40d7fa8d90\" (UID: \"95c56b60-91b1-4c38-add2-fe40d7fa8d90\") " Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.307594 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95c56b60-91b1-4c38-add2-fe40d7fa8d90-logs\") on node \"crc\" DevicePath \"\"" Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.329566 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95c56b60-91b1-4c38-add2-fe40d7fa8d90-kube-api-access-765hz" (OuterVolumeSpecName: "kube-api-access-765hz") pod "95c56b60-91b1-4c38-add2-fe40d7fa8d90" (UID: "95c56b60-91b1-4c38-add2-fe40d7fa8d90"). InnerVolumeSpecName "kube-api-access-765hz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.335564 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c56b60-91b1-4c38-add2-fe40d7fa8d90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95c56b60-91b1-4c38-add2-fe40d7fa8d90" (UID: "95c56b60-91b1-4c38-add2-fe40d7fa8d90"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.380706 4821 generic.go:334] "Generic (PLEG): container finished" podID="95c56b60-91b1-4c38-add2-fe40d7fa8d90" containerID="87e00a92e38cd5a9c9e83130e5f5dc161a60686397c3eedefe7772ce5ab5618e" exitCode=0 Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.382679 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.382840 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"95c56b60-91b1-4c38-add2-fe40d7fa8d90","Type":"ContainerDied","Data":"87e00a92e38cd5a9c9e83130e5f5dc161a60686397c3eedefe7772ce5ab5618e"} Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.382961 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"95c56b60-91b1-4c38-add2-fe40d7fa8d90","Type":"ContainerDied","Data":"b2cf09dce0882ea5b2d4f249a75d83a6fccbd76853004bc7d9944912bbb41ef8"} Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.383039 4821 scope.go:117] "RemoveContainer" containerID="87e00a92e38cd5a9c9e83130e5f5dc161a60686397c3eedefe7772ce5ab5618e" Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.390009 4821 generic.go:334] "Generic (PLEG): container finished" podID="c6c83651-af16-4dd9-97fc-045c73b48650" containerID="f860501a6ff34e73dc6324d5725d71146a18b1aec72ab6d4a2d5dc14fb586949" exitCode=143 Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.390120 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c6c83651-af16-4dd9-97fc-045c73b48650","Type":"ContainerDied","Data":"f860501a6ff34e73dc6324d5725d71146a18b1aec72ab6d4a2d5dc14fb586949"} Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.392726 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchere081-account-delete-7vgmq" event={"ID":"f89395dc-4507-4598-b1f2-491b0fbc23fa","Type":"ContainerStarted","Data":"ef18b3b46f5ab418be43c17ff7f8ec18ef47577005b474faa397d00ccdc63cb4"} Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.392763 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchere081-account-delete-7vgmq" 
event={"ID":"f89395dc-4507-4598-b1f2-491b0fbc23fa","Type":"ContainerStarted","Data":"d5b6b7304104c3c10773b03bcd594004da32dd56414468a5c858014b17f6883a"} Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.409702 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c56b60-91b1-4c38-add2-fe40d7fa8d90-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.409729 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-765hz\" (UniqueName: \"kubernetes.io/projected/95c56b60-91b1-4c38-add2-fe40d7fa8d90-kube-api-access-765hz\") on node \"crc\" DevicePath \"\"" Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.426911 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c56b60-91b1-4c38-add2-fe40d7fa8d90-config-data" (OuterVolumeSpecName: "config-data") pod "95c56b60-91b1-4c38-add2-fe40d7fa8d90" (UID: "95c56b60-91b1-4c38-add2-fe40d7fa8d90"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.427977 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watchere081-account-delete-7vgmq" podStartSLOduration=1.427952732 podStartE2EDuration="1.427952732s" podCreationTimestamp="2026-03-09 19:04:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:04:42.41612938 +0000 UTC m=+2419.577505236" watchObservedRunningTime="2026-03-09 19:04:42.427952732 +0000 UTC m=+2419.589328588" Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.453433 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c56b60-91b1-4c38-add2-fe40d7fa8d90-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "95c56b60-91b1-4c38-add2-fe40d7fa8d90" (UID: "95c56b60-91b1-4c38-add2-fe40d7fa8d90"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.510836 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c56b60-91b1-4c38-add2-fe40d7fa8d90-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.510877 4821 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/95c56b60-91b1-4c38-add2-fe40d7fa8d90-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.617524 4821 scope.go:117] "RemoveContainer" containerID="87e00a92e38cd5a9c9e83130e5f5dc161a60686397c3eedefe7772ce5ab5618e" Mar 09 19:04:42 crc kubenswrapper[4821]: E0309 19:04:42.618095 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87e00a92e38cd5a9c9e83130e5f5dc161a60686397c3eedefe7772ce5ab5618e\": container with ID starting with 87e00a92e38cd5a9c9e83130e5f5dc161a60686397c3eedefe7772ce5ab5618e not found: ID does not exist" containerID="87e00a92e38cd5a9c9e83130e5f5dc161a60686397c3eedefe7772ce5ab5618e" Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.618139 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87e00a92e38cd5a9c9e83130e5f5dc161a60686397c3eedefe7772ce5ab5618e"} err="failed to get container status \"87e00a92e38cd5a9c9e83130e5f5dc161a60686397c3eedefe7772ce5ab5618e\": rpc error: code = NotFound desc = could not find container \"87e00a92e38cd5a9c9e83130e5f5dc161a60686397c3eedefe7772ce5ab5618e\": container with ID starting with 87e00a92e38cd5a9c9e83130e5f5dc161a60686397c3eedefe7772ce5ab5618e not found: ID does not exist" Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.725588 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 
09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.734973 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.768495 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.814369 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg6n5\" (UniqueName: \"kubernetes.io/projected/c6c83651-af16-4dd9-97fc-045c73b48650-kube-api-access-zg6n5\") pod \"c6c83651-af16-4dd9-97fc-045c73b48650\" (UID: \"c6c83651-af16-4dd9-97fc-045c73b48650\") " Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.814443 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6c83651-af16-4dd9-97fc-045c73b48650-logs\") pod \"c6c83651-af16-4dd9-97fc-045c73b48650\" (UID: \"c6c83651-af16-4dd9-97fc-045c73b48650\") " Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.814513 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c6c83651-af16-4dd9-97fc-045c73b48650-cert-memcached-mtls\") pod \"c6c83651-af16-4dd9-97fc-045c73b48650\" (UID: \"c6c83651-af16-4dd9-97fc-045c73b48650\") " Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.814560 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c6c83651-af16-4dd9-97fc-045c73b48650-custom-prometheus-ca\") pod \"c6c83651-af16-4dd9-97fc-045c73b48650\" (UID: \"c6c83651-af16-4dd9-97fc-045c73b48650\") " Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.814602 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c6c83651-af16-4dd9-97fc-045c73b48650-combined-ca-bundle\") pod \"c6c83651-af16-4dd9-97fc-045c73b48650\" (UID: \"c6c83651-af16-4dd9-97fc-045c73b48650\") " Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.814683 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c83651-af16-4dd9-97fc-045c73b48650-config-data\") pod \"c6c83651-af16-4dd9-97fc-045c73b48650\" (UID: \"c6c83651-af16-4dd9-97fc-045c73b48650\") " Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.815731 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6c83651-af16-4dd9-97fc-045c73b48650-logs" (OuterVolumeSpecName: "logs") pod "c6c83651-af16-4dd9-97fc-045c73b48650" (UID: "c6c83651-af16-4dd9-97fc-045c73b48650"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.820456 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6c83651-af16-4dd9-97fc-045c73b48650-kube-api-access-zg6n5" (OuterVolumeSpecName: "kube-api-access-zg6n5") pod "c6c83651-af16-4dd9-97fc-045c73b48650" (UID: "c6c83651-af16-4dd9-97fc-045c73b48650"). InnerVolumeSpecName "kube-api-access-zg6n5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.840475 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c83651-af16-4dd9-97fc-045c73b48650-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6c83651-af16-4dd9-97fc-045c73b48650" (UID: "c6c83651-af16-4dd9-97fc-045c73b48650"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.842390 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c83651-af16-4dd9-97fc-045c73b48650-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "c6c83651-af16-4dd9-97fc-045c73b48650" (UID: "c6c83651-af16-4dd9-97fc-045c73b48650"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.860052 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c83651-af16-4dd9-97fc-045c73b48650-config-data" (OuterVolumeSpecName: "config-data") pod "c6c83651-af16-4dd9-97fc-045c73b48650" (UID: "c6c83651-af16-4dd9-97fc-045c73b48650"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.896343 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c83651-af16-4dd9-97fc-045c73b48650-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "c6c83651-af16-4dd9-97fc-045c73b48650" (UID: "c6c83651-af16-4dd9-97fc-045c73b48650"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.916266 4821 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c6c83651-af16-4dd9-97fc-045c73b48650-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.916296 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c83651-af16-4dd9-97fc-045c73b48650-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.916305 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c83651-af16-4dd9-97fc-045c73b48650-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.916319 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg6n5\" (UniqueName: \"kubernetes.io/projected/c6c83651-af16-4dd9-97fc-045c73b48650-kube-api-access-zg6n5\") on node \"crc\" DevicePath \"\"" Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.916348 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6c83651-af16-4dd9-97fc-045c73b48650-logs\") on node \"crc\" DevicePath \"\"" Mar 09 19:04:42 crc kubenswrapper[4821]: I0309 19:04:42.916358 4821 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c6c83651-af16-4dd9-97fc-045c73b48650-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Mar 09 19:04:43 crc kubenswrapper[4821]: I0309 19:04:43.400765 4821 generic.go:334] "Generic (PLEG): container finished" podID="f89395dc-4507-4598-b1f2-491b0fbc23fa" containerID="ef18b3b46f5ab418be43c17ff7f8ec18ef47577005b474faa397d00ccdc63cb4" exitCode=0 Mar 09 19:04:43 crc kubenswrapper[4821]: I0309 19:04:43.400849 4821 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchere081-account-delete-7vgmq" event={"ID":"f89395dc-4507-4598-b1f2-491b0fbc23fa","Type":"ContainerDied","Data":"ef18b3b46f5ab418be43c17ff7f8ec18ef47577005b474faa397d00ccdc63cb4"} Mar 09 19:04:43 crc kubenswrapper[4821]: I0309 19:04:43.403966 4821 generic.go:334] "Generic (PLEG): container finished" podID="c6c83651-af16-4dd9-97fc-045c73b48650" containerID="4cc4e1bbf7f9506f68e623a910ca3362fc3eccfa1f59a6216272e42f1f4874ea" exitCode=0 Mar 09 19:04:43 crc kubenswrapper[4821]: I0309 19:04:43.404020 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c6c83651-af16-4dd9-97fc-045c73b48650","Type":"ContainerDied","Data":"4cc4e1bbf7f9506f68e623a910ca3362fc3eccfa1f59a6216272e42f1f4874ea"} Mar 09 19:04:43 crc kubenswrapper[4821]: I0309 19:04:43.404053 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c6c83651-af16-4dd9-97fc-045c73b48650","Type":"ContainerDied","Data":"439eaa303e79c172917b4a94a3f091d6484fd08b6804b5ca1403f6d8de17afab"} Mar 09 19:04:43 crc kubenswrapper[4821]: I0309 19:04:43.404073 4821 scope.go:117] "RemoveContainer" containerID="4cc4e1bbf7f9506f68e623a910ca3362fc3eccfa1f59a6216272e42f1f4874ea" Mar 09 19:04:43 crc kubenswrapper[4821]: I0309 19:04:43.404188 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:04:43 crc kubenswrapper[4821]: I0309 19:04:43.425494 4821 scope.go:117] "RemoveContainer" containerID="f860501a6ff34e73dc6324d5725d71146a18b1aec72ab6d4a2d5dc14fb586949" Mar 09 19:04:43 crc kubenswrapper[4821]: I0309 19:04:43.444628 4821 scope.go:117] "RemoveContainer" containerID="4cc4e1bbf7f9506f68e623a910ca3362fc3eccfa1f59a6216272e42f1f4874ea" Mar 09 19:04:43 crc kubenswrapper[4821]: E0309 19:04:43.445204 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cc4e1bbf7f9506f68e623a910ca3362fc3eccfa1f59a6216272e42f1f4874ea\": container with ID starting with 4cc4e1bbf7f9506f68e623a910ca3362fc3eccfa1f59a6216272e42f1f4874ea not found: ID does not exist" containerID="4cc4e1bbf7f9506f68e623a910ca3362fc3eccfa1f59a6216272e42f1f4874ea" Mar 09 19:04:43 crc kubenswrapper[4821]: I0309 19:04:43.445259 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cc4e1bbf7f9506f68e623a910ca3362fc3eccfa1f59a6216272e42f1f4874ea"} err="failed to get container status \"4cc4e1bbf7f9506f68e623a910ca3362fc3eccfa1f59a6216272e42f1f4874ea\": rpc error: code = NotFound desc = could not find container \"4cc4e1bbf7f9506f68e623a910ca3362fc3eccfa1f59a6216272e42f1f4874ea\": container with ID starting with 4cc4e1bbf7f9506f68e623a910ca3362fc3eccfa1f59a6216272e42f1f4874ea not found: ID does not exist" Mar 09 19:04:43 crc kubenswrapper[4821]: I0309 19:04:43.445280 4821 scope.go:117] "RemoveContainer" containerID="f860501a6ff34e73dc6324d5725d71146a18b1aec72ab6d4a2d5dc14fb586949" Mar 09 19:04:43 crc kubenswrapper[4821]: E0309 19:04:43.445799 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f860501a6ff34e73dc6324d5725d71146a18b1aec72ab6d4a2d5dc14fb586949\": container with ID starting with 
f860501a6ff34e73dc6324d5725d71146a18b1aec72ab6d4a2d5dc14fb586949 not found: ID does not exist" containerID="f860501a6ff34e73dc6324d5725d71146a18b1aec72ab6d4a2d5dc14fb586949" Mar 09 19:04:43 crc kubenswrapper[4821]: I0309 19:04:43.445848 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f860501a6ff34e73dc6324d5725d71146a18b1aec72ab6d4a2d5dc14fb586949"} err="failed to get container status \"f860501a6ff34e73dc6324d5725d71146a18b1aec72ab6d4a2d5dc14fb586949\": rpc error: code = NotFound desc = could not find container \"f860501a6ff34e73dc6324d5725d71146a18b1aec72ab6d4a2d5dc14fb586949\": container with ID starting with f860501a6ff34e73dc6324d5725d71146a18b1aec72ab6d4a2d5dc14fb586949 not found: ID does not exist" Mar 09 19:04:43 crc kubenswrapper[4821]: I0309 19:04:43.451142 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:04:43 crc kubenswrapper[4821]: I0309 19:04:43.458407 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:04:43 crc kubenswrapper[4821]: I0309 19:04:43.563012 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95c56b60-91b1-4c38-add2-fe40d7fa8d90" path="/var/lib/kubelet/pods/95c56b60-91b1-4c38-add2-fe40d7fa8d90/volumes" Mar 09 19:04:43 crc kubenswrapper[4821]: I0309 19:04:43.563710 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6c83651-af16-4dd9-97fc-045c73b48650" path="/var/lib/kubelet/pods/c6c83651-af16-4dd9-97fc-045c73b48650/volumes" Mar 09 19:04:43 crc kubenswrapper[4821]: I0309 19:04:43.734404 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:04:43 crc kubenswrapper[4821]: I0309 19:04:43.734716 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="5cd3845e-5d38-42cc-90b1-35f0cd7ff342" 
containerName="ceilometer-central-agent" containerID="cri-o://7321602a90cb8cd8a24b5e8e77b7870c2e4b6c6d995a5d34483fa95c737454ab" gracePeriod=30 Mar 09 19:04:43 crc kubenswrapper[4821]: I0309 19:04:43.734764 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="5cd3845e-5d38-42cc-90b1-35f0cd7ff342" containerName="sg-core" containerID="cri-o://6344863c0a252327d94bb6cde2a04edeef5abbf870702b3dd14dffc4abe6c174" gracePeriod=30 Mar 09 19:04:43 crc kubenswrapper[4821]: I0309 19:04:43.734791 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="5cd3845e-5d38-42cc-90b1-35f0cd7ff342" containerName="ceilometer-notification-agent" containerID="cri-o://016bbbde1c1d80b83b4261414f7cc21b4b065b9cba9f8ec9ef9a054c52e02409" gracePeriod=30 Mar 09 19:04:43 crc kubenswrapper[4821]: I0309 19:04:43.734764 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="5cd3845e-5d38-42cc-90b1-35f0cd7ff342" containerName="proxy-httpd" containerID="cri-o://6c7efd73856c1a6d27ae91f871bad2a5bfd0221df9bffe8a442b9e81289c1b1c" gracePeriod=30 Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.379654 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.422216 4821 generic.go:334] "Generic (PLEG): container finished" podID="5cd3845e-5d38-42cc-90b1-35f0cd7ff342" containerID="6c7efd73856c1a6d27ae91f871bad2a5bfd0221df9bffe8a442b9e81289c1b1c" exitCode=0
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.422251 4821 generic.go:334] "Generic (PLEG): container finished" podID="5cd3845e-5d38-42cc-90b1-35f0cd7ff342" containerID="6344863c0a252327d94bb6cde2a04edeef5abbf870702b3dd14dffc4abe6c174" exitCode=2
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.422259 4821 generic.go:334] "Generic (PLEG): container finished" podID="5cd3845e-5d38-42cc-90b1-35f0cd7ff342" containerID="016bbbde1c1d80b83b4261414f7cc21b4b065b9cba9f8ec9ef9a054c52e02409" exitCode=0
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.422270 4821 generic.go:334] "Generic (PLEG): container finished" podID="5cd3845e-5d38-42cc-90b1-35f0cd7ff342" containerID="7321602a90cb8cd8a24b5e8e77b7870c2e4b6c6d995a5d34483fa95c737454ab" exitCode=0
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.422311 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5cd3845e-5d38-42cc-90b1-35f0cd7ff342","Type":"ContainerDied","Data":"6c7efd73856c1a6d27ae91f871bad2a5bfd0221df9bffe8a442b9e81289c1b1c"}
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.422440 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5cd3845e-5d38-42cc-90b1-35f0cd7ff342","Type":"ContainerDied","Data":"6344863c0a252327d94bb6cde2a04edeef5abbf870702b3dd14dffc4abe6c174"}
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.422451 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5cd3845e-5d38-42cc-90b1-35f0cd7ff342","Type":"ContainerDied","Data":"016bbbde1c1d80b83b4261414f7cc21b4b065b9cba9f8ec9ef9a054c52e02409"}
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.422462 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5cd3845e-5d38-42cc-90b1-35f0cd7ff342","Type":"ContainerDied","Data":"7321602a90cb8cd8a24b5e8e77b7870c2e4b6c6d995a5d34483fa95c737454ab"}
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.430881 4821 generic.go:334] "Generic (PLEG): container finished" podID="70230fb9-ea53-49ee-b54e-b6368951899c" containerID="f74db4d97e2f9b1b065ed3ac43749cb499cc1d4dd7e387a3b7384d080c8df447" exitCode=0
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.430952 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.430989 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"70230fb9-ea53-49ee-b54e-b6368951899c","Type":"ContainerDied","Data":"f74db4d97e2f9b1b065ed3ac43749cb499cc1d4dd7e387a3b7384d080c8df447"}
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.431029 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"70230fb9-ea53-49ee-b54e-b6368951899c","Type":"ContainerDied","Data":"0e6315f9b2f359a751cc521a983904e68dd09cb1039720a6ee674da09a31fd77"}
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.431050 4821 scope.go:117] "RemoveContainer" containerID="f74db4d97e2f9b1b065ed3ac43749cb499cc1d4dd7e387a3b7384d080c8df447"
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.445163 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/70230fb9-ea53-49ee-b54e-b6368951899c-custom-prometheus-ca\") pod \"70230fb9-ea53-49ee-b54e-b6368951899c\" (UID: \"70230fb9-ea53-49ee-b54e-b6368951899c\") "
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.445216 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wslp\" (UniqueName: \"kubernetes.io/projected/70230fb9-ea53-49ee-b54e-b6368951899c-kube-api-access-7wslp\") pod \"70230fb9-ea53-49ee-b54e-b6368951899c\" (UID: \"70230fb9-ea53-49ee-b54e-b6368951899c\") "
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.445262 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70230fb9-ea53-49ee-b54e-b6368951899c-combined-ca-bundle\") pod \"70230fb9-ea53-49ee-b54e-b6368951899c\" (UID: \"70230fb9-ea53-49ee-b54e-b6368951899c\") "
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.445355 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70230fb9-ea53-49ee-b54e-b6368951899c-logs\") pod \"70230fb9-ea53-49ee-b54e-b6368951899c\" (UID: \"70230fb9-ea53-49ee-b54e-b6368951899c\") "
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.445397 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70230fb9-ea53-49ee-b54e-b6368951899c-config-data\") pod \"70230fb9-ea53-49ee-b54e-b6368951899c\" (UID: \"70230fb9-ea53-49ee-b54e-b6368951899c\") "
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.445472 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/70230fb9-ea53-49ee-b54e-b6368951899c-cert-memcached-mtls\") pod \"70230fb9-ea53-49ee-b54e-b6368951899c\" (UID: \"70230fb9-ea53-49ee-b54e-b6368951899c\") "
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.445822 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70230fb9-ea53-49ee-b54e-b6368951899c-logs" (OuterVolumeSpecName: "logs") pod "70230fb9-ea53-49ee-b54e-b6368951899c" (UID: "70230fb9-ea53-49ee-b54e-b6368951899c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.454397 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70230fb9-ea53-49ee-b54e-b6368951899c-kube-api-access-7wslp" (OuterVolumeSpecName: "kube-api-access-7wslp") pod "70230fb9-ea53-49ee-b54e-b6368951899c" (UID: "70230fb9-ea53-49ee-b54e-b6368951899c"). InnerVolumeSpecName "kube-api-access-7wslp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.505681 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70230fb9-ea53-49ee-b54e-b6368951899c-config-data" (OuterVolumeSpecName: "config-data") pod "70230fb9-ea53-49ee-b54e-b6368951899c" (UID: "70230fb9-ea53-49ee-b54e-b6368951899c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.505947 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70230fb9-ea53-49ee-b54e-b6368951899c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70230fb9-ea53-49ee-b54e-b6368951899c" (UID: "70230fb9-ea53-49ee-b54e-b6368951899c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.531483 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70230fb9-ea53-49ee-b54e-b6368951899c-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "70230fb9-ea53-49ee-b54e-b6368951899c" (UID: "70230fb9-ea53-49ee-b54e-b6368951899c"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.540172 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70230fb9-ea53-49ee-b54e-b6368951899c-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "70230fb9-ea53-49ee-b54e-b6368951899c" (UID: "70230fb9-ea53-49ee-b54e-b6368951899c"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.547220 4821 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/70230fb9-ea53-49ee-b54e-b6368951899c-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.547244 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wslp\" (UniqueName: \"kubernetes.io/projected/70230fb9-ea53-49ee-b54e-b6368951899c-kube-api-access-7wslp\") on node \"crc\" DevicePath \"\""
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.547254 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70230fb9-ea53-49ee-b54e-b6368951899c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.547263 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70230fb9-ea53-49ee-b54e-b6368951899c-logs\") on node \"crc\" DevicePath \"\""
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.547272 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70230fb9-ea53-49ee-b54e-b6368951899c-config-data\") on node \"crc\" DevicePath \"\""
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.547280 4821 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/70230fb9-ea53-49ee-b54e-b6368951899c-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.614275 4821 scope.go:117] "RemoveContainer" containerID="f74db4d97e2f9b1b065ed3ac43749cb499cc1d4dd7e387a3b7384d080c8df447"
Mar 09 19:04:44 crc kubenswrapper[4821]: E0309 19:04:44.616417 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f74db4d97e2f9b1b065ed3ac43749cb499cc1d4dd7e387a3b7384d080c8df447\": container with ID starting with f74db4d97e2f9b1b065ed3ac43749cb499cc1d4dd7e387a3b7384d080c8df447 not found: ID does not exist" containerID="f74db4d97e2f9b1b065ed3ac43749cb499cc1d4dd7e387a3b7384d080c8df447"
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.616444 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f74db4d97e2f9b1b065ed3ac43749cb499cc1d4dd7e387a3b7384d080c8df447"} err="failed to get container status \"f74db4d97e2f9b1b065ed3ac43749cb499cc1d4dd7e387a3b7384d080c8df447\": rpc error: code = NotFound desc = could not find container \"f74db4d97e2f9b1b065ed3ac43749cb499cc1d4dd7e387a3b7384d080c8df447\": container with ID starting with f74db4d97e2f9b1b065ed3ac43749cb499cc1d4dd7e387a3b7384d080c8df447 not found: ID does not exist"
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.676564 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.749796 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxsmp\" (UniqueName: \"kubernetes.io/projected/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-kube-api-access-lxsmp\") pod \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\" (UID: \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\") "
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.750212 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-config-data\") pod \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\" (UID: \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\") "
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.750261 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-combined-ca-bundle\") pod \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\" (UID: \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\") "
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.750362 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-log-httpd\") pod \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\" (UID: \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\") "
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.750396 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-ceilometer-tls-certs\") pod \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\" (UID: \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\") "
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.750422 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-sg-core-conf-yaml\") pod \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\" (UID: \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\") "
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.750489 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-run-httpd\") pod \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\" (UID: \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\") "
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.750563 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-scripts\") pod \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\" (UID: \"5cd3845e-5d38-42cc-90b1-35f0cd7ff342\") "
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.753932 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5cd3845e-5d38-42cc-90b1-35f0cd7ff342" (UID: "5cd3845e-5d38-42cc-90b1-35f0cd7ff342"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.753948 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5cd3845e-5d38-42cc-90b1-35f0cd7ff342" (UID: "5cd3845e-5d38-42cc-90b1-35f0cd7ff342"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.755822 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-scripts" (OuterVolumeSpecName: "scripts") pod "5cd3845e-5d38-42cc-90b1-35f0cd7ff342" (UID: "5cd3845e-5d38-42cc-90b1-35f0cd7ff342"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.755846 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-kube-api-access-lxsmp" (OuterVolumeSpecName: "kube-api-access-lxsmp") pod "5cd3845e-5d38-42cc-90b1-35f0cd7ff342" (UID: "5cd3845e-5d38-42cc-90b1-35f0cd7ff342"). InnerVolumeSpecName "kube-api-access-lxsmp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.775625 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.786051 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5cd3845e-5d38-42cc-90b1-35f0cd7ff342" (UID: "5cd3845e-5d38-42cc-90b1-35f0cd7ff342"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.807734 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.817852 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchere081-account-delete-7vgmq"
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.831983 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "5cd3845e-5d38-42cc-90b1-35f0cd7ff342" (UID: "5cd3845e-5d38-42cc-90b1-35f0cd7ff342"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.844383 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5cd3845e-5d38-42cc-90b1-35f0cd7ff342" (UID: "5cd3845e-5d38-42cc-90b1-35f0cd7ff342"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.852050 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f89395dc-4507-4598-b1f2-491b0fbc23fa-operator-scripts\") pod \"f89395dc-4507-4598-b1f2-491b0fbc23fa\" (UID: \"f89395dc-4507-4598-b1f2-491b0fbc23fa\") "
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.852118 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlzv5\" (UniqueName: \"kubernetes.io/projected/f89395dc-4507-4598-b1f2-491b0fbc23fa-kube-api-access-qlzv5\") pod \"f89395dc-4507-4598-b1f2-491b0fbc23fa\" (UID: \"f89395dc-4507-4598-b1f2-491b0fbc23fa\") "
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.852468 4821 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.852487 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-scripts\") on node \"crc\" DevicePath \"\""
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.852497 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxsmp\" (UniqueName: \"kubernetes.io/projected/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-kube-api-access-lxsmp\") on node \"crc\" DevicePath \"\""
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.852506 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.852517 4821 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.852524 4821 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.852532 4821 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.853231 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f89395dc-4507-4598-b1f2-491b0fbc23fa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f89395dc-4507-4598-b1f2-491b0fbc23fa" (UID: "f89395dc-4507-4598-b1f2-491b0fbc23fa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.855727 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f89395dc-4507-4598-b1f2-491b0fbc23fa-kube-api-access-qlzv5" (OuterVolumeSpecName: "kube-api-access-qlzv5") pod "f89395dc-4507-4598-b1f2-491b0fbc23fa" (UID: "f89395dc-4507-4598-b1f2-491b0fbc23fa"). InnerVolumeSpecName "kube-api-access-qlzv5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.886719 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-config-data" (OuterVolumeSpecName: "config-data") pod "5cd3845e-5d38-42cc-90b1-35f0cd7ff342" (UID: "5cd3845e-5d38-42cc-90b1-35f0cd7ff342"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.953822 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f89395dc-4507-4598-b1f2-491b0fbc23fa-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.953860 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlzv5\" (UniqueName: \"kubernetes.io/projected/f89395dc-4507-4598-b1f2-491b0fbc23fa-kube-api-access-qlzv5\") on node \"crc\" DevicePath \"\""
Mar 09 19:04:44 crc kubenswrapper[4821]: I0309 19:04:44.953872 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cd3845e-5d38-42cc-90b1-35f0cd7ff342-config-data\") on node \"crc\" DevicePath \"\""
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.442007 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchere081-account-delete-7vgmq" event={"ID":"f89395dc-4507-4598-b1f2-491b0fbc23fa","Type":"ContainerDied","Data":"d5b6b7304104c3c10773b03bcd594004da32dd56414468a5c858014b17f6883a"}
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.442046 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5b6b7304104c3c10773b03bcd594004da32dd56414468a5c858014b17f6883a"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.442100 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchere081-account-delete-7vgmq"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.449305 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5cd3845e-5d38-42cc-90b1-35f0cd7ff342","Type":"ContainerDied","Data":"f519f811917b0cc20443b277dae12c4e3f5a47c24abaa2ea07b8a26a9013c070"}
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.449473 4821 scope.go:117] "RemoveContainer" containerID="6c7efd73856c1a6d27ae91f871bad2a5bfd0221df9bffe8a442b9e81289c1b1c"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.449388 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.488822 4821 scope.go:117] "RemoveContainer" containerID="6344863c0a252327d94bb6cde2a04edeef5abbf870702b3dd14dffc4abe6c174"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.555542 4821 scope.go:117] "RemoveContainer" containerID="016bbbde1c1d80b83b4261414f7cc21b4b065b9cba9f8ec9ef9a054c52e02409"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.591236 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70230fb9-ea53-49ee-b54e-b6368951899c" path="/var/lib/kubelet/pods/70230fb9-ea53-49ee-b54e-b6368951899c/volumes"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.591780 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.598259 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.605853 4821 scope.go:117] "RemoveContainer" containerID="7321602a90cb8cd8a24b5e8e77b7870c2e4b6c6d995a5d34483fa95c737454ab"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.621124 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:04:45 crc kubenswrapper[4821]: E0309 19:04:45.621485 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cd3845e-5d38-42cc-90b1-35f0cd7ff342" containerName="sg-core"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.621496 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cd3845e-5d38-42cc-90b1-35f0cd7ff342" containerName="sg-core"
Mar 09 19:04:45 crc kubenswrapper[4821]: E0309 19:04:45.621505 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f89395dc-4507-4598-b1f2-491b0fbc23fa" containerName="mariadb-account-delete"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.621511 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="f89395dc-4507-4598-b1f2-491b0fbc23fa" containerName="mariadb-account-delete"
Mar 09 19:04:45 crc kubenswrapper[4821]: E0309 19:04:45.621525 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cd3845e-5d38-42cc-90b1-35f0cd7ff342" containerName="proxy-httpd"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.621531 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cd3845e-5d38-42cc-90b1-35f0cd7ff342" containerName="proxy-httpd"
Mar 09 19:04:45 crc kubenswrapper[4821]: E0309 19:04:45.621551 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c83651-af16-4dd9-97fc-045c73b48650" containerName="watcher-api"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.621556 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c83651-af16-4dd9-97fc-045c73b48650" containerName="watcher-api"
Mar 09 19:04:45 crc kubenswrapper[4821]: E0309 19:04:45.621566 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c56b60-91b1-4c38-add2-fe40d7fa8d90" containerName="watcher-applier"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.621572 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="95c56b60-91b1-4c38-add2-fe40d7fa8d90" containerName="watcher-applier"
Mar 09 19:04:45 crc kubenswrapper[4821]: E0309 19:04:45.621580 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cd3845e-5d38-42cc-90b1-35f0cd7ff342" containerName="ceilometer-central-agent"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.621586 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cd3845e-5d38-42cc-90b1-35f0cd7ff342" containerName="ceilometer-central-agent"
Mar 09 19:04:45 crc kubenswrapper[4821]: E0309 19:04:45.621597 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c83651-af16-4dd9-97fc-045c73b48650" containerName="watcher-kuttl-api-log"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.621604 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c83651-af16-4dd9-97fc-045c73b48650" containerName="watcher-kuttl-api-log"
Mar 09 19:04:45 crc kubenswrapper[4821]: E0309 19:04:45.621618 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70230fb9-ea53-49ee-b54e-b6368951899c" containerName="watcher-decision-engine"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.621624 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="70230fb9-ea53-49ee-b54e-b6368951899c" containerName="watcher-decision-engine"
Mar 09 19:04:45 crc kubenswrapper[4821]: E0309 19:04:45.621630 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cd3845e-5d38-42cc-90b1-35f0cd7ff342" containerName="ceilometer-notification-agent"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.621636 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cd3845e-5d38-42cc-90b1-35f0cd7ff342" containerName="ceilometer-notification-agent"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.621771 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cd3845e-5d38-42cc-90b1-35f0cd7ff342" containerName="sg-core"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.621786 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="f89395dc-4507-4598-b1f2-491b0fbc23fa" containerName="mariadb-account-delete"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.621797 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="95c56b60-91b1-4c38-add2-fe40d7fa8d90" containerName="watcher-applier"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.621805 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cd3845e-5d38-42cc-90b1-35f0cd7ff342" containerName="ceilometer-notification-agent"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.621813 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c83651-af16-4dd9-97fc-045c73b48650" containerName="watcher-api"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.621823 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="70230fb9-ea53-49ee-b54e-b6368951899c" containerName="watcher-decision-engine"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.621834 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cd3845e-5d38-42cc-90b1-35f0cd7ff342" containerName="ceilometer-central-agent"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.621842 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c83651-af16-4dd9-97fc-045c73b48650" containerName="watcher-kuttl-api-log"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.621850 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cd3845e-5d38-42cc-90b1-35f0cd7ff342" containerName="proxy-httpd"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.623206 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.636274 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.636954 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.639980 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.653801 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.697782 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc1db45c-1436-429e-8254-d136b94af071-scripts\") pod \"ceilometer-0\" (UID: \"cc1db45c-1436-429e-8254-d136b94af071\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.697866 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc1db45c-1436-429e-8254-d136b94af071-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cc1db45c-1436-429e-8254-d136b94af071\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.697891 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc1db45c-1436-429e-8254-d136b94af071-config-data\") pod \"ceilometer-0\" (UID: \"cc1db45c-1436-429e-8254-d136b94af071\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.697994 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc1db45c-1436-429e-8254-d136b94af071-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cc1db45c-1436-429e-8254-d136b94af071\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.698031 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc1db45c-1436-429e-8254-d136b94af071-log-httpd\") pod \"ceilometer-0\" (UID: \"cc1db45c-1436-429e-8254-d136b94af071\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.698057 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmgp2\" (UniqueName: \"kubernetes.io/projected/cc1db45c-1436-429e-8254-d136b94af071-kube-api-access-tmgp2\") pod \"ceilometer-0\" (UID: \"cc1db45c-1436-429e-8254-d136b94af071\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.698100 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc1db45c-1436-429e-8254-d136b94af071-run-httpd\") pod \"ceilometer-0\" (UID: \"cc1db45c-1436-429e-8254-d136b94af071\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.698246 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc1db45c-1436-429e-8254-d136b94af071-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc1db45c-1436-429e-8254-d136b94af071\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.800145 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc1db45c-1436-429e-8254-d136b94af071-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cc1db45c-1436-429e-8254-d136b94af071\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.800425 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc1db45c-1436-429e-8254-d136b94af071-log-httpd\") pod \"ceilometer-0\" (UID: \"cc1db45c-1436-429e-8254-d136b94af071\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.800449 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmgp2\" (UniqueName: \"kubernetes.io/projected/cc1db45c-1436-429e-8254-d136b94af071-kube-api-access-tmgp2\") pod \"ceilometer-0\" (UID: \"cc1db45c-1436-429e-8254-d136b94af071\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.800488 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc1db45c-1436-429e-8254-d136b94af071-run-httpd\") pod \"ceilometer-0\" (UID: \"cc1db45c-1436-429e-8254-d136b94af071\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.800531 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc1db45c-1436-429e-8254-d136b94af071-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc1db45c-1436-429e-8254-d136b94af071\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.800605 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc1db45c-1436-429e-8254-d136b94af071-scripts\") pod \"ceilometer-0\" (UID: \"cc1db45c-1436-429e-8254-d136b94af071\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.800645 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc1db45c-1436-429e-8254-d136b94af071-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cc1db45c-1436-429e-8254-d136b94af071\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.800661 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc1db45c-1436-429e-8254-d136b94af071-config-data\") pod \"ceilometer-0\" (UID: \"cc1db45c-1436-429e-8254-d136b94af071\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.800967 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc1db45c-1436-429e-8254-d136b94af071-log-httpd\") pod \"ceilometer-0\" (UID: \"cc1db45c-1436-429e-8254-d136b94af071\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.801722 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc1db45c-1436-429e-8254-d136b94af071-run-httpd\") pod \"ceilometer-0\" (UID: \"cc1db45c-1436-429e-8254-d136b94af071\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.806396 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc1db45c-1436-429e-8254-d136b94af071-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc1db45c-1436-429e-8254-d136b94af071\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.806929 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc1db45c-1436-429e-8254-d136b94af071-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cc1db45c-1436-429e-8254-d136b94af071\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.807107 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc1db45c-1436-429e-8254-d136b94af071-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cc1db45c-1436-429e-8254-d136b94af071\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.807411 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc1db45c-1436-429e-8254-d136b94af071-config-data\") pod \"ceilometer-0\" (UID: \"cc1db45c-1436-429e-8254-d136b94af071\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.807486 4821
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc1db45c-1436-429e-8254-d136b94af071-scripts\") pod \"ceilometer-0\" (UID: \"cc1db45c-1436-429e-8254-d136b94af071\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.831850 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmgp2\" (UniqueName: \"kubernetes.io/projected/cc1db45c-1436-429e-8254-d136b94af071-kube-api-access-tmgp2\") pod \"ceilometer-0\" (UID: \"cc1db45c-1436-429e-8254-d136b94af071\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:04:45 crc kubenswrapper[4821]: I0309 19:04:45.966642 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:04:46 crc kubenswrapper[4821]: I0309 19:04:46.168286 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-4jlhz"] Mar 09 19:04:46 crc kubenswrapper[4821]: I0309 19:04:46.182284 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-4jlhz"] Mar 09 19:04:46 crc kubenswrapper[4821]: I0309 19:04:46.193289 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-e081-account-create-update-l9lgx"] Mar 09 19:04:46 crc kubenswrapper[4821]: I0309 19:04:46.201444 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-e081-account-create-update-l9lgx"] Mar 09 19:04:46 crc kubenswrapper[4821]: I0309 19:04:46.208640 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watchere081-account-delete-7vgmq"] Mar 09 19:04:46 crc kubenswrapper[4821]: I0309 19:04:46.215495 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watchere081-account-delete-7vgmq"] Mar 09 19:04:46 crc kubenswrapper[4821]: I0309 19:04:46.452593 4821 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:04:46 crc kubenswrapper[4821]: W0309 19:04:46.461190 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc1db45c_1436_429e_8254_d136b94af071.slice/crio-068291a1585547daef74c072419ac7f8ca0bd89da37fcea3ce98ea02b1730cb8 WatchSource:0}: Error finding container 068291a1585547daef74c072419ac7f8ca0bd89da37fcea3ce98ea02b1730cb8: Status 404 returned error can't find the container with id 068291a1585547daef74c072419ac7f8ca0bd89da37fcea3ce98ea02b1730cb8 Mar 09 19:04:47 crc kubenswrapper[4821]: I0309 19:04:47.471752 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"cc1db45c-1436-429e-8254-d136b94af071","Type":"ContainerStarted","Data":"91cb6fc192eabe6ce11bdd6323ca79c72802d60a4631d7b1c5a5bc7ce8ad1904"} Mar 09 19:04:47 crc kubenswrapper[4821]: I0309 19:04:47.472029 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"cc1db45c-1436-429e-8254-d136b94af071","Type":"ContainerStarted","Data":"068291a1585547daef74c072419ac7f8ca0bd89da37fcea3ce98ea02b1730cb8"} Mar 09 19:04:47 crc kubenswrapper[4821]: I0309 19:04:47.564446 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cd3845e-5d38-42cc-90b1-35f0cd7ff342" path="/var/lib/kubelet/pods/5cd3845e-5d38-42cc-90b1-35f0cd7ff342/volumes" Mar 09 19:04:47 crc kubenswrapper[4821]: I0309 19:04:47.565184 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0" path="/var/lib/kubelet/pods/74e8b160-ed7f-4dd3-8b9c-e9865ab3bff0/volumes" Mar 09 19:04:47 crc kubenswrapper[4821]: I0309 19:04:47.565773 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90b56daf-1bad-435a-83d1-b7eea7444b00" path="/var/lib/kubelet/pods/90b56daf-1bad-435a-83d1-b7eea7444b00/volumes" Mar 09 
19:04:47 crc kubenswrapper[4821]: I0309 19:04:47.566746 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f89395dc-4507-4598-b1f2-491b0fbc23fa" path="/var/lib/kubelet/pods/f89395dc-4507-4598-b1f2-491b0fbc23fa/volumes" Mar 09 19:04:47 crc kubenswrapper[4821]: I0309 19:04:47.858753 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-z7kvb"] Mar 09 19:04:47 crc kubenswrapper[4821]: I0309 19:04:47.859923 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-z7kvb" Mar 09 19:04:47 crc kubenswrapper[4821]: I0309 19:04:47.867562 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-ad92-account-create-update-hzsg6"] Mar 09 19:04:47 crc kubenswrapper[4821]: I0309 19:04:47.868545 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-ad92-account-create-update-hzsg6" Mar 09 19:04:47 crc kubenswrapper[4821]: I0309 19:04:47.873453 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Mar 09 19:04:47 crc kubenswrapper[4821]: I0309 19:04:47.877137 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-z7kvb"] Mar 09 19:04:47 crc kubenswrapper[4821]: I0309 19:04:47.884821 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-ad92-account-create-update-hzsg6"] Mar 09 19:04:47 crc kubenswrapper[4821]: I0309 19:04:47.940223 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e506d5ec-c6e6-4f26-ac22-9f545ca55ff9-operator-scripts\") pod \"watcher-ad92-account-create-update-hzsg6\" (UID: \"e506d5ec-c6e6-4f26-ac22-9f545ca55ff9\") " pod="watcher-kuttl-default/watcher-ad92-account-create-update-hzsg6" Mar 09 19:04:47 crc 
kubenswrapper[4821]: I0309 19:04:47.940358 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77b8efea-c318-4f79-8727-2955f5815eda-operator-scripts\") pod \"watcher-db-create-z7kvb\" (UID: \"77b8efea-c318-4f79-8727-2955f5815eda\") " pod="watcher-kuttl-default/watcher-db-create-z7kvb" Mar 09 19:04:47 crc kubenswrapper[4821]: I0309 19:04:47.940462 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbz2f\" (UniqueName: \"kubernetes.io/projected/77b8efea-c318-4f79-8727-2955f5815eda-kube-api-access-pbz2f\") pod \"watcher-db-create-z7kvb\" (UID: \"77b8efea-c318-4f79-8727-2955f5815eda\") " pod="watcher-kuttl-default/watcher-db-create-z7kvb" Mar 09 19:04:47 crc kubenswrapper[4821]: I0309 19:04:47.940529 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7p58\" (UniqueName: \"kubernetes.io/projected/e506d5ec-c6e6-4f26-ac22-9f545ca55ff9-kube-api-access-m7p58\") pod \"watcher-ad92-account-create-update-hzsg6\" (UID: \"e506d5ec-c6e6-4f26-ac22-9f545ca55ff9\") " pod="watcher-kuttl-default/watcher-ad92-account-create-update-hzsg6" Mar 09 19:04:48 crc kubenswrapper[4821]: I0309 19:04:48.041218 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbz2f\" (UniqueName: \"kubernetes.io/projected/77b8efea-c318-4f79-8727-2955f5815eda-kube-api-access-pbz2f\") pod \"watcher-db-create-z7kvb\" (UID: \"77b8efea-c318-4f79-8727-2955f5815eda\") " pod="watcher-kuttl-default/watcher-db-create-z7kvb" Mar 09 19:04:48 crc kubenswrapper[4821]: I0309 19:04:48.041279 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7p58\" (UniqueName: \"kubernetes.io/projected/e506d5ec-c6e6-4f26-ac22-9f545ca55ff9-kube-api-access-m7p58\") pod 
\"watcher-ad92-account-create-update-hzsg6\" (UID: \"e506d5ec-c6e6-4f26-ac22-9f545ca55ff9\") " pod="watcher-kuttl-default/watcher-ad92-account-create-update-hzsg6" Mar 09 19:04:48 crc kubenswrapper[4821]: I0309 19:04:48.041301 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e506d5ec-c6e6-4f26-ac22-9f545ca55ff9-operator-scripts\") pod \"watcher-ad92-account-create-update-hzsg6\" (UID: \"e506d5ec-c6e6-4f26-ac22-9f545ca55ff9\") " pod="watcher-kuttl-default/watcher-ad92-account-create-update-hzsg6" Mar 09 19:04:48 crc kubenswrapper[4821]: I0309 19:04:48.041360 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77b8efea-c318-4f79-8727-2955f5815eda-operator-scripts\") pod \"watcher-db-create-z7kvb\" (UID: \"77b8efea-c318-4f79-8727-2955f5815eda\") " pod="watcher-kuttl-default/watcher-db-create-z7kvb" Mar 09 19:04:48 crc kubenswrapper[4821]: I0309 19:04:48.042144 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77b8efea-c318-4f79-8727-2955f5815eda-operator-scripts\") pod \"watcher-db-create-z7kvb\" (UID: \"77b8efea-c318-4f79-8727-2955f5815eda\") " pod="watcher-kuttl-default/watcher-db-create-z7kvb" Mar 09 19:04:48 crc kubenswrapper[4821]: I0309 19:04:48.042460 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e506d5ec-c6e6-4f26-ac22-9f545ca55ff9-operator-scripts\") pod \"watcher-ad92-account-create-update-hzsg6\" (UID: \"e506d5ec-c6e6-4f26-ac22-9f545ca55ff9\") " pod="watcher-kuttl-default/watcher-ad92-account-create-update-hzsg6" Mar 09 19:04:48 crc kubenswrapper[4821]: I0309 19:04:48.059056 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbz2f\" (UniqueName: 
\"kubernetes.io/projected/77b8efea-c318-4f79-8727-2955f5815eda-kube-api-access-pbz2f\") pod \"watcher-db-create-z7kvb\" (UID: \"77b8efea-c318-4f79-8727-2955f5815eda\") " pod="watcher-kuttl-default/watcher-db-create-z7kvb" Mar 09 19:04:48 crc kubenswrapper[4821]: I0309 19:04:48.064793 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7p58\" (UniqueName: \"kubernetes.io/projected/e506d5ec-c6e6-4f26-ac22-9f545ca55ff9-kube-api-access-m7p58\") pod \"watcher-ad92-account-create-update-hzsg6\" (UID: \"e506d5ec-c6e6-4f26-ac22-9f545ca55ff9\") " pod="watcher-kuttl-default/watcher-ad92-account-create-update-hzsg6" Mar 09 19:04:48 crc kubenswrapper[4821]: I0309 19:04:48.183572 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-z7kvb" Mar 09 19:04:48 crc kubenswrapper[4821]: I0309 19:04:48.190826 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-ad92-account-create-update-hzsg6" Mar 09 19:04:48 crc kubenswrapper[4821]: I0309 19:04:48.400777 4821 scope.go:117] "RemoveContainer" containerID="b02a02eb436b674bd00ea22ae9e3359d4dde69c8264e004a5c80feab8339b097" Mar 09 19:04:48 crc kubenswrapper[4821]: I0309 19:04:48.487213 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"cc1db45c-1436-429e-8254-d136b94af071","Type":"ContainerStarted","Data":"175dc9f92bcd2f03541da7806cfa84762b4504999b27532e4dbd8ff7e7ee1177"} Mar 09 19:04:48 crc kubenswrapper[4821]: I0309 19:04:48.675885 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-z7kvb"] Mar 09 19:04:48 crc kubenswrapper[4821]: W0309 19:04:48.781385 4821 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode506d5ec_c6e6_4f26_ac22_9f545ca55ff9.slice/crio-6c6fd4ce0d14635b1ff14cb89289383713b3f0c66af138e53423002bfd063d46 WatchSource:0}: Error finding container 6c6fd4ce0d14635b1ff14cb89289383713b3f0c66af138e53423002bfd063d46: Status 404 returned error can't find the container with id 6c6fd4ce0d14635b1ff14cb89289383713b3f0c66af138e53423002bfd063d46 Mar 09 19:04:48 crc kubenswrapper[4821]: I0309 19:04:48.789349 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-ad92-account-create-update-hzsg6"] Mar 09 19:04:49 crc kubenswrapper[4821]: I0309 19:04:49.502867 4821 generic.go:334] "Generic (PLEG): container finished" podID="e506d5ec-c6e6-4f26-ac22-9f545ca55ff9" containerID="13483fcad99a15a4ea27ec93ba716664ff2729db9b3832c1f9a9c1870446aeab" exitCode=0 Mar 09 19:04:49 crc kubenswrapper[4821]: I0309 19:04:49.502961 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-ad92-account-create-update-hzsg6" event={"ID":"e506d5ec-c6e6-4f26-ac22-9f545ca55ff9","Type":"ContainerDied","Data":"13483fcad99a15a4ea27ec93ba716664ff2729db9b3832c1f9a9c1870446aeab"} Mar 09 19:04:49 crc kubenswrapper[4821]: I0309 19:04:49.503008 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-ad92-account-create-update-hzsg6" event={"ID":"e506d5ec-c6e6-4f26-ac22-9f545ca55ff9","Type":"ContainerStarted","Data":"6c6fd4ce0d14635b1ff14cb89289383713b3f0c66af138e53423002bfd063d46"} Mar 09 19:04:49 crc kubenswrapper[4821]: I0309 19:04:49.505933 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"cc1db45c-1436-429e-8254-d136b94af071","Type":"ContainerStarted","Data":"01d811594105f47f70de96ec3cbc351b062d396dffcd97c7c1347d7dc83d52c3"} Mar 09 19:04:49 crc kubenswrapper[4821]: I0309 19:04:49.507537 4821 generic.go:334] "Generic (PLEG): container finished" 
podID="77b8efea-c318-4f79-8727-2955f5815eda" containerID="89faf9c51503378749e26bf789fbd7cbd6104d2506003131485bf22c6520d940" exitCode=0 Mar 09 19:04:49 crc kubenswrapper[4821]: I0309 19:04:49.507587 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-z7kvb" event={"ID":"77b8efea-c318-4f79-8727-2955f5815eda","Type":"ContainerDied","Data":"89faf9c51503378749e26bf789fbd7cbd6104d2506003131485bf22c6520d940"} Mar 09 19:04:49 crc kubenswrapper[4821]: I0309 19:04:49.507613 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-z7kvb" event={"ID":"77b8efea-c318-4f79-8727-2955f5815eda","Type":"ContainerStarted","Data":"dd96933e18ea2641d3b18ce4ec4f7d9c07db9c3a33a93facb86c402ac641a0e5"} Mar 09 19:04:50 crc kubenswrapper[4821]: I0309 19:04:50.517938 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"cc1db45c-1436-429e-8254-d136b94af071","Type":"ContainerStarted","Data":"8d9951f44d2a9de4652696d2793579bf117bab29a5c23b0827a6ffbf2c6050ae"} Mar 09 19:04:50 crc kubenswrapper[4821]: I0309 19:04:50.562479 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.7941140309999999 podStartE2EDuration="5.562462088s" podCreationTimestamp="2026-03-09 19:04:45 +0000 UTC" firstStartedPulling="2026-03-09 19:04:46.467755926 +0000 UTC m=+2423.629131782" lastFinishedPulling="2026-03-09 19:04:50.236103973 +0000 UTC m=+2427.397479839" observedRunningTime="2026-03-09 19:04:50.556920228 +0000 UTC m=+2427.718296094" watchObservedRunningTime="2026-03-09 19:04:50.562462088 +0000 UTC m=+2427.723837964" Mar 09 19:04:50 crc kubenswrapper[4821]: I0309 19:04:50.880088 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-ad92-account-create-update-hzsg6" Mar 09 19:04:50 crc kubenswrapper[4821]: I0309 19:04:50.939365 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-z7kvb" Mar 09 19:04:51 crc kubenswrapper[4821]: I0309 19:04:51.092008 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbz2f\" (UniqueName: \"kubernetes.io/projected/77b8efea-c318-4f79-8727-2955f5815eda-kube-api-access-pbz2f\") pod \"77b8efea-c318-4f79-8727-2955f5815eda\" (UID: \"77b8efea-c318-4f79-8727-2955f5815eda\") " Mar 09 19:04:51 crc kubenswrapper[4821]: I0309 19:04:51.092060 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77b8efea-c318-4f79-8727-2955f5815eda-operator-scripts\") pod \"77b8efea-c318-4f79-8727-2955f5815eda\" (UID: \"77b8efea-c318-4f79-8727-2955f5815eda\") " Mar 09 19:04:51 crc kubenswrapper[4821]: I0309 19:04:51.092076 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7p58\" (UniqueName: \"kubernetes.io/projected/e506d5ec-c6e6-4f26-ac22-9f545ca55ff9-kube-api-access-m7p58\") pod \"e506d5ec-c6e6-4f26-ac22-9f545ca55ff9\" (UID: \"e506d5ec-c6e6-4f26-ac22-9f545ca55ff9\") " Mar 09 19:04:51 crc kubenswrapper[4821]: I0309 19:04:51.092159 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e506d5ec-c6e6-4f26-ac22-9f545ca55ff9-operator-scripts\") pod \"e506d5ec-c6e6-4f26-ac22-9f545ca55ff9\" (UID: \"e506d5ec-c6e6-4f26-ac22-9f545ca55ff9\") " Mar 09 19:04:51 crc kubenswrapper[4821]: I0309 19:04:51.092844 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77b8efea-c318-4f79-8727-2955f5815eda-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "77b8efea-c318-4f79-8727-2955f5815eda" (UID: "77b8efea-c318-4f79-8727-2955f5815eda"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 19:04:51 crc kubenswrapper[4821]: I0309 19:04:51.092883 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e506d5ec-c6e6-4f26-ac22-9f545ca55ff9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e506d5ec-c6e6-4f26-ac22-9f545ca55ff9" (UID: "e506d5ec-c6e6-4f26-ac22-9f545ca55ff9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 19:04:51 crc kubenswrapper[4821]: I0309 19:04:51.096760 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77b8efea-c318-4f79-8727-2955f5815eda-kube-api-access-pbz2f" (OuterVolumeSpecName: "kube-api-access-pbz2f") pod "77b8efea-c318-4f79-8727-2955f5815eda" (UID: "77b8efea-c318-4f79-8727-2955f5815eda"). InnerVolumeSpecName "kube-api-access-pbz2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:04:51 crc kubenswrapper[4821]: I0309 19:04:51.101666 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e506d5ec-c6e6-4f26-ac22-9f545ca55ff9-kube-api-access-m7p58" (OuterVolumeSpecName: "kube-api-access-m7p58") pod "e506d5ec-c6e6-4f26-ac22-9f545ca55ff9" (UID: "e506d5ec-c6e6-4f26-ac22-9f545ca55ff9"). InnerVolumeSpecName "kube-api-access-m7p58". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:04:51 crc kubenswrapper[4821]: I0309 19:04:51.194356 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e506d5ec-c6e6-4f26-ac22-9f545ca55ff9-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:04:51 crc kubenswrapper[4821]: I0309 19:04:51.194384 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbz2f\" (UniqueName: \"kubernetes.io/projected/77b8efea-c318-4f79-8727-2955f5815eda-kube-api-access-pbz2f\") on node \"crc\" DevicePath \"\"" Mar 09 19:04:51 crc kubenswrapper[4821]: I0309 19:04:51.194395 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77b8efea-c318-4f79-8727-2955f5815eda-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:04:51 crc kubenswrapper[4821]: I0309 19:04:51.194403 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7p58\" (UniqueName: \"kubernetes.io/projected/e506d5ec-c6e6-4f26-ac22-9f545ca55ff9-kube-api-access-m7p58\") on node \"crc\" DevicePath \"\"" Mar 09 19:04:51 crc kubenswrapper[4821]: I0309 19:04:51.526657 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-ad92-account-create-update-hzsg6" event={"ID":"e506d5ec-c6e6-4f26-ac22-9f545ca55ff9","Type":"ContainerDied","Data":"6c6fd4ce0d14635b1ff14cb89289383713b3f0c66af138e53423002bfd063d46"} Mar 09 19:04:51 crc kubenswrapper[4821]: I0309 19:04:51.526692 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c6fd4ce0d14635b1ff14cb89289383713b3f0c66af138e53423002bfd063d46" Mar 09 19:04:51 crc kubenswrapper[4821]: I0309 19:04:51.526709 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-ad92-account-create-update-hzsg6" Mar 09 19:04:51 crc kubenswrapper[4821]: I0309 19:04:51.528458 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-z7kvb" event={"ID":"77b8efea-c318-4f79-8727-2955f5815eda","Type":"ContainerDied","Data":"dd96933e18ea2641d3b18ce4ec4f7d9c07db9c3a33a93facb86c402ac641a0e5"} Mar 09 19:04:51 crc kubenswrapper[4821]: I0309 19:04:51.528486 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd96933e18ea2641d3b18ce4ec4f7d9c07db9c3a33a93facb86c402ac641a0e5" Mar 09 19:04:51 crc kubenswrapper[4821]: I0309 19:04:51.528504 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-z7kvb" Mar 09 19:04:51 crc kubenswrapper[4821]: I0309 19:04:51.528808 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:04:53 crc kubenswrapper[4821]: I0309 19:04:53.449882 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-62qlx"] Mar 09 19:04:53 crc kubenswrapper[4821]: E0309 19:04:53.450569 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77b8efea-c318-4f79-8727-2955f5815eda" containerName="mariadb-database-create" Mar 09 19:04:53 crc kubenswrapper[4821]: I0309 19:04:53.450587 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="77b8efea-c318-4f79-8727-2955f5815eda" containerName="mariadb-database-create" Mar 09 19:04:53 crc kubenswrapper[4821]: E0309 19:04:53.450624 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e506d5ec-c6e6-4f26-ac22-9f545ca55ff9" containerName="mariadb-account-create-update" Mar 09 19:04:53 crc kubenswrapper[4821]: I0309 19:04:53.450633 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="e506d5ec-c6e6-4f26-ac22-9f545ca55ff9" 
containerName="mariadb-account-create-update" Mar 09 19:04:53 crc kubenswrapper[4821]: I0309 19:04:53.450844 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="77b8efea-c318-4f79-8727-2955f5815eda" containerName="mariadb-database-create" Mar 09 19:04:53 crc kubenswrapper[4821]: I0309 19:04:53.450861 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="e506d5ec-c6e6-4f26-ac22-9f545ca55ff9" containerName="mariadb-account-create-update" Mar 09 19:04:53 crc kubenswrapper[4821]: I0309 19:04:53.451707 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-62qlx" Mar 09 19:04:53 crc kubenswrapper[4821]: I0309 19:04:53.455365 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Mar 09 19:04:53 crc kubenswrapper[4821]: I0309 19:04:53.455594 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-6qssp" Mar 09 19:04:53 crc kubenswrapper[4821]: I0309 19:04:53.475641 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-62qlx"] Mar 09 19:04:53 crc kubenswrapper[4821]: I0309 19:04:53.646880 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70261a59-d8c1-4cda-abbd-964027faef2e-config-data\") pod \"watcher-kuttl-db-sync-62qlx\" (UID: \"70261a59-d8c1-4cda-abbd-964027faef2e\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-62qlx" Mar 09 19:04:53 crc kubenswrapper[4821]: I0309 19:04:53.646943 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70261a59-d8c1-4cda-abbd-964027faef2e-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-62qlx\" (UID: \"70261a59-d8c1-4cda-abbd-964027faef2e\") " 
pod="watcher-kuttl-default/watcher-kuttl-db-sync-62qlx" Mar 09 19:04:53 crc kubenswrapper[4821]: I0309 19:04:53.647155 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwnl5\" (UniqueName: \"kubernetes.io/projected/70261a59-d8c1-4cda-abbd-964027faef2e-kube-api-access-cwnl5\") pod \"watcher-kuttl-db-sync-62qlx\" (UID: \"70261a59-d8c1-4cda-abbd-964027faef2e\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-62qlx" Mar 09 19:04:53 crc kubenswrapper[4821]: I0309 19:04:53.647191 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/70261a59-d8c1-4cda-abbd-964027faef2e-db-sync-config-data\") pod \"watcher-kuttl-db-sync-62qlx\" (UID: \"70261a59-d8c1-4cda-abbd-964027faef2e\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-62qlx" Mar 09 19:04:53 crc kubenswrapper[4821]: I0309 19:04:53.747994 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwnl5\" (UniqueName: \"kubernetes.io/projected/70261a59-d8c1-4cda-abbd-964027faef2e-kube-api-access-cwnl5\") pod \"watcher-kuttl-db-sync-62qlx\" (UID: \"70261a59-d8c1-4cda-abbd-964027faef2e\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-62qlx" Mar 09 19:04:53 crc kubenswrapper[4821]: I0309 19:04:53.748314 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/70261a59-d8c1-4cda-abbd-964027faef2e-db-sync-config-data\") pod \"watcher-kuttl-db-sync-62qlx\" (UID: \"70261a59-d8c1-4cda-abbd-964027faef2e\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-62qlx" Mar 09 19:04:53 crc kubenswrapper[4821]: I0309 19:04:53.748526 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70261a59-d8c1-4cda-abbd-964027faef2e-config-data\") pod 
\"watcher-kuttl-db-sync-62qlx\" (UID: \"70261a59-d8c1-4cda-abbd-964027faef2e\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-62qlx" Mar 09 19:04:53 crc kubenswrapper[4821]: I0309 19:04:53.748625 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70261a59-d8c1-4cda-abbd-964027faef2e-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-62qlx\" (UID: \"70261a59-d8c1-4cda-abbd-964027faef2e\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-62qlx" Mar 09 19:04:53 crc kubenswrapper[4821]: I0309 19:04:53.752966 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70261a59-d8c1-4cda-abbd-964027faef2e-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-62qlx\" (UID: \"70261a59-d8c1-4cda-abbd-964027faef2e\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-62qlx" Mar 09 19:04:53 crc kubenswrapper[4821]: I0309 19:04:53.756913 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70261a59-d8c1-4cda-abbd-964027faef2e-config-data\") pod \"watcher-kuttl-db-sync-62qlx\" (UID: \"70261a59-d8c1-4cda-abbd-964027faef2e\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-62qlx" Mar 09 19:04:53 crc kubenswrapper[4821]: I0309 19:04:53.758568 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/70261a59-d8c1-4cda-abbd-964027faef2e-db-sync-config-data\") pod \"watcher-kuttl-db-sync-62qlx\" (UID: \"70261a59-d8c1-4cda-abbd-964027faef2e\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-62qlx" Mar 09 19:04:53 crc kubenswrapper[4821]: I0309 19:04:53.763168 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwnl5\" (UniqueName: \"kubernetes.io/projected/70261a59-d8c1-4cda-abbd-964027faef2e-kube-api-access-cwnl5\") pod 
\"watcher-kuttl-db-sync-62qlx\" (UID: \"70261a59-d8c1-4cda-abbd-964027faef2e\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-62qlx" Mar 09 19:04:53 crc kubenswrapper[4821]: I0309 19:04:53.789334 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-62qlx" Mar 09 19:04:54 crc kubenswrapper[4821]: I0309 19:04:54.295998 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-62qlx"] Mar 09 19:04:54 crc kubenswrapper[4821]: W0309 19:04:54.304457 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70261a59_d8c1_4cda_abbd_964027faef2e.slice/crio-4646e567bbfeaf879ee803a6ed3198666f690051d147059807caba2d1a0cf98c WatchSource:0}: Error finding container 4646e567bbfeaf879ee803a6ed3198666f690051d147059807caba2d1a0cf98c: Status 404 returned error can't find the container with id 4646e567bbfeaf879ee803a6ed3198666f690051d147059807caba2d1a0cf98c Mar 09 19:04:54 crc kubenswrapper[4821]: I0309 19:04:54.551659 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-62qlx" event={"ID":"70261a59-d8c1-4cda-abbd-964027faef2e","Type":"ContainerStarted","Data":"3c4837eaf76e6a0333da9b6862498a1f1f48426a07ed7c1c19e08db7508665ab"} Mar 09 19:04:54 crc kubenswrapper[4821]: I0309 19:04:54.551903 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-62qlx" event={"ID":"70261a59-d8c1-4cda-abbd-964027faef2e","Type":"ContainerStarted","Data":"4646e567bbfeaf879ee803a6ed3198666f690051d147059807caba2d1a0cf98c"} Mar 09 19:04:54 crc kubenswrapper[4821]: I0309 19:04:54.576236 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-62qlx" podStartSLOduration=1.576213717 podStartE2EDuration="1.576213717s" podCreationTimestamp="2026-03-09 
19:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:04:54.568436626 +0000 UTC m=+2431.729812512" watchObservedRunningTime="2026-03-09 19:04:54.576213717 +0000 UTC m=+2431.737589573" Mar 09 19:04:57 crc kubenswrapper[4821]: I0309 19:04:57.586734 4821 generic.go:334] "Generic (PLEG): container finished" podID="70261a59-d8c1-4cda-abbd-964027faef2e" containerID="3c4837eaf76e6a0333da9b6862498a1f1f48426a07ed7c1c19e08db7508665ab" exitCode=0 Mar 09 19:04:57 crc kubenswrapper[4821]: I0309 19:04:57.586817 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-62qlx" event={"ID":"70261a59-d8c1-4cda-abbd-964027faef2e","Type":"ContainerDied","Data":"3c4837eaf76e6a0333da9b6862498a1f1f48426a07ed7c1c19e08db7508665ab"} Mar 09 19:04:59 crc kubenswrapper[4821]: I0309 19:04:59.017681 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-62qlx" Mar 09 19:04:59 crc kubenswrapper[4821]: I0309 19:04:59.137676 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70261a59-d8c1-4cda-abbd-964027faef2e-config-data\") pod \"70261a59-d8c1-4cda-abbd-964027faef2e\" (UID: \"70261a59-d8c1-4cda-abbd-964027faef2e\") " Mar 09 19:04:59 crc kubenswrapper[4821]: I0309 19:04:59.137755 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70261a59-d8c1-4cda-abbd-964027faef2e-combined-ca-bundle\") pod \"70261a59-d8c1-4cda-abbd-964027faef2e\" (UID: \"70261a59-d8c1-4cda-abbd-964027faef2e\") " Mar 09 19:04:59 crc kubenswrapper[4821]: I0309 19:04:59.137784 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwnl5\" (UniqueName: 
\"kubernetes.io/projected/70261a59-d8c1-4cda-abbd-964027faef2e-kube-api-access-cwnl5\") pod \"70261a59-d8c1-4cda-abbd-964027faef2e\" (UID: \"70261a59-d8c1-4cda-abbd-964027faef2e\") " Mar 09 19:04:59 crc kubenswrapper[4821]: I0309 19:04:59.137822 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/70261a59-d8c1-4cda-abbd-964027faef2e-db-sync-config-data\") pod \"70261a59-d8c1-4cda-abbd-964027faef2e\" (UID: \"70261a59-d8c1-4cda-abbd-964027faef2e\") " Mar 09 19:04:59 crc kubenswrapper[4821]: I0309 19:04:59.142936 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70261a59-d8c1-4cda-abbd-964027faef2e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "70261a59-d8c1-4cda-abbd-964027faef2e" (UID: "70261a59-d8c1-4cda-abbd-964027faef2e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:04:59 crc kubenswrapper[4821]: I0309 19:04:59.152682 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70261a59-d8c1-4cda-abbd-964027faef2e-kube-api-access-cwnl5" (OuterVolumeSpecName: "kube-api-access-cwnl5") pod "70261a59-d8c1-4cda-abbd-964027faef2e" (UID: "70261a59-d8c1-4cda-abbd-964027faef2e"). InnerVolumeSpecName "kube-api-access-cwnl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:04:59 crc kubenswrapper[4821]: I0309 19:04:59.159455 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70261a59-d8c1-4cda-abbd-964027faef2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70261a59-d8c1-4cda-abbd-964027faef2e" (UID: "70261a59-d8c1-4cda-abbd-964027faef2e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:04:59 crc kubenswrapper[4821]: I0309 19:04:59.178084 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70261a59-d8c1-4cda-abbd-964027faef2e-config-data" (OuterVolumeSpecName: "config-data") pod "70261a59-d8c1-4cda-abbd-964027faef2e" (UID: "70261a59-d8c1-4cda-abbd-964027faef2e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:04:59 crc kubenswrapper[4821]: I0309 19:04:59.239861 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70261a59-d8c1-4cda-abbd-964027faef2e-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:04:59 crc kubenswrapper[4821]: I0309 19:04:59.239900 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70261a59-d8c1-4cda-abbd-964027faef2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:04:59 crc kubenswrapper[4821]: I0309 19:04:59.239915 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwnl5\" (UniqueName: \"kubernetes.io/projected/70261a59-d8c1-4cda-abbd-964027faef2e-kube-api-access-cwnl5\") on node \"crc\" DevicePath \"\"" Mar 09 19:04:59 crc kubenswrapper[4821]: I0309 19:04:59.239929 4821 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/70261a59-d8c1-4cda-abbd-964027faef2e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:04:59 crc kubenswrapper[4821]: I0309 19:04:59.605284 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-62qlx" event={"ID":"70261a59-d8c1-4cda-abbd-964027faef2e","Type":"ContainerDied","Data":"4646e567bbfeaf879ee803a6ed3198666f690051d147059807caba2d1a0cf98c"} Mar 09 19:04:59 crc kubenswrapper[4821]: I0309 19:04:59.605346 4821 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="4646e567bbfeaf879ee803a6ed3198666f690051d147059807caba2d1a0cf98c" Mar 09 19:04:59 crc kubenswrapper[4821]: I0309 19:04:59.605374 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-62qlx" Mar 09 19:04:59 crc kubenswrapper[4821]: I0309 19:04:59.955595 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:04:59 crc kubenswrapper[4821]: E0309 19:04:59.956150 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70261a59-d8c1-4cda-abbd-964027faef2e" containerName="watcher-kuttl-db-sync" Mar 09 19:04:59 crc kubenswrapper[4821]: I0309 19:04:59.956165 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="70261a59-d8c1-4cda-abbd-964027faef2e" containerName="watcher-kuttl-db-sync" Mar 09 19:04:59 crc kubenswrapper[4821]: I0309 19:04:59.956353 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="70261a59-d8c1-4cda-abbd-964027faef2e" containerName="watcher-kuttl-db-sync" Mar 09 19:04:59 crc kubenswrapper[4821]: I0309 19:04:59.956907 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:04:59 crc kubenswrapper[4821]: I0309 19:04:59.958601 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Mar 09 19:04:59 crc kubenswrapper[4821]: I0309 19:04:59.958731 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-6qssp" Mar 09 19:04:59 crc kubenswrapper[4821]: I0309 19:04:59.963677 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:04:59 crc kubenswrapper[4821]: I0309 19:04:59.965206 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:04:59 crc kubenswrapper[4821]: I0309 19:04:59.970115 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:04:59 crc kubenswrapper[4821]: I0309 19:04:59.978011 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Mar 09 19:04:59 crc kubenswrapper[4821]: I0309 19:04:59.985713 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.017496 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.041062 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.041164 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.044025 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.073659 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d857c38e-6ba2-425a-ac01-2bd92e13589c-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"d857c38e-6ba2-425a-ac01-2bd92e13589c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.073698 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90305fc1-6810-430e-aa99-60dd72f96f3b-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"90305fc1-6810-430e-aa99-60dd72f96f3b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.073717 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjccs\" (UniqueName: \"kubernetes.io/projected/90305fc1-6810-430e-aa99-60dd72f96f3b-kube-api-access-zjccs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"90305fc1-6810-430e-aa99-60dd72f96f3b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.073738 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d857c38e-6ba2-425a-ac01-2bd92e13589c-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"d857c38e-6ba2-425a-ac01-2bd92e13589c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 
19:05:00.073753 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/90305fc1-6810-430e-aa99-60dd72f96f3b-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"90305fc1-6810-430e-aa99-60dd72f96f3b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.073770 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b220d47b-754f-44af-96d9-022e10d04eca-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"b220d47b-754f-44af-96d9-022e10d04eca\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.073786 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90305fc1-6810-430e-aa99-60dd72f96f3b-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"90305fc1-6810-430e-aa99-60dd72f96f3b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.073827 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d857c38e-6ba2-425a-ac01-2bd92e13589c-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"d857c38e-6ba2-425a-ac01-2bd92e13589c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.073860 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b220d47b-754f-44af-96d9-022e10d04eca-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"b220d47b-754f-44af-96d9-022e10d04eca\") " 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.073914 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b220d47b-754f-44af-96d9-022e10d04eca-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"b220d47b-754f-44af-96d9-022e10d04eca\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.073944 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8z8w\" (UniqueName: \"kubernetes.io/projected/d857c38e-6ba2-425a-ac01-2bd92e13589c-kube-api-access-q8z8w\") pod \"watcher-kuttl-api-0\" (UID: \"d857c38e-6ba2-425a-ac01-2bd92e13589c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.073978 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90305fc1-6810-430e-aa99-60dd72f96f3b-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"90305fc1-6810-430e-aa99-60dd72f96f3b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.074002 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/90305fc1-6810-430e-aa99-60dd72f96f3b-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"90305fc1-6810-430e-aa99-60dd72f96f3b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.074019 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8krd\" (UniqueName: 
\"kubernetes.io/projected/b220d47b-754f-44af-96d9-022e10d04eca-kube-api-access-r8krd\") pod \"watcher-kuttl-applier-0\" (UID: \"b220d47b-754f-44af-96d9-022e10d04eca\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.074045 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b220d47b-754f-44af-96d9-022e10d04eca-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"b220d47b-754f-44af-96d9-022e10d04eca\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.074070 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d857c38e-6ba2-425a-ac01-2bd92e13589c-logs\") pod \"watcher-kuttl-api-0\" (UID: \"d857c38e-6ba2-425a-ac01-2bd92e13589c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.074088 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d857c38e-6ba2-425a-ac01-2bd92e13589c-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"d857c38e-6ba2-425a-ac01-2bd92e13589c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.175556 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d857c38e-6ba2-425a-ac01-2bd92e13589c-logs\") pod \"watcher-kuttl-api-0\" (UID: \"d857c38e-6ba2-425a-ac01-2bd92e13589c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.175618 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/d857c38e-6ba2-425a-ac01-2bd92e13589c-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"d857c38e-6ba2-425a-ac01-2bd92e13589c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.175672 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d857c38e-6ba2-425a-ac01-2bd92e13589c-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"d857c38e-6ba2-425a-ac01-2bd92e13589c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.175698 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90305fc1-6810-430e-aa99-60dd72f96f3b-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"90305fc1-6810-430e-aa99-60dd72f96f3b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.175720 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjccs\" (UniqueName: \"kubernetes.io/projected/90305fc1-6810-430e-aa99-60dd72f96f3b-kube-api-access-zjccs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"90305fc1-6810-430e-aa99-60dd72f96f3b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.175745 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d857c38e-6ba2-425a-ac01-2bd92e13589c-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"d857c38e-6ba2-425a-ac01-2bd92e13589c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.175766 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/90305fc1-6810-430e-aa99-60dd72f96f3b-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"90305fc1-6810-430e-aa99-60dd72f96f3b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.175788 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b220d47b-754f-44af-96d9-022e10d04eca-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"b220d47b-754f-44af-96d9-022e10d04eca\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.175808 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90305fc1-6810-430e-aa99-60dd72f96f3b-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"90305fc1-6810-430e-aa99-60dd72f96f3b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.175848 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d857c38e-6ba2-425a-ac01-2bd92e13589c-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"d857c38e-6ba2-425a-ac01-2bd92e13589c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.175876 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b220d47b-754f-44af-96d9-022e10d04eca-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"b220d47b-754f-44af-96d9-022e10d04eca\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.175909 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b220d47b-754f-44af-96d9-022e10d04eca-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"b220d47b-754f-44af-96d9-022e10d04eca\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.175922 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d857c38e-6ba2-425a-ac01-2bd92e13589c-logs\") pod \"watcher-kuttl-api-0\" (UID: \"d857c38e-6ba2-425a-ac01-2bd92e13589c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.175940 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8z8w\" (UniqueName: \"kubernetes.io/projected/d857c38e-6ba2-425a-ac01-2bd92e13589c-kube-api-access-q8z8w\") pod \"watcher-kuttl-api-0\" (UID: \"d857c38e-6ba2-425a-ac01-2bd92e13589c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.175977 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90305fc1-6810-430e-aa99-60dd72f96f3b-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"90305fc1-6810-430e-aa99-60dd72f96f3b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.176006 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/90305fc1-6810-430e-aa99-60dd72f96f3b-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"90305fc1-6810-430e-aa99-60dd72f96f3b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.176029 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8krd\" (UniqueName: 
\"kubernetes.io/projected/b220d47b-754f-44af-96d9-022e10d04eca-kube-api-access-r8krd\") pod \"watcher-kuttl-applier-0\" (UID: \"b220d47b-754f-44af-96d9-022e10d04eca\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.176065 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b220d47b-754f-44af-96d9-022e10d04eca-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"b220d47b-754f-44af-96d9-022e10d04eca\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.176442 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b220d47b-754f-44af-96d9-022e10d04eca-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"b220d47b-754f-44af-96d9-022e10d04eca\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.176769 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90305fc1-6810-430e-aa99-60dd72f96f3b-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"90305fc1-6810-430e-aa99-60dd72f96f3b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.179860 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d857c38e-6ba2-425a-ac01-2bd92e13589c-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"d857c38e-6ba2-425a-ac01-2bd92e13589c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.179946 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b220d47b-754f-44af-96d9-022e10d04eca-config-data\") pod \"watcher-kuttl-applier-0\" (UID: 
\"b220d47b-754f-44af-96d9-022e10d04eca\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.180540 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90305fc1-6810-430e-aa99-60dd72f96f3b-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"90305fc1-6810-430e-aa99-60dd72f96f3b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.191958 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8z8w\" (UniqueName: \"kubernetes.io/projected/d857c38e-6ba2-425a-ac01-2bd92e13589c-kube-api-access-q8z8w\") pod \"watcher-kuttl-api-0\" (UID: \"d857c38e-6ba2-425a-ac01-2bd92e13589c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.191960 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8krd\" (UniqueName: \"kubernetes.io/projected/b220d47b-754f-44af-96d9-022e10d04eca-kube-api-access-r8krd\") pod \"watcher-kuttl-applier-0\" (UID: \"b220d47b-754f-44af-96d9-022e10d04eca\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.192500 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b220d47b-754f-44af-96d9-022e10d04eca-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"b220d47b-754f-44af-96d9-022e10d04eca\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.192691 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b220d47b-754f-44af-96d9-022e10d04eca-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: 
\"b220d47b-754f-44af-96d9-022e10d04eca\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.192829 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/90305fc1-6810-430e-aa99-60dd72f96f3b-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"90305fc1-6810-430e-aa99-60dd72f96f3b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.192945 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d857c38e-6ba2-425a-ac01-2bd92e13589c-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"d857c38e-6ba2-425a-ac01-2bd92e13589c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.193350 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/90305fc1-6810-430e-aa99-60dd72f96f3b-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"90305fc1-6810-430e-aa99-60dd72f96f3b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.193780 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d857c38e-6ba2-425a-ac01-2bd92e13589c-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"d857c38e-6ba2-425a-ac01-2bd92e13589c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.194560 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90305fc1-6810-430e-aa99-60dd72f96f3b-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"90305fc1-6810-430e-aa99-60dd72f96f3b\") 
" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.195154 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d857c38e-6ba2-425a-ac01-2bd92e13589c-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"d857c38e-6ba2-425a-ac01-2bd92e13589c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.202730 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjccs\" (UniqueName: \"kubernetes.io/projected/90305fc1-6810-430e-aa99-60dd72f96f3b-kube-api-access-zjccs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"90305fc1-6810-430e-aa99-60dd72f96f3b\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.288434 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.308670 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.366245 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.754898 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:05:00 crc kubenswrapper[4821]: W0309 19:05:00.758019 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb220d47b_754f_44af_96d9_022e10d04eca.slice/crio-62935e07bb02616c3398eab5701aaa26afb70e0881a44d26e7cbfc26b8245096 WatchSource:0}: Error finding container 62935e07bb02616c3398eab5701aaa26afb70e0881a44d26e7cbfc26b8245096: Status 404 returned error can't find the container with id 62935e07bb02616c3398eab5701aaa26afb70e0881a44d26e7cbfc26b8245096 Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.842013 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:05:00 crc kubenswrapper[4821]: W0309 19:05:00.856288 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90305fc1_6810_430e_aa99_60dd72f96f3b.slice/crio-78424e0fa573c89fd7368d429f82c0cb53034b8418cd64f2534d5fb71d4cdb47 WatchSource:0}: Error finding container 78424e0fa573c89fd7368d429f82c0cb53034b8418cd64f2534d5fb71d4cdb47: Status 404 returned error can't find the container with id 78424e0fa573c89fd7368d429f82c0cb53034b8418cd64f2534d5fb71d4cdb47 Mar 09 19:05:00 crc kubenswrapper[4821]: I0309 19:05:00.857728 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:05:00 crc kubenswrapper[4821]: W0309 19:05:00.873042 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd857c38e_6ba2_425a_ac01_2bd92e13589c.slice/crio-737392709ea5122b968664fb432bef9935fd8054567438348779549225acdb2d 
WatchSource:0}: Error finding container 737392709ea5122b968664fb432bef9935fd8054567438348779549225acdb2d: Status 404 returned error can't find the container with id 737392709ea5122b968664fb432bef9935fd8054567438348779549225acdb2d Mar 09 19:05:01 crc kubenswrapper[4821]: I0309 19:05:01.621521 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"90305fc1-6810-430e-aa99-60dd72f96f3b","Type":"ContainerStarted","Data":"663196d5dc08bc94bb0d0f17c704c6eeda1b57a2347b9e74e26f9385f9b61580"} Mar 09 19:05:01 crc kubenswrapper[4821]: I0309 19:05:01.621952 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"90305fc1-6810-430e-aa99-60dd72f96f3b","Type":"ContainerStarted","Data":"78424e0fa573c89fd7368d429f82c0cb53034b8418cd64f2534d5fb71d4cdb47"} Mar 09 19:05:01 crc kubenswrapper[4821]: I0309 19:05:01.642742 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"d857c38e-6ba2-425a-ac01-2bd92e13589c","Type":"ContainerStarted","Data":"70db1049a58f1211ef5f9a404949b007ae1c5193950c96b0e4e9625834c433dc"} Mar 09 19:05:01 crc kubenswrapper[4821]: I0309 19:05:01.642833 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"d857c38e-6ba2-425a-ac01-2bd92e13589c","Type":"ContainerStarted","Data":"0670ad2bdd2d97138be84b74e6e0fd2dc503be14bc1e162532d58b10fd358006"} Mar 09 19:05:01 crc kubenswrapper[4821]: I0309 19:05:01.642856 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"d857c38e-6ba2-425a-ac01-2bd92e13589c","Type":"ContainerStarted","Data":"737392709ea5122b968664fb432bef9935fd8054567438348779549225acdb2d"} Mar 09 19:05:01 crc kubenswrapper[4821]: I0309 19:05:01.650737 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"b220d47b-754f-44af-96d9-022e10d04eca","Type":"ContainerStarted","Data":"1d1b0bbbb348b5632bf9702642b54e7f54f97e8538f2bcc4e67c4f742df4092b"} Mar 09 19:05:01 crc kubenswrapper[4821]: I0309 19:05:01.650807 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"b220d47b-754f-44af-96d9-022e10d04eca","Type":"ContainerStarted","Data":"62935e07bb02616c3398eab5701aaa26afb70e0881a44d26e7cbfc26b8245096"} Mar 09 19:05:01 crc kubenswrapper[4821]: I0309 19:05:01.651239 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:01 crc kubenswrapper[4821]: I0309 19:05:01.661856 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.661832455 podStartE2EDuration="2.661832455s" podCreationTimestamp="2026-03-09 19:04:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:05:01.660581991 +0000 UTC m=+2438.821957857" watchObservedRunningTime="2026-03-09 19:05:01.661832455 +0000 UTC m=+2438.823208311" Mar 09 19:05:01 crc kubenswrapper[4821]: I0309 19:05:01.695728 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.695696204 podStartE2EDuration="2.695696204s" podCreationTimestamp="2026-03-09 19:04:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:05:01.680499861 +0000 UTC m=+2438.841875717" watchObservedRunningTime="2026-03-09 19:05:01.695696204 +0000 UTC m=+2438.857072060" Mar 09 19:05:01 crc kubenswrapper[4821]: I0309 19:05:01.713086 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.713058804 podStartE2EDuration="2.713058804s" podCreationTimestamp="2026-03-09 19:04:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:05:01.704174873 +0000 UTC m=+2438.865550739" watchObservedRunningTime="2026-03-09 19:05:01.713058804 +0000 UTC m=+2438.874434710" Mar 09 19:05:04 crc kubenswrapper[4821]: I0309 19:05:04.003784 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:05 crc kubenswrapper[4821]: I0309 19:05:05.289415 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:05 crc kubenswrapper[4821]: I0309 19:05:05.309370 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:10 crc kubenswrapper[4821]: I0309 19:05:10.289588 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:10 crc kubenswrapper[4821]: I0309 19:05:10.310134 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:10 crc kubenswrapper[4821]: I0309 19:05:10.322076 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:10 crc kubenswrapper[4821]: I0309 19:05:10.336787 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:10 crc kubenswrapper[4821]: I0309 19:05:10.367304 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:10 crc kubenswrapper[4821]: I0309 
19:05:10.404090 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:10 crc kubenswrapper[4821]: I0309 19:05:10.720479 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:10 crc kubenswrapper[4821]: I0309 19:05:10.733992 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:10 crc kubenswrapper[4821]: I0309 19:05:10.748927 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:10 crc kubenswrapper[4821]: I0309 19:05:10.751449 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:12 crc kubenswrapper[4821]: I0309 19:05:12.926445 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:05:12 crc kubenswrapper[4821]: I0309 19:05:12.927072 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="cc1db45c-1436-429e-8254-d136b94af071" containerName="ceilometer-central-agent" containerID="cri-o://91cb6fc192eabe6ce11bdd6323ca79c72802d60a4631d7b1c5a5bc7ce8ad1904" gracePeriod=30 Mar 09 19:05:12 crc kubenswrapper[4821]: I0309 19:05:12.927375 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="cc1db45c-1436-429e-8254-d136b94af071" containerName="proxy-httpd" containerID="cri-o://8d9951f44d2a9de4652696d2793579bf117bab29a5c23b0827a6ffbf2c6050ae" gracePeriod=30 Mar 09 19:05:12 crc kubenswrapper[4821]: I0309 19:05:12.927438 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" 
podUID="cc1db45c-1436-429e-8254-d136b94af071" containerName="ceilometer-notification-agent" containerID="cri-o://175dc9f92bcd2f03541da7806cfa84762b4504999b27532e4dbd8ff7e7ee1177" gracePeriod=30 Mar 09 19:05:12 crc kubenswrapper[4821]: I0309 19:05:12.927467 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="cc1db45c-1436-429e-8254-d136b94af071" containerName="sg-core" containerID="cri-o://01d811594105f47f70de96ec3cbc351b062d396dffcd97c7c1347d7dc83d52c3" gracePeriod=30 Mar 09 19:05:13 crc kubenswrapper[4821]: I0309 19:05:13.028097 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="cc1db45c-1436-429e-8254-d136b94af071" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.209:3000/\": read tcp 10.217.0.2:58200->10.217.0.209:3000: read: connection reset by peer" Mar 09 19:05:13 crc kubenswrapper[4821]: I0309 19:05:13.746059 4821 generic.go:334] "Generic (PLEG): container finished" podID="cc1db45c-1436-429e-8254-d136b94af071" containerID="8d9951f44d2a9de4652696d2793579bf117bab29a5c23b0827a6ffbf2c6050ae" exitCode=0 Mar 09 19:05:13 crc kubenswrapper[4821]: I0309 19:05:13.746100 4821 generic.go:334] "Generic (PLEG): container finished" podID="cc1db45c-1436-429e-8254-d136b94af071" containerID="01d811594105f47f70de96ec3cbc351b062d396dffcd97c7c1347d7dc83d52c3" exitCode=2 Mar 09 19:05:13 crc kubenswrapper[4821]: I0309 19:05:13.746114 4821 generic.go:334] "Generic (PLEG): container finished" podID="cc1db45c-1436-429e-8254-d136b94af071" containerID="91cb6fc192eabe6ce11bdd6323ca79c72802d60a4631d7b1c5a5bc7ce8ad1904" exitCode=0 Mar 09 19:05:13 crc kubenswrapper[4821]: I0309 19:05:13.746135 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"cc1db45c-1436-429e-8254-d136b94af071","Type":"ContainerDied","Data":"8d9951f44d2a9de4652696d2793579bf117bab29a5c23b0827a6ffbf2c6050ae"} 
Mar 09 19:05:13 crc kubenswrapper[4821]: I0309 19:05:13.746164 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"cc1db45c-1436-429e-8254-d136b94af071","Type":"ContainerDied","Data":"01d811594105f47f70de96ec3cbc351b062d396dffcd97c7c1347d7dc83d52c3"} Mar 09 19:05:13 crc kubenswrapper[4821]: I0309 19:05:13.746175 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"cc1db45c-1436-429e-8254-d136b94af071","Type":"ContainerDied","Data":"91cb6fc192eabe6ce11bdd6323ca79c72802d60a4631d7b1c5a5bc7ce8ad1904"} Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.754293 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.769262 4821 generic.go:334] "Generic (PLEG): container finished" podID="cc1db45c-1436-429e-8254-d136b94af071" containerID="175dc9f92bcd2f03541da7806cfa84762b4504999b27532e4dbd8ff7e7ee1177" exitCode=0 Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.769304 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"cc1db45c-1436-429e-8254-d136b94af071","Type":"ContainerDied","Data":"175dc9f92bcd2f03541da7806cfa84762b4504999b27532e4dbd8ff7e7ee1177"} Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.769344 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"cc1db45c-1436-429e-8254-d136b94af071","Type":"ContainerDied","Data":"068291a1585547daef74c072419ac7f8ca0bd89da37fcea3ce98ea02b1730cb8"} Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.769363 4821 scope.go:117] "RemoveContainer" containerID="8d9951f44d2a9de4652696d2793579bf117bab29a5c23b0827a6ffbf2c6050ae" Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.769390 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.834610 4821 scope.go:117] "RemoveContainer" containerID="01d811594105f47f70de96ec3cbc351b062d396dffcd97c7c1347d7dc83d52c3" Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.845950 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc1db45c-1436-429e-8254-d136b94af071-combined-ca-bundle\") pod \"cc1db45c-1436-429e-8254-d136b94af071\" (UID: \"cc1db45c-1436-429e-8254-d136b94af071\") " Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.846492 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc1db45c-1436-429e-8254-d136b94af071-scripts\") pod \"cc1db45c-1436-429e-8254-d136b94af071\" (UID: \"cc1db45c-1436-429e-8254-d136b94af071\") " Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.846658 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc1db45c-1436-429e-8254-d136b94af071-ceilometer-tls-certs\") pod \"cc1db45c-1436-429e-8254-d136b94af071\" (UID: \"cc1db45c-1436-429e-8254-d136b94af071\") " Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.846846 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc1db45c-1436-429e-8254-d136b94af071-log-httpd\") pod \"cc1db45c-1436-429e-8254-d136b94af071\" (UID: \"cc1db45c-1436-429e-8254-d136b94af071\") " Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.846998 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc1db45c-1436-429e-8254-d136b94af071-run-httpd\") pod \"cc1db45c-1436-429e-8254-d136b94af071\" (UID: \"cc1db45c-1436-429e-8254-d136b94af071\") " Mar 09 
19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.847144 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc1db45c-1436-429e-8254-d136b94af071-sg-core-conf-yaml\") pod \"cc1db45c-1436-429e-8254-d136b94af071\" (UID: \"cc1db45c-1436-429e-8254-d136b94af071\") " Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.847278 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc1db45c-1436-429e-8254-d136b94af071-config-data\") pod \"cc1db45c-1436-429e-8254-d136b94af071\" (UID: \"cc1db45c-1436-429e-8254-d136b94af071\") " Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.847438 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmgp2\" (UniqueName: \"kubernetes.io/projected/cc1db45c-1436-429e-8254-d136b94af071-kube-api-access-tmgp2\") pod \"cc1db45c-1436-429e-8254-d136b94af071\" (UID: \"cc1db45c-1436-429e-8254-d136b94af071\") " Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.847480 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc1db45c-1436-429e-8254-d136b94af071-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cc1db45c-1436-429e-8254-d136b94af071" (UID: "cc1db45c-1436-429e-8254-d136b94af071"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.847620 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc1db45c-1436-429e-8254-d136b94af071-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cc1db45c-1436-429e-8254-d136b94af071" (UID: "cc1db45c-1436-429e-8254-d136b94af071"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.848143 4821 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc1db45c-1436-429e-8254-d136b94af071-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.848246 4821 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc1db45c-1436-429e-8254-d136b94af071-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.867684 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc1db45c-1436-429e-8254-d136b94af071-scripts" (OuterVolumeSpecName: "scripts") pod "cc1db45c-1436-429e-8254-d136b94af071" (UID: "cc1db45c-1436-429e-8254-d136b94af071"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.867698 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc1db45c-1436-429e-8254-d136b94af071-kube-api-access-tmgp2" (OuterVolumeSpecName: "kube-api-access-tmgp2") pod "cc1db45c-1436-429e-8254-d136b94af071" (UID: "cc1db45c-1436-429e-8254-d136b94af071"). InnerVolumeSpecName "kube-api-access-tmgp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.871642 4821 scope.go:117] "RemoveContainer" containerID="175dc9f92bcd2f03541da7806cfa84762b4504999b27532e4dbd8ff7e7ee1177" Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.883258 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc1db45c-1436-429e-8254-d136b94af071-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cc1db45c-1436-429e-8254-d136b94af071" (UID: "cc1db45c-1436-429e-8254-d136b94af071"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.907652 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc1db45c-1436-429e-8254-d136b94af071-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc1db45c-1436-429e-8254-d136b94af071" (UID: "cc1db45c-1436-429e-8254-d136b94af071"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.922989 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc1db45c-1436-429e-8254-d136b94af071-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "cc1db45c-1436-429e-8254-d136b94af071" (UID: "cc1db45c-1436-429e-8254-d136b94af071"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.927578 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc1db45c-1436-429e-8254-d136b94af071-config-data" (OuterVolumeSpecName: "config-data") pod "cc1db45c-1436-429e-8254-d136b94af071" (UID: "cc1db45c-1436-429e-8254-d136b94af071"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.941564 4821 scope.go:117] "RemoveContainer" containerID="91cb6fc192eabe6ce11bdd6323ca79c72802d60a4631d7b1c5a5bc7ce8ad1904" Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.950033 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc1db45c-1436-429e-8254-d136b94af071-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.950067 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc1db45c-1436-429e-8254-d136b94af071-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.950076 4821 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc1db45c-1436-429e-8254-d136b94af071-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.950085 4821 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc1db45c-1436-429e-8254-d136b94af071-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.950093 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc1db45c-1436-429e-8254-d136b94af071-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.950101 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmgp2\" (UniqueName: \"kubernetes.io/projected/cc1db45c-1436-429e-8254-d136b94af071-kube-api-access-tmgp2\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.958185 4821 scope.go:117] "RemoveContainer" 
containerID="8d9951f44d2a9de4652696d2793579bf117bab29a5c23b0827a6ffbf2c6050ae" Mar 09 19:05:14 crc kubenswrapper[4821]: E0309 19:05:14.958589 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d9951f44d2a9de4652696d2793579bf117bab29a5c23b0827a6ffbf2c6050ae\": container with ID starting with 8d9951f44d2a9de4652696d2793579bf117bab29a5c23b0827a6ffbf2c6050ae not found: ID does not exist" containerID="8d9951f44d2a9de4652696d2793579bf117bab29a5c23b0827a6ffbf2c6050ae" Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.958759 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d9951f44d2a9de4652696d2793579bf117bab29a5c23b0827a6ffbf2c6050ae"} err="failed to get container status \"8d9951f44d2a9de4652696d2793579bf117bab29a5c23b0827a6ffbf2c6050ae\": rpc error: code = NotFound desc = could not find container \"8d9951f44d2a9de4652696d2793579bf117bab29a5c23b0827a6ffbf2c6050ae\": container with ID starting with 8d9951f44d2a9de4652696d2793579bf117bab29a5c23b0827a6ffbf2c6050ae not found: ID does not exist" Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.958863 4821 scope.go:117] "RemoveContainer" containerID="01d811594105f47f70de96ec3cbc351b062d396dffcd97c7c1347d7dc83d52c3" Mar 09 19:05:14 crc kubenswrapper[4821]: E0309 19:05:14.959261 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01d811594105f47f70de96ec3cbc351b062d396dffcd97c7c1347d7dc83d52c3\": container with ID starting with 01d811594105f47f70de96ec3cbc351b062d396dffcd97c7c1347d7dc83d52c3 not found: ID does not exist" containerID="01d811594105f47f70de96ec3cbc351b062d396dffcd97c7c1347d7dc83d52c3" Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.959287 4821 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"01d811594105f47f70de96ec3cbc351b062d396dffcd97c7c1347d7dc83d52c3"} err="failed to get container status \"01d811594105f47f70de96ec3cbc351b062d396dffcd97c7c1347d7dc83d52c3\": rpc error: code = NotFound desc = could not find container \"01d811594105f47f70de96ec3cbc351b062d396dffcd97c7c1347d7dc83d52c3\": container with ID starting with 01d811594105f47f70de96ec3cbc351b062d396dffcd97c7c1347d7dc83d52c3 not found: ID does not exist" Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.959304 4821 scope.go:117] "RemoveContainer" containerID="175dc9f92bcd2f03541da7806cfa84762b4504999b27532e4dbd8ff7e7ee1177" Mar 09 19:05:14 crc kubenswrapper[4821]: E0309 19:05:14.959578 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"175dc9f92bcd2f03541da7806cfa84762b4504999b27532e4dbd8ff7e7ee1177\": container with ID starting with 175dc9f92bcd2f03541da7806cfa84762b4504999b27532e4dbd8ff7e7ee1177 not found: ID does not exist" containerID="175dc9f92bcd2f03541da7806cfa84762b4504999b27532e4dbd8ff7e7ee1177" Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.959664 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"175dc9f92bcd2f03541da7806cfa84762b4504999b27532e4dbd8ff7e7ee1177"} err="failed to get container status \"175dc9f92bcd2f03541da7806cfa84762b4504999b27532e4dbd8ff7e7ee1177\": rpc error: code = NotFound desc = could not find container \"175dc9f92bcd2f03541da7806cfa84762b4504999b27532e4dbd8ff7e7ee1177\": container with ID starting with 175dc9f92bcd2f03541da7806cfa84762b4504999b27532e4dbd8ff7e7ee1177 not found: ID does not exist" Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.959751 4821 scope.go:117] "RemoveContainer" containerID="91cb6fc192eabe6ce11bdd6323ca79c72802d60a4631d7b1c5a5bc7ce8ad1904" Mar 09 19:05:14 crc kubenswrapper[4821]: E0309 19:05:14.960088 4821 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"91cb6fc192eabe6ce11bdd6323ca79c72802d60a4631d7b1c5a5bc7ce8ad1904\": container with ID starting with 91cb6fc192eabe6ce11bdd6323ca79c72802d60a4631d7b1c5a5bc7ce8ad1904 not found: ID does not exist" containerID="91cb6fc192eabe6ce11bdd6323ca79c72802d60a4631d7b1c5a5bc7ce8ad1904" Mar 09 19:05:14 crc kubenswrapper[4821]: I0309 19:05:14.960126 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91cb6fc192eabe6ce11bdd6323ca79c72802d60a4631d7b1c5a5bc7ce8ad1904"} err="failed to get container status \"91cb6fc192eabe6ce11bdd6323ca79c72802d60a4631d7b1c5a5bc7ce8ad1904\": rpc error: code = NotFound desc = could not find container \"91cb6fc192eabe6ce11bdd6323ca79c72802d60a4631d7b1c5a5bc7ce8ad1904\": container with ID starting with 91cb6fc192eabe6ce11bdd6323ca79c72802d60a4631d7b1c5a5bc7ce8ad1904 not found: ID does not exist" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.115424 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.138368 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.149891 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:05:15 crc kubenswrapper[4821]: E0309 19:05:15.150232 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc1db45c-1436-429e-8254-d136b94af071" containerName="sg-core" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.150250 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc1db45c-1436-429e-8254-d136b94af071" containerName="sg-core" Mar 09 19:05:15 crc kubenswrapper[4821]: E0309 19:05:15.150269 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc1db45c-1436-429e-8254-d136b94af071" 
containerName="ceilometer-central-agent" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.150277 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc1db45c-1436-429e-8254-d136b94af071" containerName="ceilometer-central-agent" Mar 09 19:05:15 crc kubenswrapper[4821]: E0309 19:05:15.150295 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc1db45c-1436-429e-8254-d136b94af071" containerName="proxy-httpd" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.150302 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc1db45c-1436-429e-8254-d136b94af071" containerName="proxy-httpd" Mar 09 19:05:15 crc kubenswrapper[4821]: E0309 19:05:15.150338 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc1db45c-1436-429e-8254-d136b94af071" containerName="ceilometer-notification-agent" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.150346 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc1db45c-1436-429e-8254-d136b94af071" containerName="ceilometer-notification-agent" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.150519 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc1db45c-1436-429e-8254-d136b94af071" containerName="sg-core" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.150532 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc1db45c-1436-429e-8254-d136b94af071" containerName="ceilometer-central-agent" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.150547 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc1db45c-1436-429e-8254-d136b94af071" containerName="ceilometer-notification-agent" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.150558 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc1db45c-1436-429e-8254-d136b94af071" containerName="proxy-httpd" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.151961 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.153742 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.154306 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.154526 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.165573 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.254028 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-config-data\") pod \"ceilometer-0\" (UID: \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.254076 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb9w7\" (UniqueName: \"kubernetes.io/projected/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-kube-api-access-rb9w7\") pod \"ceilometer-0\" (UID: \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.254108 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-log-httpd\") pod \"ceilometer-0\" (UID: \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.254153 4821 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.254167 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.254225 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-scripts\") pod \"ceilometer-0\" (UID: \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.254261 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-run-httpd\") pod \"ceilometer-0\" (UID: \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.254275 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.361000 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-config-data\") pod \"ceilometer-0\" (UID: \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.361040 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb9w7\" (UniqueName: \"kubernetes.io/projected/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-kube-api-access-rb9w7\") pod \"ceilometer-0\" (UID: \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.361064 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-log-httpd\") pod \"ceilometer-0\" (UID: \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.361103 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.361119 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.361169 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-scripts\") pod \"ceilometer-0\" (UID: 
\"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.361206 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-run-httpd\") pod \"ceilometer-0\" (UID: \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.361220 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.361980 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-run-httpd\") pod \"ceilometer-0\" (UID: \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.362412 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-log-httpd\") pod \"ceilometer-0\" (UID: \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.365205 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.369792 4821 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-scripts\") pod \"ceilometer-0\" (UID: \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.372116 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.373244 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.378342 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-config-data\") pod \"ceilometer-0\" (UID: \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.383835 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb9w7\" (UniqueName: \"kubernetes.io/projected/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-kube-api-access-rb9w7\") pod \"ceilometer-0\" (UID: \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.469574 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.564082 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc1db45c-1436-429e-8254-d136b94af071" path="/var/lib/kubelet/pods/cc1db45c-1436-429e-8254-d136b94af071/volumes" Mar 09 19:05:15 crc kubenswrapper[4821]: I0309 19:05:15.953399 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:05:15 crc kubenswrapper[4821]: W0309 19:05:15.954789 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd129ef6_f2bc_442d_a7e1_a2100ed32ac7.slice/crio-6238d6497f706351e7ea83334b479749a6729a5959dd19f2fec9229d29055ee1 WatchSource:0}: Error finding container 6238d6497f706351e7ea83334b479749a6729a5959dd19f2fec9229d29055ee1: Status 404 returned error can't find the container with id 6238d6497f706351e7ea83334b479749a6729a5959dd19f2fec9229d29055ee1 Mar 09 19:05:16 crc kubenswrapper[4821]: I0309 19:05:16.490163 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-62qlx"] Mar 09 19:05:16 crc kubenswrapper[4821]: I0309 19:05:16.497512 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-62qlx"] Mar 09 19:05:16 crc kubenswrapper[4821]: I0309 19:05:16.524458 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcherad92-account-delete-lxwjg"] Mar 09 19:05:16 crc kubenswrapper[4821]: I0309 19:05:16.528055 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherad92-account-delete-lxwjg" Mar 09 19:05:16 crc kubenswrapper[4821]: I0309 19:05:16.548118 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherad92-account-delete-lxwjg"] Mar 09 19:05:16 crc kubenswrapper[4821]: I0309 19:05:16.577474 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4jkl\" (UniqueName: \"kubernetes.io/projected/89dc385d-5828-40c6-86a2-53e2b0f2ad9b-kube-api-access-f4jkl\") pod \"watcherad92-account-delete-lxwjg\" (UID: \"89dc385d-5828-40c6-86a2-53e2b0f2ad9b\") " pod="watcher-kuttl-default/watcherad92-account-delete-lxwjg" Mar 09 19:05:16 crc kubenswrapper[4821]: I0309 19:05:16.577535 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89dc385d-5828-40c6-86a2-53e2b0f2ad9b-operator-scripts\") pod \"watcherad92-account-delete-lxwjg\" (UID: \"89dc385d-5828-40c6-86a2-53e2b0f2ad9b\") " pod="watcher-kuttl-default/watcherad92-account-delete-lxwjg" Mar 09 19:05:16 crc kubenswrapper[4821]: I0309 19:05:16.619686 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:05:16 crc kubenswrapper[4821]: I0309 19:05:16.619881 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="90305fc1-6810-430e-aa99-60dd72f96f3b" containerName="watcher-decision-engine" containerID="cri-o://663196d5dc08bc94bb0d0f17c704c6eeda1b57a2347b9e74e26f9385f9b61580" gracePeriod=30 Mar 09 19:05:16 crc kubenswrapper[4821]: I0309 19:05:16.683283 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4jkl\" (UniqueName: \"kubernetes.io/projected/89dc385d-5828-40c6-86a2-53e2b0f2ad9b-kube-api-access-f4jkl\") pod 
\"watcherad92-account-delete-lxwjg\" (UID: \"89dc385d-5828-40c6-86a2-53e2b0f2ad9b\") " pod="watcher-kuttl-default/watcherad92-account-delete-lxwjg" Mar 09 19:05:16 crc kubenswrapper[4821]: I0309 19:05:16.683364 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89dc385d-5828-40c6-86a2-53e2b0f2ad9b-operator-scripts\") pod \"watcherad92-account-delete-lxwjg\" (UID: \"89dc385d-5828-40c6-86a2-53e2b0f2ad9b\") " pod="watcher-kuttl-default/watcherad92-account-delete-lxwjg" Mar 09 19:05:16 crc kubenswrapper[4821]: I0309 19:05:16.684016 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:05:16 crc kubenswrapper[4821]: I0309 19:05:16.684148 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89dc385d-5828-40c6-86a2-53e2b0f2ad9b-operator-scripts\") pod \"watcherad92-account-delete-lxwjg\" (UID: \"89dc385d-5828-40c6-86a2-53e2b0f2ad9b\") " pod="watcher-kuttl-default/watcherad92-account-delete-lxwjg" Mar 09 19:05:16 crc kubenswrapper[4821]: I0309 19:05:16.684243 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="d857c38e-6ba2-425a-ac01-2bd92e13589c" containerName="watcher-kuttl-api-log" containerID="cri-o://0670ad2bdd2d97138be84b74e6e0fd2dc503be14bc1e162532d58b10fd358006" gracePeriod=30 Mar 09 19:05:16 crc kubenswrapper[4821]: I0309 19:05:16.684567 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="d857c38e-6ba2-425a-ac01-2bd92e13589c" containerName="watcher-api" containerID="cri-o://70db1049a58f1211ef5f9a404949b007ae1c5193950c96b0e4e9625834c433dc" gracePeriod=30 Mar 09 19:05:16 crc kubenswrapper[4821]: I0309 19:05:16.710082 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:05:16 crc kubenswrapper[4821]: I0309 19:05:16.710297 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="b220d47b-754f-44af-96d9-022e10d04eca" containerName="watcher-applier" containerID="cri-o://1d1b0bbbb348b5632bf9702642b54e7f54f97e8538f2bcc4e67c4f742df4092b" gracePeriod=30 Mar 09 19:05:16 crc kubenswrapper[4821]: I0309 19:05:16.714101 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4jkl\" (UniqueName: \"kubernetes.io/projected/89dc385d-5828-40c6-86a2-53e2b0f2ad9b-kube-api-access-f4jkl\") pod \"watcherad92-account-delete-lxwjg\" (UID: \"89dc385d-5828-40c6-86a2-53e2b0f2ad9b\") " pod="watcher-kuttl-default/watcherad92-account-delete-lxwjg" Mar 09 19:05:16 crc kubenswrapper[4821]: I0309 19:05:16.789243 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7","Type":"ContainerStarted","Data":"826c5921ee8b342f9c1afe35988a0f34709b382512348a582d709c704bb83721"} Mar 09 19:05:16 crc kubenswrapper[4821]: I0309 19:05:16.789519 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7","Type":"ContainerStarted","Data":"6238d6497f706351e7ea83334b479749a6729a5959dd19f2fec9229d29055ee1"} Mar 09 19:05:16 crc kubenswrapper[4821]: I0309 19:05:16.931663 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherad92-account-delete-lxwjg" Mar 09 19:05:17 crc kubenswrapper[4821]: I0309 19:05:17.517086 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherad92-account-delete-lxwjg"] Mar 09 19:05:17 crc kubenswrapper[4821]: I0309 19:05:17.564176 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70261a59-d8c1-4cda-abbd-964027faef2e" path="/var/lib/kubelet/pods/70261a59-d8c1-4cda-abbd-964027faef2e/volumes" Mar 09 19:05:17 crc kubenswrapper[4821]: I0309 19:05:17.798851 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7","Type":"ContainerStarted","Data":"359aa44545fa2794f5c26d9b929ca62004c83b0bdfa951c4807380cd9e710604"} Mar 09 19:05:17 crc kubenswrapper[4821]: I0309 19:05:17.800975 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherad92-account-delete-lxwjg" event={"ID":"89dc385d-5828-40c6-86a2-53e2b0f2ad9b","Type":"ContainerStarted","Data":"053692b228347d5e3784a8b515e76cc417ed78184e77822ea121eca819e42108"} Mar 09 19:05:17 crc kubenswrapper[4821]: I0309 19:05:17.803027 4821 generic.go:334] "Generic (PLEG): container finished" podID="d857c38e-6ba2-425a-ac01-2bd92e13589c" containerID="70db1049a58f1211ef5f9a404949b007ae1c5193950c96b0e4e9625834c433dc" exitCode=0 Mar 09 19:05:17 crc kubenswrapper[4821]: I0309 19:05:17.803151 4821 generic.go:334] "Generic (PLEG): container finished" podID="d857c38e-6ba2-425a-ac01-2bd92e13589c" containerID="0670ad2bdd2d97138be84b74e6e0fd2dc503be14bc1e162532d58b10fd358006" exitCode=143 Mar 09 19:05:17 crc kubenswrapper[4821]: I0309 19:05:17.803242 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"d857c38e-6ba2-425a-ac01-2bd92e13589c","Type":"ContainerDied","Data":"70db1049a58f1211ef5f9a404949b007ae1c5193950c96b0e4e9625834c433dc"} Mar 09 
19:05:17 crc kubenswrapper[4821]: I0309 19:05:17.803350 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"d857c38e-6ba2-425a-ac01-2bd92e13589c","Type":"ContainerDied","Data":"0670ad2bdd2d97138be84b74e6e0fd2dc503be14bc1e162532d58b10fd358006"} Mar 09 19:05:17 crc kubenswrapper[4821]: I0309 19:05:17.804834 4821 generic.go:334] "Generic (PLEG): container finished" podID="b220d47b-754f-44af-96d9-022e10d04eca" containerID="1d1b0bbbb348b5632bf9702642b54e7f54f97e8538f2bcc4e67c4f742df4092b" exitCode=0 Mar 09 19:05:17 crc kubenswrapper[4821]: I0309 19:05:17.804918 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"b220d47b-754f-44af-96d9-022e10d04eca","Type":"ContainerDied","Data":"1d1b0bbbb348b5632bf9702642b54e7f54f97e8538f2bcc4e67c4f742df4092b"} Mar 09 19:05:17 crc kubenswrapper[4821]: I0309 19:05:17.804994 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"b220d47b-754f-44af-96d9-022e10d04eca","Type":"ContainerDied","Data":"62935e07bb02616c3398eab5701aaa26afb70e0881a44d26e7cbfc26b8245096"} Mar 09 19:05:17 crc kubenswrapper[4821]: I0309 19:05:17.805054 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62935e07bb02616c3398eab5701aaa26afb70e0881a44d26e7cbfc26b8245096" Mar 09 19:05:17 crc kubenswrapper[4821]: I0309 19:05:17.825411 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:17 crc kubenswrapper[4821]: I0309 19:05:17.949811 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.018177 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b220d47b-754f-44af-96d9-022e10d04eca-cert-memcached-mtls\") pod \"b220d47b-754f-44af-96d9-022e10d04eca\" (UID: \"b220d47b-754f-44af-96d9-022e10d04eca\") " Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.018297 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b220d47b-754f-44af-96d9-022e10d04eca-logs\") pod \"b220d47b-754f-44af-96d9-022e10d04eca\" (UID: \"b220d47b-754f-44af-96d9-022e10d04eca\") " Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.018341 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b220d47b-754f-44af-96d9-022e10d04eca-combined-ca-bundle\") pod \"b220d47b-754f-44af-96d9-022e10d04eca\" (UID: \"b220d47b-754f-44af-96d9-022e10d04eca\") " Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.018371 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b220d47b-754f-44af-96d9-022e10d04eca-config-data\") pod \"b220d47b-754f-44af-96d9-022e10d04eca\" (UID: \"b220d47b-754f-44af-96d9-022e10d04eca\") " Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.018459 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8krd\" (UniqueName: \"kubernetes.io/projected/b220d47b-754f-44af-96d9-022e10d04eca-kube-api-access-r8krd\") pod \"b220d47b-754f-44af-96d9-022e10d04eca\" (UID: \"b220d47b-754f-44af-96d9-022e10d04eca\") " Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.018949 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/b220d47b-754f-44af-96d9-022e10d04eca-logs" (OuterVolumeSpecName: "logs") pod "b220d47b-754f-44af-96d9-022e10d04eca" (UID: "b220d47b-754f-44af-96d9-022e10d04eca"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.027992 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b220d47b-754f-44af-96d9-022e10d04eca-kube-api-access-r8krd" (OuterVolumeSpecName: "kube-api-access-r8krd") pod "b220d47b-754f-44af-96d9-022e10d04eca" (UID: "b220d47b-754f-44af-96d9-022e10d04eca"). InnerVolumeSpecName "kube-api-access-r8krd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.053439 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b220d47b-754f-44af-96d9-022e10d04eca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b220d47b-754f-44af-96d9-022e10d04eca" (UID: "b220d47b-754f-44af-96d9-022e10d04eca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.062513 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b220d47b-754f-44af-96d9-022e10d04eca-config-data" (OuterVolumeSpecName: "config-data") pod "b220d47b-754f-44af-96d9-022e10d04eca" (UID: "b220d47b-754f-44af-96d9-022e10d04eca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.110550 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b220d47b-754f-44af-96d9-022e10d04eca-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "b220d47b-754f-44af-96d9-022e10d04eca" (UID: "b220d47b-754f-44af-96d9-022e10d04eca"). 
InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.120047 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d857c38e-6ba2-425a-ac01-2bd92e13589c-combined-ca-bundle\") pod \"d857c38e-6ba2-425a-ac01-2bd92e13589c\" (UID: \"d857c38e-6ba2-425a-ac01-2bd92e13589c\") " Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.120139 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d857c38e-6ba2-425a-ac01-2bd92e13589c-logs\") pod \"d857c38e-6ba2-425a-ac01-2bd92e13589c\" (UID: \"d857c38e-6ba2-425a-ac01-2bd92e13589c\") " Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.120222 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d857c38e-6ba2-425a-ac01-2bd92e13589c-config-data\") pod \"d857c38e-6ba2-425a-ac01-2bd92e13589c\" (UID: \"d857c38e-6ba2-425a-ac01-2bd92e13589c\") " Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.120294 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8z8w\" (UniqueName: \"kubernetes.io/projected/d857c38e-6ba2-425a-ac01-2bd92e13589c-kube-api-access-q8z8w\") pod \"d857c38e-6ba2-425a-ac01-2bd92e13589c\" (UID: \"d857c38e-6ba2-425a-ac01-2bd92e13589c\") " Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.120382 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d857c38e-6ba2-425a-ac01-2bd92e13589c-cert-memcached-mtls\") pod \"d857c38e-6ba2-425a-ac01-2bd92e13589c\" (UID: \"d857c38e-6ba2-425a-ac01-2bd92e13589c\") " Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.120423 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d857c38e-6ba2-425a-ac01-2bd92e13589c-custom-prometheus-ca\") pod \"d857c38e-6ba2-425a-ac01-2bd92e13589c\" (UID: \"d857c38e-6ba2-425a-ac01-2bd92e13589c\") " Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.120815 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8krd\" (UniqueName: \"kubernetes.io/projected/b220d47b-754f-44af-96d9-022e10d04eca-kube-api-access-r8krd\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.120837 4821 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b220d47b-754f-44af-96d9-022e10d04eca-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.120850 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b220d47b-754f-44af-96d9-022e10d04eca-logs\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.120865 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b220d47b-754f-44af-96d9-022e10d04eca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.120878 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b220d47b-754f-44af-96d9-022e10d04eca-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.121029 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d857c38e-6ba2-425a-ac01-2bd92e13589c-logs" (OuterVolumeSpecName: "logs") pod "d857c38e-6ba2-425a-ac01-2bd92e13589c" (UID: "d857c38e-6ba2-425a-ac01-2bd92e13589c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.126487 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d857c38e-6ba2-425a-ac01-2bd92e13589c-kube-api-access-q8z8w" (OuterVolumeSpecName: "kube-api-access-q8z8w") pod "d857c38e-6ba2-425a-ac01-2bd92e13589c" (UID: "d857c38e-6ba2-425a-ac01-2bd92e13589c"). InnerVolumeSpecName "kube-api-access-q8z8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.164903 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d857c38e-6ba2-425a-ac01-2bd92e13589c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d857c38e-6ba2-425a-ac01-2bd92e13589c" (UID: "d857c38e-6ba2-425a-ac01-2bd92e13589c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.167555 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d857c38e-6ba2-425a-ac01-2bd92e13589c-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "d857c38e-6ba2-425a-ac01-2bd92e13589c" (UID: "d857c38e-6ba2-425a-ac01-2bd92e13589c"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.187486 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d857c38e-6ba2-425a-ac01-2bd92e13589c-config-data" (OuterVolumeSpecName: "config-data") pod "d857c38e-6ba2-425a-ac01-2bd92e13589c" (UID: "d857c38e-6ba2-425a-ac01-2bd92e13589c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.222742 4821 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d857c38e-6ba2-425a-ac01-2bd92e13589c-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.222774 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d857c38e-6ba2-425a-ac01-2bd92e13589c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.222786 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d857c38e-6ba2-425a-ac01-2bd92e13589c-logs\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.222797 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d857c38e-6ba2-425a-ac01-2bd92e13589c-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.222806 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8z8w\" (UniqueName: \"kubernetes.io/projected/d857c38e-6ba2-425a-ac01-2bd92e13589c-kube-api-access-q8z8w\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.240095 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d857c38e-6ba2-425a-ac01-2bd92e13589c-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "d857c38e-6ba2-425a-ac01-2bd92e13589c" (UID: "d857c38e-6ba2-425a-ac01-2bd92e13589c"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.324377 4821 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d857c38e-6ba2-425a-ac01-2bd92e13589c-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.812576 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherad92-account-delete-lxwjg" event={"ID":"89dc385d-5828-40c6-86a2-53e2b0f2ad9b","Type":"ContainerDied","Data":"bbf41a98e9a221c04d5e5acc3d9145917b59d65cf1ce36493189851b51caee25"} Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.812543 4821 generic.go:334] "Generic (PLEG): container finished" podID="89dc385d-5828-40c6-86a2-53e2b0f2ad9b" containerID="bbf41a98e9a221c04d5e5acc3d9145917b59d65cf1ce36493189851b51caee25" exitCode=0 Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.815610 4821 generic.go:334] "Generic (PLEG): container finished" podID="90305fc1-6810-430e-aa99-60dd72f96f3b" containerID="663196d5dc08bc94bb0d0f17c704c6eeda1b57a2347b9e74e26f9385f9b61580" exitCode=0 Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.815656 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"90305fc1-6810-430e-aa99-60dd72f96f3b","Type":"ContainerDied","Data":"663196d5dc08bc94bb0d0f17c704c6eeda1b57a2347b9e74e26f9385f9b61580"} Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.817504 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"d857c38e-6ba2-425a-ac01-2bd92e13589c","Type":"ContainerDied","Data":"737392709ea5122b968664fb432bef9935fd8054567438348779549225acdb2d"} Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.817534 4821 scope.go:117] "RemoveContainer" containerID="70db1049a58f1211ef5f9a404949b007ae1c5193950c96b0e4e9625834c433dc" Mar 09 
19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.817648 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.833444 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.840782 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7","Type":"ContainerStarted","Data":"b16e09759ff1d9de62f4b881f11a5397f3c96cbfdc027525ff898baaf6bf293b"}
Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.873788 4821 scope.go:117] "RemoveContainer" containerID="0670ad2bdd2d97138be84b74e6e0fd2dc503be14bc1e162532d58b10fd358006"
Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.881818 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.900383 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.909370 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.916206 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Mar 09 19:05:18 crc kubenswrapper[4821]: I0309 19:05:18.952480 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:05:19 crc kubenswrapper[4821]: I0309 19:05:19.150186 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/90305fc1-6810-430e-aa99-60dd72f96f3b-custom-prometheus-ca\") pod \"90305fc1-6810-430e-aa99-60dd72f96f3b\" (UID: \"90305fc1-6810-430e-aa99-60dd72f96f3b\") "
Mar 09 19:05:19 crc kubenswrapper[4821]: I0309 19:05:19.150256 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90305fc1-6810-430e-aa99-60dd72f96f3b-logs\") pod \"90305fc1-6810-430e-aa99-60dd72f96f3b\" (UID: \"90305fc1-6810-430e-aa99-60dd72f96f3b\") "
Mar 09 19:05:19 crc kubenswrapper[4821]: I0309 19:05:19.150386 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjccs\" (UniqueName: \"kubernetes.io/projected/90305fc1-6810-430e-aa99-60dd72f96f3b-kube-api-access-zjccs\") pod \"90305fc1-6810-430e-aa99-60dd72f96f3b\" (UID: \"90305fc1-6810-430e-aa99-60dd72f96f3b\") "
Mar 09 19:05:19 crc kubenswrapper[4821]: I0309 19:05:19.150405 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90305fc1-6810-430e-aa99-60dd72f96f3b-combined-ca-bundle\") pod \"90305fc1-6810-430e-aa99-60dd72f96f3b\" (UID: \"90305fc1-6810-430e-aa99-60dd72f96f3b\") "
Mar 09 19:05:19 crc kubenswrapper[4821]: I0309 19:05:19.150497 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/90305fc1-6810-430e-aa99-60dd72f96f3b-cert-memcached-mtls\") pod \"90305fc1-6810-430e-aa99-60dd72f96f3b\" (UID: \"90305fc1-6810-430e-aa99-60dd72f96f3b\") "
Mar 09 19:05:19 crc kubenswrapper[4821]: I0309 19:05:19.150517 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90305fc1-6810-430e-aa99-60dd72f96f3b-config-data\") pod \"90305fc1-6810-430e-aa99-60dd72f96f3b\" (UID: \"90305fc1-6810-430e-aa99-60dd72f96f3b\") "
Mar 09 19:05:19 crc kubenswrapper[4821]: I0309 19:05:19.151037 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90305fc1-6810-430e-aa99-60dd72f96f3b-logs" (OuterVolumeSpecName: "logs") pod "90305fc1-6810-430e-aa99-60dd72f96f3b" (UID: "90305fc1-6810-430e-aa99-60dd72f96f3b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:05:19 crc kubenswrapper[4821]: I0309 19:05:19.154376 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90305fc1-6810-430e-aa99-60dd72f96f3b-kube-api-access-zjccs" (OuterVolumeSpecName: "kube-api-access-zjccs") pod "90305fc1-6810-430e-aa99-60dd72f96f3b" (UID: "90305fc1-6810-430e-aa99-60dd72f96f3b"). InnerVolumeSpecName "kube-api-access-zjccs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:05:19 crc kubenswrapper[4821]: I0309 19:05:19.170358 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90305fc1-6810-430e-aa99-60dd72f96f3b-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "90305fc1-6810-430e-aa99-60dd72f96f3b" (UID: "90305fc1-6810-430e-aa99-60dd72f96f3b"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:05:19 crc kubenswrapper[4821]: I0309 19:05:19.174546 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90305fc1-6810-430e-aa99-60dd72f96f3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90305fc1-6810-430e-aa99-60dd72f96f3b" (UID: "90305fc1-6810-430e-aa99-60dd72f96f3b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:05:19 crc kubenswrapper[4821]: I0309 19:05:19.204492 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90305fc1-6810-430e-aa99-60dd72f96f3b-config-data" (OuterVolumeSpecName: "config-data") pod "90305fc1-6810-430e-aa99-60dd72f96f3b" (UID: "90305fc1-6810-430e-aa99-60dd72f96f3b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:05:19 crc kubenswrapper[4821]: I0309 19:05:19.244536 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90305fc1-6810-430e-aa99-60dd72f96f3b-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "90305fc1-6810-430e-aa99-60dd72f96f3b" (UID: "90305fc1-6810-430e-aa99-60dd72f96f3b"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:05:19 crc kubenswrapper[4821]: I0309 19:05:19.252657 4821 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/90305fc1-6810-430e-aa99-60dd72f96f3b-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Mar 09 19:05:19 crc kubenswrapper[4821]: I0309 19:05:19.252894 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90305fc1-6810-430e-aa99-60dd72f96f3b-logs\") on node \"crc\" DevicePath \"\""
Mar 09 19:05:19 crc kubenswrapper[4821]: I0309 19:05:19.252956 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90305fc1-6810-430e-aa99-60dd72f96f3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 19:05:19 crc kubenswrapper[4821]: I0309 19:05:19.253013 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjccs\" (UniqueName: \"kubernetes.io/projected/90305fc1-6810-430e-aa99-60dd72f96f3b-kube-api-access-zjccs\") on node \"crc\" DevicePath \"\""
Mar 09 19:05:19 crc kubenswrapper[4821]: I0309 19:05:19.253090 4821 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/90305fc1-6810-430e-aa99-60dd72f96f3b-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Mar 09 19:05:19 crc kubenswrapper[4821]: I0309 19:05:19.253155 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90305fc1-6810-430e-aa99-60dd72f96f3b-config-data\") on node \"crc\" DevicePath \"\""
Mar 09 19:05:19 crc kubenswrapper[4821]: I0309 19:05:19.363644 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:05:19 crc kubenswrapper[4821]: I0309 19:05:19.561221 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b220d47b-754f-44af-96d9-022e10d04eca" path="/var/lib/kubelet/pods/b220d47b-754f-44af-96d9-022e10d04eca/volumes"
Mar 09 19:05:19 crc kubenswrapper[4821]: I0309 19:05:19.561902 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d857c38e-6ba2-425a-ac01-2bd92e13589c" path="/var/lib/kubelet/pods/d857c38e-6ba2-425a-ac01-2bd92e13589c/volumes"
Mar 09 19:05:19 crc kubenswrapper[4821]: I0309 19:05:19.841740 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"90305fc1-6810-430e-aa99-60dd72f96f3b","Type":"ContainerDied","Data":"78424e0fa573c89fd7368d429f82c0cb53034b8418cd64f2534d5fb71d4cdb47"}
Mar 09 19:05:19 crc kubenswrapper[4821]: I0309 19:05:19.842021 4821 scope.go:117] "RemoveContainer" containerID="663196d5dc08bc94bb0d0f17c704c6eeda1b57a2347b9e74e26f9385f9b61580"
Mar 09 19:05:19 crc kubenswrapper[4821]: I0309 19:05:19.841771 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:05:19 crc kubenswrapper[4821]: I0309 19:05:19.869916 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Mar 09 19:05:19 crc kubenswrapper[4821]: I0309 19:05:19.878044 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Mar 09 19:05:20 crc kubenswrapper[4821]: I0309 19:05:20.279071 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherad92-account-delete-lxwjg"
Mar 09 19:05:20 crc kubenswrapper[4821]: I0309 19:05:20.470655 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4jkl\" (UniqueName: \"kubernetes.io/projected/89dc385d-5828-40c6-86a2-53e2b0f2ad9b-kube-api-access-f4jkl\") pod \"89dc385d-5828-40c6-86a2-53e2b0f2ad9b\" (UID: \"89dc385d-5828-40c6-86a2-53e2b0f2ad9b\") "
Mar 09 19:05:20 crc kubenswrapper[4821]: I0309 19:05:20.471077 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89dc385d-5828-40c6-86a2-53e2b0f2ad9b-operator-scripts\") pod \"89dc385d-5828-40c6-86a2-53e2b0f2ad9b\" (UID: \"89dc385d-5828-40c6-86a2-53e2b0f2ad9b\") "
Mar 09 19:05:20 crc kubenswrapper[4821]: I0309 19:05:20.471715 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89dc385d-5828-40c6-86a2-53e2b0f2ad9b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "89dc385d-5828-40c6-86a2-53e2b0f2ad9b" (UID: "89dc385d-5828-40c6-86a2-53e2b0f2ad9b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 19:05:20 crc kubenswrapper[4821]: I0309 19:05:20.475451 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89dc385d-5828-40c6-86a2-53e2b0f2ad9b-kube-api-access-f4jkl" (OuterVolumeSpecName: "kube-api-access-f4jkl") pod "89dc385d-5828-40c6-86a2-53e2b0f2ad9b" (UID: "89dc385d-5828-40c6-86a2-53e2b0f2ad9b"). InnerVolumeSpecName "kube-api-access-f4jkl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:05:20 crc kubenswrapper[4821]: I0309 19:05:20.573293 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4jkl\" (UniqueName: \"kubernetes.io/projected/89dc385d-5828-40c6-86a2-53e2b0f2ad9b-kube-api-access-f4jkl\") on node \"crc\" DevicePath \"\""
Mar 09 19:05:20 crc kubenswrapper[4821]: I0309 19:05:20.573609 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89dc385d-5828-40c6-86a2-53e2b0f2ad9b-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 09 19:05:20 crc kubenswrapper[4821]: I0309 19:05:20.852557 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherad92-account-delete-lxwjg"
Mar 09 19:05:20 crc kubenswrapper[4821]: I0309 19:05:20.852606 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherad92-account-delete-lxwjg" event={"ID":"89dc385d-5828-40c6-86a2-53e2b0f2ad9b","Type":"ContainerDied","Data":"053692b228347d5e3784a8b515e76cc417ed78184e77822ea121eca819e42108"}
Mar 09 19:05:20 crc kubenswrapper[4821]: I0309 19:05:20.852641 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="053692b228347d5e3784a8b515e76cc417ed78184e77822ea121eca819e42108"
Mar 09 19:05:20 crc kubenswrapper[4821]: I0309 19:05:20.857016 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7","Type":"ContainerStarted","Data":"ef8d87925f709ec8b83b7e24b188635ae02f423cf8e15e4e49c87625416e122e"}
Mar 09 19:05:20 crc kubenswrapper[4821]: I0309 19:05:20.857171 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="bd129ef6-f2bc-442d-a7e1-a2100ed32ac7" containerName="ceilometer-central-agent" containerID="cri-o://826c5921ee8b342f9c1afe35988a0f34709b382512348a582d709c704bb83721" gracePeriod=30
Mar 09 19:05:20 crc kubenswrapper[4821]: I0309 19:05:20.857477 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:05:20 crc kubenswrapper[4821]: I0309 19:05:20.857738 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="bd129ef6-f2bc-442d-a7e1-a2100ed32ac7" containerName="proxy-httpd" containerID="cri-o://ef8d87925f709ec8b83b7e24b188635ae02f423cf8e15e4e49c87625416e122e" gracePeriod=30
Mar 09 19:05:20 crc kubenswrapper[4821]: I0309 19:05:20.857795 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="bd129ef6-f2bc-442d-a7e1-a2100ed32ac7" containerName="sg-core" containerID="cri-o://b16e09759ff1d9de62f4b881f11a5397f3c96cbfdc027525ff898baaf6bf293b" gracePeriod=30
Mar 09 19:05:20 crc kubenswrapper[4821]: I0309 19:05:20.857831 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="bd129ef6-f2bc-442d-a7e1-a2100ed32ac7" containerName="ceilometer-notification-agent" containerID="cri-o://359aa44545fa2794f5c26d9b929ca62004c83b0bdfa951c4807380cd9e710604" gracePeriod=30
Mar 09 19:05:20 crc kubenswrapper[4821]: I0309 19:05:20.883622 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.6752104970000001 podStartE2EDuration="5.883602522s" podCreationTimestamp="2026-03-09 19:05:15 +0000 UTC" firstStartedPulling="2026-03-09 19:05:15.957036142 +0000 UTC m=+2453.118411998" lastFinishedPulling="2026-03-09 19:05:20.165428167 +0000 UTC m=+2457.326804023" observedRunningTime="2026-03-09 19:05:20.880458246 +0000 UTC m=+2458.041834102" watchObservedRunningTime="2026-03-09 19:05:20.883602522 +0000 UTC m=+2458.044978378"
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.563332 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90305fc1-6810-430e-aa99-60dd72f96f3b" path="/var/lib/kubelet/pods/90305fc1-6810-430e-aa99-60dd72f96f3b/volumes"
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.564375 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-z7kvb"]
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.565683 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-z7kvb"]
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.589222 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcherad92-account-delete-lxwjg"]
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.597736 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-ad92-account-create-update-hzsg6"]
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.604373 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcherad92-account-delete-lxwjg"]
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.611958 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-ad92-account-create-update-hzsg6"]
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.648140 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.719466 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-scripts\") pod \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\" (UID: \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\") "
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.719514 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-sg-core-conf-yaml\") pod \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\" (UID: \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\") "
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.719564 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-combined-ca-bundle\") pod \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\" (UID: \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\") "
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.719641 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-ceilometer-tls-certs\") pod \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\" (UID: \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\") "
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.719684 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-run-httpd\") pod \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\" (UID: \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\") "
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.719714 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-config-data\") pod \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\" (UID: \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\") "
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.719800 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb9w7\" (UniqueName: \"kubernetes.io/projected/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-kube-api-access-rb9w7\") pod \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\" (UID: \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\") "
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.719860 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-log-httpd\") pod \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\" (UID: \"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7\") "
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.720446 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bd129ef6-f2bc-442d-a7e1-a2100ed32ac7" (UID: "bd129ef6-f2bc-442d-a7e1-a2100ed32ac7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.720885 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bd129ef6-f2bc-442d-a7e1-a2100ed32ac7" (UID: "bd129ef6-f2bc-442d-a7e1-a2100ed32ac7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.724582 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-scripts" (OuterVolumeSpecName: "scripts") pod "bd129ef6-f2bc-442d-a7e1-a2100ed32ac7" (UID: "bd129ef6-f2bc-442d-a7e1-a2100ed32ac7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.724847 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-kube-api-access-rb9w7" (OuterVolumeSpecName: "kube-api-access-rb9w7") pod "bd129ef6-f2bc-442d-a7e1-a2100ed32ac7" (UID: "bd129ef6-f2bc-442d-a7e1-a2100ed32ac7"). InnerVolumeSpecName "kube-api-access-rb9w7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.744466 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bd129ef6-f2bc-442d-a7e1-a2100ed32ac7" (UID: "bd129ef6-f2bc-442d-a7e1-a2100ed32ac7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.786791 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "bd129ef6-f2bc-442d-a7e1-a2100ed32ac7" (UID: "bd129ef6-f2bc-442d-a7e1-a2100ed32ac7"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.803611 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd129ef6-f2bc-442d-a7e1-a2100ed32ac7" (UID: "bd129ef6-f2bc-442d-a7e1-a2100ed32ac7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.807513 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-config-data" (OuterVolumeSpecName: "config-data") pod "bd129ef6-f2bc-442d-a7e1-a2100ed32ac7" (UID: "bd129ef6-f2bc-442d-a7e1-a2100ed32ac7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.821670 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-scripts\") on node \"crc\" DevicePath \"\""
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.821866 4821 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.821926 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.821982 4821 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.822116 4821 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.822187 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-config-data\") on node \"crc\" DevicePath \"\""
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.822271 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rb9w7\" (UniqueName: \"kubernetes.io/projected/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-kube-api-access-rb9w7\") on node \"crc\" DevicePath \"\""
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.822355 4821 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.870282 4821 generic.go:334] "Generic (PLEG): container finished" podID="bd129ef6-f2bc-442d-a7e1-a2100ed32ac7" containerID="ef8d87925f709ec8b83b7e24b188635ae02f423cf8e15e4e49c87625416e122e" exitCode=0
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.870679 4821 generic.go:334] "Generic (PLEG): container finished" podID="bd129ef6-f2bc-442d-a7e1-a2100ed32ac7" containerID="b16e09759ff1d9de62f4b881f11a5397f3c96cbfdc027525ff898baaf6bf293b" exitCode=2
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.870688 4821 generic.go:334] "Generic (PLEG): container finished" podID="bd129ef6-f2bc-442d-a7e1-a2100ed32ac7" containerID="359aa44545fa2794f5c26d9b929ca62004c83b0bdfa951c4807380cd9e710604" exitCode=0
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.870694 4821 generic.go:334] "Generic (PLEG): container finished" podID="bd129ef6-f2bc-442d-a7e1-a2100ed32ac7" containerID="826c5921ee8b342f9c1afe35988a0f34709b382512348a582d709c704bb83721" exitCode=0
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.870347 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7","Type":"ContainerDied","Data":"ef8d87925f709ec8b83b7e24b188635ae02f423cf8e15e4e49c87625416e122e"}
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.870367 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.870830 4821 scope.go:117] "RemoveContainer" containerID="ef8d87925f709ec8b83b7e24b188635ae02f423cf8e15e4e49c87625416e122e"
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.870794 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7","Type":"ContainerDied","Data":"b16e09759ff1d9de62f4b881f11a5397f3c96cbfdc027525ff898baaf6bf293b"}
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.870931 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7","Type":"ContainerDied","Data":"359aa44545fa2794f5c26d9b929ca62004c83b0bdfa951c4807380cd9e710604"}
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.870942 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7","Type":"ContainerDied","Data":"826c5921ee8b342f9c1afe35988a0f34709b382512348a582d709c704bb83721"}
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.870953 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bd129ef6-f2bc-442d-a7e1-a2100ed32ac7","Type":"ContainerDied","Data":"6238d6497f706351e7ea83334b479749a6729a5959dd19f2fec9229d29055ee1"}
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.905948 4821 scope.go:117] "RemoveContainer" containerID="b16e09759ff1d9de62f4b881f11a5397f3c96cbfdc027525ff898baaf6bf293b"
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.916955 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.929563 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.940409 4821 scope.go:117] "RemoveContainer" containerID="359aa44545fa2794f5c26d9b929ca62004c83b0bdfa951c4807380cd9e710604"
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.950010 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:05:21 crc kubenswrapper[4821]: E0309 19:05:21.950402 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89dc385d-5828-40c6-86a2-53e2b0f2ad9b" containerName="mariadb-account-delete"
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.950423 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="89dc385d-5828-40c6-86a2-53e2b0f2ad9b" containerName="mariadb-account-delete"
Mar 09 19:05:21 crc kubenswrapper[4821]: E0309 19:05:21.950444 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d857c38e-6ba2-425a-ac01-2bd92e13589c" containerName="watcher-kuttl-api-log"
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.950452 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="d857c38e-6ba2-425a-ac01-2bd92e13589c" containerName="watcher-kuttl-api-log"
Mar 09 19:05:21 crc kubenswrapper[4821]: E0309 19:05:21.950474 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90305fc1-6810-430e-aa99-60dd72f96f3b" containerName="watcher-decision-engine"
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.950481 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="90305fc1-6810-430e-aa99-60dd72f96f3b" containerName="watcher-decision-engine"
Mar 09 19:05:21 crc kubenswrapper[4821]: E0309 19:05:21.950497 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd129ef6-f2bc-442d-a7e1-a2100ed32ac7" containerName="proxy-httpd"
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.950504 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd129ef6-f2bc-442d-a7e1-a2100ed32ac7" containerName="proxy-httpd"
Mar 09 19:05:21 crc kubenswrapper[4821]: E0309 19:05:21.950523 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d857c38e-6ba2-425a-ac01-2bd92e13589c" containerName="watcher-api"
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.950531 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="d857c38e-6ba2-425a-ac01-2bd92e13589c" containerName="watcher-api"
Mar 09 19:05:21 crc kubenswrapper[4821]: E0309 19:05:21.950542 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd129ef6-f2bc-442d-a7e1-a2100ed32ac7" containerName="ceilometer-notification-agent"
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.950549 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd129ef6-f2bc-442d-a7e1-a2100ed32ac7" containerName="ceilometer-notification-agent"
Mar 09 19:05:21 crc kubenswrapper[4821]: E0309 19:05:21.950559 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b220d47b-754f-44af-96d9-022e10d04eca" containerName="watcher-applier"
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.950566 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="b220d47b-754f-44af-96d9-022e10d04eca" containerName="watcher-applier"
Mar 09 19:05:21 crc kubenswrapper[4821]: E0309 19:05:21.950578 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd129ef6-f2bc-442d-a7e1-a2100ed32ac7" containerName="sg-core"
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.950585 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd129ef6-f2bc-442d-a7e1-a2100ed32ac7" containerName="sg-core"
Mar 09 19:05:21 crc kubenswrapper[4821]: E0309 19:05:21.950599 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd129ef6-f2bc-442d-a7e1-a2100ed32ac7" containerName="ceilometer-central-agent"
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.950606 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd129ef6-f2bc-442d-a7e1-a2100ed32ac7" containerName="ceilometer-central-agent"
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.950779 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd129ef6-f2bc-442d-a7e1-a2100ed32ac7" containerName="ceilometer-central-agent"
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.950816 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd129ef6-f2bc-442d-a7e1-a2100ed32ac7" containerName="ceilometer-notification-agent"
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.950834 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="89dc385d-5828-40c6-86a2-53e2b0f2ad9b" containerName="mariadb-account-delete"
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.950849 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="d857c38e-6ba2-425a-ac01-2bd92e13589c" containerName="watcher-api"
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.950858 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd129ef6-f2bc-442d-a7e1-a2100ed32ac7" containerName="sg-core"
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.950869 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="b220d47b-754f-44af-96d9-022e10d04eca" containerName="watcher-applier"
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.950876 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="d857c38e-6ba2-425a-ac01-2bd92e13589c" containerName="watcher-kuttl-api-log"
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.950891 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="90305fc1-6810-430e-aa99-60dd72f96f3b" containerName="watcher-decision-engine"
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.950905 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd129ef6-f2bc-442d-a7e1-a2100ed32ac7" containerName="proxy-httpd"
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.953945 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.960788 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc"
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.960901 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.960943 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.968662 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:05:21 crc kubenswrapper[4821]: I0309 19:05:21.978749 4821 scope.go:117] "RemoveContainer" containerID="826c5921ee8b342f9c1afe35988a0f34709b382512348a582d709c704bb83721"
Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.026209 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkwns\" (UniqueName: \"kubernetes.io/projected/612aac74-39c0-4091-ac1d-b47512ee620a-kube-api-access-xkwns\") pod \"ceilometer-0\" (UID: \"612aac74-39c0-4091-ac1d-b47512ee620a\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.026278 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/612aac74-39c0-4091-ac1d-b47512ee620a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"612aac74-39c0-4091-ac1d-b47512ee620a\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.026309 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/612aac74-39c0-4091-ac1d-b47512ee620a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"612aac74-39c0-4091-ac1d-b47512ee620a\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.026350 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/612aac74-39c0-4091-ac1d-b47512ee620a-log-httpd\") pod \"ceilometer-0\" (UID: \"612aac74-39c0-4091-ac1d-b47512ee620a\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.026364 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/612aac74-39c0-4091-ac1d-b47512ee620a-run-httpd\") pod \"ceilometer-0\" (UID: \"612aac74-39c0-4091-ac1d-b47512ee620a\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.026381 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/612aac74-39c0-4091-ac1d-b47512ee620a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"612aac74-39c0-4091-ac1d-b47512ee620a\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.026404 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/612aac74-39c0-4091-ac1d-b47512ee620a-scripts\") pod \"ceilometer-0\" (UID: \"612aac74-39c0-4091-ac1d-b47512ee620a\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.026421 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/612aac74-39c0-4091-ac1d-b47512ee620a-config-data\") pod \"ceilometer-0\" (UID: \"612aac74-39c0-4091-ac1d-b47512ee620a\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.027674 4821 scope.go:117] "RemoveContainer" containerID="ef8d87925f709ec8b83b7e24b188635ae02f423cf8e15e4e49c87625416e122e"
Mar 09 19:05:22 crc kubenswrapper[4821]: E0309 19:05:22.028143 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef8d87925f709ec8b83b7e24b188635ae02f423cf8e15e4e49c87625416e122e\": container with ID starting with ef8d87925f709ec8b83b7e24b188635ae02f423cf8e15e4e49c87625416e122e not found: ID does not exist" containerID="ef8d87925f709ec8b83b7e24b188635ae02f423cf8e15e4e49c87625416e122e"
Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.028174 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef8d87925f709ec8b83b7e24b188635ae02f423cf8e15e4e49c87625416e122e"} err="failed to get container status \"ef8d87925f709ec8b83b7e24b188635ae02f423cf8e15e4e49c87625416e122e\": rpc error: code = NotFound desc = could not find container \"ef8d87925f709ec8b83b7e24b188635ae02f423cf8e15e4e49c87625416e122e\": container with ID starting with ef8d87925f709ec8b83b7e24b188635ae02f423cf8e15e4e49c87625416e122e not found: ID does not exist"
Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.028197 4821 scope.go:117] "RemoveContainer" containerID="b16e09759ff1d9de62f4b881f11a5397f3c96cbfdc027525ff898baaf6bf293b"
Mar 09 19:05:22 crc kubenswrapper[4821]: E0309 19:05:22.028485 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b16e09759ff1d9de62f4b881f11a5397f3c96cbfdc027525ff898baaf6bf293b\": container with ID starting with b16e09759ff1d9de62f4b881f11a5397f3c96cbfdc027525ff898baaf6bf293b not found: ID does not exist" containerID="b16e09759ff1d9de62f4b881f11a5397f3c96cbfdc027525ff898baaf6bf293b" Mar 09
19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.028508 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b16e09759ff1d9de62f4b881f11a5397f3c96cbfdc027525ff898baaf6bf293b"} err="failed to get container status \"b16e09759ff1d9de62f4b881f11a5397f3c96cbfdc027525ff898baaf6bf293b\": rpc error: code = NotFound desc = could not find container \"b16e09759ff1d9de62f4b881f11a5397f3c96cbfdc027525ff898baaf6bf293b\": container with ID starting with b16e09759ff1d9de62f4b881f11a5397f3c96cbfdc027525ff898baaf6bf293b not found: ID does not exist" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.028524 4821 scope.go:117] "RemoveContainer" containerID="359aa44545fa2794f5c26d9b929ca62004c83b0bdfa951c4807380cd9e710604" Mar 09 19:05:22 crc kubenswrapper[4821]: E0309 19:05:22.028757 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"359aa44545fa2794f5c26d9b929ca62004c83b0bdfa951c4807380cd9e710604\": container with ID starting with 359aa44545fa2794f5c26d9b929ca62004c83b0bdfa951c4807380cd9e710604 not found: ID does not exist" containerID="359aa44545fa2794f5c26d9b929ca62004c83b0bdfa951c4807380cd9e710604" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.028839 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"359aa44545fa2794f5c26d9b929ca62004c83b0bdfa951c4807380cd9e710604"} err="failed to get container status \"359aa44545fa2794f5c26d9b929ca62004c83b0bdfa951c4807380cd9e710604\": rpc error: code = NotFound desc = could not find container \"359aa44545fa2794f5c26d9b929ca62004c83b0bdfa951c4807380cd9e710604\": container with ID starting with 359aa44545fa2794f5c26d9b929ca62004c83b0bdfa951c4807380cd9e710604 not found: ID does not exist" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.028911 4821 scope.go:117] "RemoveContainer" 
containerID="826c5921ee8b342f9c1afe35988a0f34709b382512348a582d709c704bb83721" Mar 09 19:05:22 crc kubenswrapper[4821]: E0309 19:05:22.029200 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"826c5921ee8b342f9c1afe35988a0f34709b382512348a582d709c704bb83721\": container with ID starting with 826c5921ee8b342f9c1afe35988a0f34709b382512348a582d709c704bb83721 not found: ID does not exist" containerID="826c5921ee8b342f9c1afe35988a0f34709b382512348a582d709c704bb83721" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.029224 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"826c5921ee8b342f9c1afe35988a0f34709b382512348a582d709c704bb83721"} err="failed to get container status \"826c5921ee8b342f9c1afe35988a0f34709b382512348a582d709c704bb83721\": rpc error: code = NotFound desc = could not find container \"826c5921ee8b342f9c1afe35988a0f34709b382512348a582d709c704bb83721\": container with ID starting with 826c5921ee8b342f9c1afe35988a0f34709b382512348a582d709c704bb83721 not found: ID does not exist" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.029241 4821 scope.go:117] "RemoveContainer" containerID="ef8d87925f709ec8b83b7e24b188635ae02f423cf8e15e4e49c87625416e122e" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.029466 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef8d87925f709ec8b83b7e24b188635ae02f423cf8e15e4e49c87625416e122e"} err="failed to get container status \"ef8d87925f709ec8b83b7e24b188635ae02f423cf8e15e4e49c87625416e122e\": rpc error: code = NotFound desc = could not find container \"ef8d87925f709ec8b83b7e24b188635ae02f423cf8e15e4e49c87625416e122e\": container with ID starting with ef8d87925f709ec8b83b7e24b188635ae02f423cf8e15e4e49c87625416e122e not found: ID does not exist" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.029545 4821 scope.go:117] 
"RemoveContainer" containerID="b16e09759ff1d9de62f4b881f11a5397f3c96cbfdc027525ff898baaf6bf293b" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.029861 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b16e09759ff1d9de62f4b881f11a5397f3c96cbfdc027525ff898baaf6bf293b"} err="failed to get container status \"b16e09759ff1d9de62f4b881f11a5397f3c96cbfdc027525ff898baaf6bf293b\": rpc error: code = NotFound desc = could not find container \"b16e09759ff1d9de62f4b881f11a5397f3c96cbfdc027525ff898baaf6bf293b\": container with ID starting with b16e09759ff1d9de62f4b881f11a5397f3c96cbfdc027525ff898baaf6bf293b not found: ID does not exist" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.029935 4821 scope.go:117] "RemoveContainer" containerID="359aa44545fa2794f5c26d9b929ca62004c83b0bdfa951c4807380cd9e710604" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.030459 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"359aa44545fa2794f5c26d9b929ca62004c83b0bdfa951c4807380cd9e710604"} err="failed to get container status \"359aa44545fa2794f5c26d9b929ca62004c83b0bdfa951c4807380cd9e710604\": rpc error: code = NotFound desc = could not find container \"359aa44545fa2794f5c26d9b929ca62004c83b0bdfa951c4807380cd9e710604\": container with ID starting with 359aa44545fa2794f5c26d9b929ca62004c83b0bdfa951c4807380cd9e710604 not found: ID does not exist" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.030481 4821 scope.go:117] "RemoveContainer" containerID="826c5921ee8b342f9c1afe35988a0f34709b382512348a582d709c704bb83721" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.030688 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"826c5921ee8b342f9c1afe35988a0f34709b382512348a582d709c704bb83721"} err="failed to get container status \"826c5921ee8b342f9c1afe35988a0f34709b382512348a582d709c704bb83721\": rpc error: code = 
NotFound desc = could not find container \"826c5921ee8b342f9c1afe35988a0f34709b382512348a582d709c704bb83721\": container with ID starting with 826c5921ee8b342f9c1afe35988a0f34709b382512348a582d709c704bb83721 not found: ID does not exist" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.030771 4821 scope.go:117] "RemoveContainer" containerID="ef8d87925f709ec8b83b7e24b188635ae02f423cf8e15e4e49c87625416e122e" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.031060 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef8d87925f709ec8b83b7e24b188635ae02f423cf8e15e4e49c87625416e122e"} err="failed to get container status \"ef8d87925f709ec8b83b7e24b188635ae02f423cf8e15e4e49c87625416e122e\": rpc error: code = NotFound desc = could not find container \"ef8d87925f709ec8b83b7e24b188635ae02f423cf8e15e4e49c87625416e122e\": container with ID starting with ef8d87925f709ec8b83b7e24b188635ae02f423cf8e15e4e49c87625416e122e not found: ID does not exist" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.031080 4821 scope.go:117] "RemoveContainer" containerID="b16e09759ff1d9de62f4b881f11a5397f3c96cbfdc027525ff898baaf6bf293b" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.031361 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b16e09759ff1d9de62f4b881f11a5397f3c96cbfdc027525ff898baaf6bf293b"} err="failed to get container status \"b16e09759ff1d9de62f4b881f11a5397f3c96cbfdc027525ff898baaf6bf293b\": rpc error: code = NotFound desc = could not find container \"b16e09759ff1d9de62f4b881f11a5397f3c96cbfdc027525ff898baaf6bf293b\": container with ID starting with b16e09759ff1d9de62f4b881f11a5397f3c96cbfdc027525ff898baaf6bf293b not found: ID does not exist" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.031436 4821 scope.go:117] "RemoveContainer" containerID="359aa44545fa2794f5c26d9b929ca62004c83b0bdfa951c4807380cd9e710604" Mar 09 19:05:22 crc 
kubenswrapper[4821]: I0309 19:05:22.031804 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"359aa44545fa2794f5c26d9b929ca62004c83b0bdfa951c4807380cd9e710604"} err="failed to get container status \"359aa44545fa2794f5c26d9b929ca62004c83b0bdfa951c4807380cd9e710604\": rpc error: code = NotFound desc = could not find container \"359aa44545fa2794f5c26d9b929ca62004c83b0bdfa951c4807380cd9e710604\": container with ID starting with 359aa44545fa2794f5c26d9b929ca62004c83b0bdfa951c4807380cd9e710604 not found: ID does not exist" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.031824 4821 scope.go:117] "RemoveContainer" containerID="826c5921ee8b342f9c1afe35988a0f34709b382512348a582d709c704bb83721" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.032042 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"826c5921ee8b342f9c1afe35988a0f34709b382512348a582d709c704bb83721"} err="failed to get container status \"826c5921ee8b342f9c1afe35988a0f34709b382512348a582d709c704bb83721\": rpc error: code = NotFound desc = could not find container \"826c5921ee8b342f9c1afe35988a0f34709b382512348a582d709c704bb83721\": container with ID starting with 826c5921ee8b342f9c1afe35988a0f34709b382512348a582d709c704bb83721 not found: ID does not exist" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.032061 4821 scope.go:117] "RemoveContainer" containerID="ef8d87925f709ec8b83b7e24b188635ae02f423cf8e15e4e49c87625416e122e" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.032238 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef8d87925f709ec8b83b7e24b188635ae02f423cf8e15e4e49c87625416e122e"} err="failed to get container status \"ef8d87925f709ec8b83b7e24b188635ae02f423cf8e15e4e49c87625416e122e\": rpc error: code = NotFound desc = could not find container \"ef8d87925f709ec8b83b7e24b188635ae02f423cf8e15e4e49c87625416e122e\": container 
with ID starting with ef8d87925f709ec8b83b7e24b188635ae02f423cf8e15e4e49c87625416e122e not found: ID does not exist" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.032254 4821 scope.go:117] "RemoveContainer" containerID="b16e09759ff1d9de62f4b881f11a5397f3c96cbfdc027525ff898baaf6bf293b" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.032488 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b16e09759ff1d9de62f4b881f11a5397f3c96cbfdc027525ff898baaf6bf293b"} err="failed to get container status \"b16e09759ff1d9de62f4b881f11a5397f3c96cbfdc027525ff898baaf6bf293b\": rpc error: code = NotFound desc = could not find container \"b16e09759ff1d9de62f4b881f11a5397f3c96cbfdc027525ff898baaf6bf293b\": container with ID starting with b16e09759ff1d9de62f4b881f11a5397f3c96cbfdc027525ff898baaf6bf293b not found: ID does not exist" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.032558 4821 scope.go:117] "RemoveContainer" containerID="359aa44545fa2794f5c26d9b929ca62004c83b0bdfa951c4807380cd9e710604" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.032803 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"359aa44545fa2794f5c26d9b929ca62004c83b0bdfa951c4807380cd9e710604"} err="failed to get container status \"359aa44545fa2794f5c26d9b929ca62004c83b0bdfa951c4807380cd9e710604\": rpc error: code = NotFound desc = could not find container \"359aa44545fa2794f5c26d9b929ca62004c83b0bdfa951c4807380cd9e710604\": container with ID starting with 359aa44545fa2794f5c26d9b929ca62004c83b0bdfa951c4807380cd9e710604 not found: ID does not exist" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.032823 4821 scope.go:117] "RemoveContainer" containerID="826c5921ee8b342f9c1afe35988a0f34709b382512348a582d709c704bb83721" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.033038 4821 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"826c5921ee8b342f9c1afe35988a0f34709b382512348a582d709c704bb83721"} err="failed to get container status \"826c5921ee8b342f9c1afe35988a0f34709b382512348a582d709c704bb83721\": rpc error: code = NotFound desc = could not find container \"826c5921ee8b342f9c1afe35988a0f34709b382512348a582d709c704bb83721\": container with ID starting with 826c5921ee8b342f9c1afe35988a0f34709b382512348a582d709c704bb83721 not found: ID does not exist" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.128215 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/612aac74-39c0-4091-ac1d-b47512ee620a-log-httpd\") pod \"ceilometer-0\" (UID: \"612aac74-39c0-4091-ac1d-b47512ee620a\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.128279 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/612aac74-39c0-4091-ac1d-b47512ee620a-run-httpd\") pod \"ceilometer-0\" (UID: \"612aac74-39c0-4091-ac1d-b47512ee620a\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.128340 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/612aac74-39c0-4091-ac1d-b47512ee620a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"612aac74-39c0-4091-ac1d-b47512ee620a\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.128389 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/612aac74-39c0-4091-ac1d-b47512ee620a-scripts\") pod \"ceilometer-0\" (UID: \"612aac74-39c0-4091-ac1d-b47512ee620a\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.128419 4821 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/612aac74-39c0-4091-ac1d-b47512ee620a-config-data\") pod \"ceilometer-0\" (UID: \"612aac74-39c0-4091-ac1d-b47512ee620a\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.128549 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkwns\" (UniqueName: \"kubernetes.io/projected/612aac74-39c0-4091-ac1d-b47512ee620a-kube-api-access-xkwns\") pod \"ceilometer-0\" (UID: \"612aac74-39c0-4091-ac1d-b47512ee620a\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.128574 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/612aac74-39c0-4091-ac1d-b47512ee620a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"612aac74-39c0-4091-ac1d-b47512ee620a\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.128605 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/612aac74-39c0-4091-ac1d-b47512ee620a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"612aac74-39c0-4091-ac1d-b47512ee620a\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.129365 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/612aac74-39c0-4091-ac1d-b47512ee620a-run-httpd\") pod \"ceilometer-0\" (UID: \"612aac74-39c0-4091-ac1d-b47512ee620a\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.129859 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/612aac74-39c0-4091-ac1d-b47512ee620a-log-httpd\") pod \"ceilometer-0\" (UID: \"612aac74-39c0-4091-ac1d-b47512ee620a\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.133086 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/612aac74-39c0-4091-ac1d-b47512ee620a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"612aac74-39c0-4091-ac1d-b47512ee620a\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.133409 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/612aac74-39c0-4091-ac1d-b47512ee620a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"612aac74-39c0-4091-ac1d-b47512ee620a\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.133832 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/612aac74-39c0-4091-ac1d-b47512ee620a-config-data\") pod \"ceilometer-0\" (UID: \"612aac74-39c0-4091-ac1d-b47512ee620a\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.134013 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/612aac74-39c0-4091-ac1d-b47512ee620a-scripts\") pod \"ceilometer-0\" (UID: \"612aac74-39c0-4091-ac1d-b47512ee620a\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.135240 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/612aac74-39c0-4091-ac1d-b47512ee620a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"612aac74-39c0-4091-ac1d-b47512ee620a\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:22 
crc kubenswrapper[4821]: I0309 19:05:22.152105 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkwns\" (UniqueName: \"kubernetes.io/projected/612aac74-39c0-4091-ac1d-b47512ee620a-kube-api-access-xkwns\") pod \"ceilometer-0\" (UID: \"612aac74-39c0-4091-ac1d-b47512ee620a\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.284055 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.531027 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-ftmx8"] Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.539760 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-ftmx8" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.544396 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-ftmx8"] Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.632304 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-d25c-account-create-update-nbp6f"] Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.633496 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-d25c-account-create-update-nbp6f" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.636361 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngklh\" (UniqueName: \"kubernetes.io/projected/bfa54031-dc56-46bc-b18d-63e0437e1ce3-kube-api-access-ngklh\") pod \"watcher-db-create-ftmx8\" (UID: \"bfa54031-dc56-46bc-b18d-63e0437e1ce3\") " pod="watcher-kuttl-default/watcher-db-create-ftmx8" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.636526 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfa54031-dc56-46bc-b18d-63e0437e1ce3-operator-scripts\") pod \"watcher-db-create-ftmx8\" (UID: \"bfa54031-dc56-46bc-b18d-63e0437e1ce3\") " pod="watcher-kuttl-default/watcher-db-create-ftmx8" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.638256 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.646864 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-d25c-account-create-update-nbp6f"] Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.738371 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4596b6a6-f94c-4ec2-825c-ff6acc262fe9-operator-scripts\") pod \"watcher-d25c-account-create-update-nbp6f\" (UID: \"4596b6a6-f94c-4ec2-825c-ff6acc262fe9\") " pod="watcher-kuttl-default/watcher-d25c-account-create-update-nbp6f" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.738468 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngklh\" (UniqueName: 
\"kubernetes.io/projected/bfa54031-dc56-46bc-b18d-63e0437e1ce3-kube-api-access-ngklh\") pod \"watcher-db-create-ftmx8\" (UID: \"bfa54031-dc56-46bc-b18d-63e0437e1ce3\") " pod="watcher-kuttl-default/watcher-db-create-ftmx8" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.738518 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdhqs\" (UniqueName: \"kubernetes.io/projected/4596b6a6-f94c-4ec2-825c-ff6acc262fe9-kube-api-access-zdhqs\") pod \"watcher-d25c-account-create-update-nbp6f\" (UID: \"4596b6a6-f94c-4ec2-825c-ff6acc262fe9\") " pod="watcher-kuttl-default/watcher-d25c-account-create-update-nbp6f" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.738552 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfa54031-dc56-46bc-b18d-63e0437e1ce3-operator-scripts\") pod \"watcher-db-create-ftmx8\" (UID: \"bfa54031-dc56-46bc-b18d-63e0437e1ce3\") " pod="watcher-kuttl-default/watcher-db-create-ftmx8" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.739227 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfa54031-dc56-46bc-b18d-63e0437e1ce3-operator-scripts\") pod \"watcher-db-create-ftmx8\" (UID: \"bfa54031-dc56-46bc-b18d-63e0437e1ce3\") " pod="watcher-kuttl-default/watcher-db-create-ftmx8" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.754503 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngklh\" (UniqueName: \"kubernetes.io/projected/bfa54031-dc56-46bc-b18d-63e0437e1ce3-kube-api-access-ngklh\") pod \"watcher-db-create-ftmx8\" (UID: \"bfa54031-dc56-46bc-b18d-63e0437e1ce3\") " pod="watcher-kuttl-default/watcher-db-create-ftmx8" Mar 09 19:05:22 crc kubenswrapper[4821]: W0309 19:05:22.774028 4821 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod612aac74_39c0_4091_ac1d_b47512ee620a.slice/crio-923e4e3de1f33056d77ee36f9a3a6beeb953a6320dfa8676586546f07df68646 WatchSource:0}: Error finding container 923e4e3de1f33056d77ee36f9a3a6beeb953a6320dfa8676586546f07df68646: Status 404 returned error can't find the container with id 923e4e3de1f33056d77ee36f9a3a6beeb953a6320dfa8676586546f07df68646 Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.780148 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.839832 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4596b6a6-f94c-4ec2-825c-ff6acc262fe9-operator-scripts\") pod \"watcher-d25c-account-create-update-nbp6f\" (UID: \"4596b6a6-f94c-4ec2-825c-ff6acc262fe9\") " pod="watcher-kuttl-default/watcher-d25c-account-create-update-nbp6f" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.840019 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdhqs\" (UniqueName: \"kubernetes.io/projected/4596b6a6-f94c-4ec2-825c-ff6acc262fe9-kube-api-access-zdhqs\") pod \"watcher-d25c-account-create-update-nbp6f\" (UID: \"4596b6a6-f94c-4ec2-825c-ff6acc262fe9\") " pod="watcher-kuttl-default/watcher-d25c-account-create-update-nbp6f" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.840754 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4596b6a6-f94c-4ec2-825c-ff6acc262fe9-operator-scripts\") pod \"watcher-d25c-account-create-update-nbp6f\" (UID: \"4596b6a6-f94c-4ec2-825c-ff6acc262fe9\") " pod="watcher-kuttl-default/watcher-d25c-account-create-update-nbp6f" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.857011 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zdhqs\" (UniqueName: \"kubernetes.io/projected/4596b6a6-f94c-4ec2-825c-ff6acc262fe9-kube-api-access-zdhqs\") pod \"watcher-d25c-account-create-update-nbp6f\" (UID: \"4596b6a6-f94c-4ec2-825c-ff6acc262fe9\") " pod="watcher-kuttl-default/watcher-d25c-account-create-update-nbp6f" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.859756 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-ftmx8" Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.882413 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"612aac74-39c0-4091-ac1d-b47512ee620a","Type":"ContainerStarted","Data":"923e4e3de1f33056d77ee36f9a3a6beeb953a6320dfa8676586546f07df68646"} Mar 09 19:05:22 crc kubenswrapper[4821]: I0309 19:05:22.982832 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-d25c-account-create-update-nbp6f" Mar 09 19:05:23 crc kubenswrapper[4821]: I0309 19:05:23.583029 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77b8efea-c318-4f79-8727-2955f5815eda" path="/var/lib/kubelet/pods/77b8efea-c318-4f79-8727-2955f5815eda/volumes" Mar 09 19:05:23 crc kubenswrapper[4821]: I0309 19:05:23.584296 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89dc385d-5828-40c6-86a2-53e2b0f2ad9b" path="/var/lib/kubelet/pods/89dc385d-5828-40c6-86a2-53e2b0f2ad9b/volumes" Mar 09 19:05:23 crc kubenswrapper[4821]: I0309 19:05:23.585034 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd129ef6-f2bc-442d-a7e1-a2100ed32ac7" path="/var/lib/kubelet/pods/bd129ef6-f2bc-442d-a7e1-a2100ed32ac7/volumes" Mar 09 19:05:23 crc kubenswrapper[4821]: I0309 19:05:23.586540 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e506d5ec-c6e6-4f26-ac22-9f545ca55ff9" 
path="/var/lib/kubelet/pods/e506d5ec-c6e6-4f26-ac22-9f545ca55ff9/volumes" Mar 09 19:05:23 crc kubenswrapper[4821]: I0309 19:05:23.904092 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-d25c-account-create-update-nbp6f"] Mar 09 19:05:23 crc kubenswrapper[4821]: I0309 19:05:23.913794 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Mar 09 19:05:24 crc kubenswrapper[4821]: I0309 19:05:24.100689 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-ftmx8"] Mar 09 19:05:24 crc kubenswrapper[4821]: I0309 19:05:24.902964 4821 generic.go:334] "Generic (PLEG): container finished" podID="4596b6a6-f94c-4ec2-825c-ff6acc262fe9" containerID="7c516a1e24ff07a5e59aaeb17ab65885c5da2a3e3e8a914e3953ec8141440a6b" exitCode=0 Mar 09 19:05:24 crc kubenswrapper[4821]: I0309 19:05:24.903171 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-d25c-account-create-update-nbp6f" event={"ID":"4596b6a6-f94c-4ec2-825c-ff6acc262fe9","Type":"ContainerDied","Data":"7c516a1e24ff07a5e59aaeb17ab65885c5da2a3e3e8a914e3953ec8141440a6b"} Mar 09 19:05:24 crc kubenswrapper[4821]: I0309 19:05:24.903292 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-d25c-account-create-update-nbp6f" event={"ID":"4596b6a6-f94c-4ec2-825c-ff6acc262fe9","Type":"ContainerStarted","Data":"9f7dda1397a5f23bd1c6ec270b5bef6bed17ae54468530ca8b513b3944280bc2"} Mar 09 19:05:24 crc kubenswrapper[4821]: I0309 19:05:24.905015 4821 generic.go:334] "Generic (PLEG): container finished" podID="bfa54031-dc56-46bc-b18d-63e0437e1ce3" containerID="3a87811a895d48f6a5afffc6c42efd61c1b8deb985bcfd5286f2da1252a941a9" exitCode=0 Mar 09 19:05:24 crc kubenswrapper[4821]: I0309 19:05:24.905077 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-ftmx8" 
event={"ID":"bfa54031-dc56-46bc-b18d-63e0437e1ce3","Type":"ContainerDied","Data":"3a87811a895d48f6a5afffc6c42efd61c1b8deb985bcfd5286f2da1252a941a9"} Mar 09 19:05:24 crc kubenswrapper[4821]: I0309 19:05:24.905099 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-ftmx8" event={"ID":"bfa54031-dc56-46bc-b18d-63e0437e1ce3","Type":"ContainerStarted","Data":"eef2c2fb99851444a847154b751e9142050b73a86c1267b89ebe02d7a640567a"} Mar 09 19:05:24 crc kubenswrapper[4821]: I0309 19:05:24.907969 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"612aac74-39c0-4091-ac1d-b47512ee620a","Type":"ContainerStarted","Data":"1389caabd23a80afc9a90b603aac8713f7fa0d7e8ce18e220fcaa1d0fb69ed98"} Mar 09 19:05:24 crc kubenswrapper[4821]: I0309 19:05:24.908016 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"612aac74-39c0-4091-ac1d-b47512ee620a","Type":"ContainerStarted","Data":"e6e9d1b118a2c2531ee1c6b4365445e738db844ad169be40014f539f08a827e9"} Mar 09 19:05:25 crc kubenswrapper[4821]: I0309 19:05:25.920748 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"612aac74-39c0-4091-ac1d-b47512ee620a","Type":"ContainerStarted","Data":"d8adf2e9b5e3a1901323c18b4a7b6c1bbbe55833072f80d7b506eaf64fc778bc"} Mar 09 19:05:26 crc kubenswrapper[4821]: I0309 19:05:26.456591 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-d25c-account-create-update-nbp6f" Mar 09 19:05:26 crc kubenswrapper[4821]: I0309 19:05:26.464629 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-ftmx8" Mar 09 19:05:26 crc kubenswrapper[4821]: I0309 19:05:26.538076 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4596b6a6-f94c-4ec2-825c-ff6acc262fe9-operator-scripts\") pod \"4596b6a6-f94c-4ec2-825c-ff6acc262fe9\" (UID: \"4596b6a6-f94c-4ec2-825c-ff6acc262fe9\") " Mar 09 19:05:26 crc kubenswrapper[4821]: I0309 19:05:26.538129 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfa54031-dc56-46bc-b18d-63e0437e1ce3-operator-scripts\") pod \"bfa54031-dc56-46bc-b18d-63e0437e1ce3\" (UID: \"bfa54031-dc56-46bc-b18d-63e0437e1ce3\") " Mar 09 19:05:26 crc kubenswrapper[4821]: I0309 19:05:26.538177 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngklh\" (UniqueName: \"kubernetes.io/projected/bfa54031-dc56-46bc-b18d-63e0437e1ce3-kube-api-access-ngklh\") pod \"bfa54031-dc56-46bc-b18d-63e0437e1ce3\" (UID: \"bfa54031-dc56-46bc-b18d-63e0437e1ce3\") " Mar 09 19:05:26 crc kubenswrapper[4821]: I0309 19:05:26.538268 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdhqs\" (UniqueName: \"kubernetes.io/projected/4596b6a6-f94c-4ec2-825c-ff6acc262fe9-kube-api-access-zdhqs\") pod \"4596b6a6-f94c-4ec2-825c-ff6acc262fe9\" (UID: \"4596b6a6-f94c-4ec2-825c-ff6acc262fe9\") " Mar 09 19:05:26 crc kubenswrapper[4821]: I0309 19:05:26.538595 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4596b6a6-f94c-4ec2-825c-ff6acc262fe9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4596b6a6-f94c-4ec2-825c-ff6acc262fe9" (UID: "4596b6a6-f94c-4ec2-825c-ff6acc262fe9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 19:05:26 crc kubenswrapper[4821]: I0309 19:05:26.538962 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfa54031-dc56-46bc-b18d-63e0437e1ce3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bfa54031-dc56-46bc-b18d-63e0437e1ce3" (UID: "bfa54031-dc56-46bc-b18d-63e0437e1ce3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 19:05:26 crc kubenswrapper[4821]: I0309 19:05:26.543529 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4596b6a6-f94c-4ec2-825c-ff6acc262fe9-kube-api-access-zdhqs" (OuterVolumeSpecName: "kube-api-access-zdhqs") pod "4596b6a6-f94c-4ec2-825c-ff6acc262fe9" (UID: "4596b6a6-f94c-4ec2-825c-ff6acc262fe9"). InnerVolumeSpecName "kube-api-access-zdhqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:05:26 crc kubenswrapper[4821]: I0309 19:05:26.546693 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfa54031-dc56-46bc-b18d-63e0437e1ce3-kube-api-access-ngklh" (OuterVolumeSpecName: "kube-api-access-ngklh") pod "bfa54031-dc56-46bc-b18d-63e0437e1ce3" (UID: "bfa54031-dc56-46bc-b18d-63e0437e1ce3"). InnerVolumeSpecName "kube-api-access-ngklh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:05:26 crc kubenswrapper[4821]: I0309 19:05:26.640335 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngklh\" (UniqueName: \"kubernetes.io/projected/bfa54031-dc56-46bc-b18d-63e0437e1ce3-kube-api-access-ngklh\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:26 crc kubenswrapper[4821]: I0309 19:05:26.640376 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdhqs\" (UniqueName: \"kubernetes.io/projected/4596b6a6-f94c-4ec2-825c-ff6acc262fe9-kube-api-access-zdhqs\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:26 crc kubenswrapper[4821]: I0309 19:05:26.640390 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4596b6a6-f94c-4ec2-825c-ff6acc262fe9-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:26 crc kubenswrapper[4821]: I0309 19:05:26.640399 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfa54031-dc56-46bc-b18d-63e0437e1ce3-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:26 crc kubenswrapper[4821]: I0309 19:05:26.932791 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-d25c-account-create-update-nbp6f" event={"ID":"4596b6a6-f94c-4ec2-825c-ff6acc262fe9","Type":"ContainerDied","Data":"9f7dda1397a5f23bd1c6ec270b5bef6bed17ae54468530ca8b513b3944280bc2"} Mar 09 19:05:26 crc kubenswrapper[4821]: I0309 19:05:26.932848 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f7dda1397a5f23bd1c6ec270b5bef6bed17ae54468530ca8b513b3944280bc2" Mar 09 19:05:26 crc kubenswrapper[4821]: I0309 19:05:26.932921 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-d25c-account-create-update-nbp6f" Mar 09 19:05:26 crc kubenswrapper[4821]: I0309 19:05:26.937133 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-ftmx8" event={"ID":"bfa54031-dc56-46bc-b18d-63e0437e1ce3","Type":"ContainerDied","Data":"eef2c2fb99851444a847154b751e9142050b73a86c1267b89ebe02d7a640567a"} Mar 09 19:05:26 crc kubenswrapper[4821]: I0309 19:05:26.937176 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eef2c2fb99851444a847154b751e9142050b73a86c1267b89ebe02d7a640567a" Mar 09 19:05:26 crc kubenswrapper[4821]: I0309 19:05:26.937238 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-ftmx8" Mar 09 19:05:27 crc kubenswrapper[4821]: I0309 19:05:27.950339 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"612aac74-39c0-4091-ac1d-b47512ee620a","Type":"ContainerStarted","Data":"c9701527a6201ef89259001fa4110a6e7ef7c0cdf95b490f402af39b49ebbfa1"} Mar 09 19:05:27 crc kubenswrapper[4821]: I0309 19:05:27.951703 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:27 crc kubenswrapper[4821]: I0309 19:05:27.981583 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-44kqp"] Mar 09 19:05:27 crc kubenswrapper[4821]: E0309 19:05:27.982098 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfa54031-dc56-46bc-b18d-63e0437e1ce3" containerName="mariadb-database-create" Mar 09 19:05:27 crc kubenswrapper[4821]: I0309 19:05:27.982122 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfa54031-dc56-46bc-b18d-63e0437e1ce3" containerName="mariadb-database-create" Mar 09 19:05:27 crc kubenswrapper[4821]: E0309 19:05:27.982139 4821 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="4596b6a6-f94c-4ec2-825c-ff6acc262fe9" containerName="mariadb-account-create-update" Mar 09 19:05:27 crc kubenswrapper[4821]: I0309 19:05:27.982147 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="4596b6a6-f94c-4ec2-825c-ff6acc262fe9" containerName="mariadb-account-create-update" Mar 09 19:05:27 crc kubenswrapper[4821]: I0309 19:05:27.982330 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="4596b6a6-f94c-4ec2-825c-ff6acc262fe9" containerName="mariadb-account-create-update" Mar 09 19:05:27 crc kubenswrapper[4821]: I0309 19:05:27.982354 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfa54031-dc56-46bc-b18d-63e0437e1ce3" containerName="mariadb-database-create" Mar 09 19:05:27 crc kubenswrapper[4821]: I0309 19:05:27.982969 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-44kqp" Mar 09 19:05:27 crc kubenswrapper[4821]: I0309 19:05:27.984944 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Mar 09 19:05:27 crc kubenswrapper[4821]: I0309 19:05:27.990106 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-hh7g9" Mar 09 19:05:27 crc kubenswrapper[4821]: I0309 19:05:27.991557 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-44kqp"] Mar 09 19:05:28 crc kubenswrapper[4821]: I0309 19:05:28.008859 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.882227226 podStartE2EDuration="7.008832293s" podCreationTimestamp="2026-03-09 19:05:21 +0000 UTC" firstStartedPulling="2026-03-09 19:05:22.776080096 +0000 UTC m=+2459.937455952" lastFinishedPulling="2026-03-09 19:05:26.902685133 +0000 UTC m=+2464.064061019" observedRunningTime="2026-03-09 
19:05:27.990404444 +0000 UTC m=+2465.151780300" watchObservedRunningTime="2026-03-09 19:05:28.008832293 +0000 UTC m=+2465.170208159" Mar 09 19:05:28 crc kubenswrapper[4821]: I0309 19:05:28.059365 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6e960af1-5e85-4364-a986-be14476acab4-db-sync-config-data\") pod \"watcher-kuttl-db-sync-44kqp\" (UID: \"6e960af1-5e85-4364-a986-be14476acab4\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-44kqp" Mar 09 19:05:28 crc kubenswrapper[4821]: I0309 19:05:28.059552 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rdrc\" (UniqueName: \"kubernetes.io/projected/6e960af1-5e85-4364-a986-be14476acab4-kube-api-access-5rdrc\") pod \"watcher-kuttl-db-sync-44kqp\" (UID: \"6e960af1-5e85-4364-a986-be14476acab4\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-44kqp" Mar 09 19:05:28 crc kubenswrapper[4821]: I0309 19:05:28.059671 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e960af1-5e85-4364-a986-be14476acab4-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-44kqp\" (UID: \"6e960af1-5e85-4364-a986-be14476acab4\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-44kqp" Mar 09 19:05:28 crc kubenswrapper[4821]: I0309 19:05:28.059732 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e960af1-5e85-4364-a986-be14476acab4-config-data\") pod \"watcher-kuttl-db-sync-44kqp\" (UID: \"6e960af1-5e85-4364-a986-be14476acab4\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-44kqp" Mar 09 19:05:28 crc kubenswrapper[4821]: I0309 19:05:28.161081 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rdrc\" 
(UniqueName: \"kubernetes.io/projected/6e960af1-5e85-4364-a986-be14476acab4-kube-api-access-5rdrc\") pod \"watcher-kuttl-db-sync-44kqp\" (UID: \"6e960af1-5e85-4364-a986-be14476acab4\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-44kqp" Mar 09 19:05:28 crc kubenswrapper[4821]: I0309 19:05:28.161153 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e960af1-5e85-4364-a986-be14476acab4-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-44kqp\" (UID: \"6e960af1-5e85-4364-a986-be14476acab4\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-44kqp" Mar 09 19:05:28 crc kubenswrapper[4821]: I0309 19:05:28.161188 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e960af1-5e85-4364-a986-be14476acab4-config-data\") pod \"watcher-kuttl-db-sync-44kqp\" (UID: \"6e960af1-5e85-4364-a986-be14476acab4\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-44kqp" Mar 09 19:05:28 crc kubenswrapper[4821]: I0309 19:05:28.161241 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6e960af1-5e85-4364-a986-be14476acab4-db-sync-config-data\") pod \"watcher-kuttl-db-sync-44kqp\" (UID: \"6e960af1-5e85-4364-a986-be14476acab4\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-44kqp" Mar 09 19:05:28 crc kubenswrapper[4821]: I0309 19:05:28.165840 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6e960af1-5e85-4364-a986-be14476acab4-db-sync-config-data\") pod \"watcher-kuttl-db-sync-44kqp\" (UID: \"6e960af1-5e85-4364-a986-be14476acab4\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-44kqp" Mar 09 19:05:28 crc kubenswrapper[4821]: I0309 19:05:28.166515 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/6e960af1-5e85-4364-a986-be14476acab4-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-44kqp\" (UID: \"6e960af1-5e85-4364-a986-be14476acab4\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-44kqp" Mar 09 19:05:28 crc kubenswrapper[4821]: I0309 19:05:28.166913 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e960af1-5e85-4364-a986-be14476acab4-config-data\") pod \"watcher-kuttl-db-sync-44kqp\" (UID: \"6e960af1-5e85-4364-a986-be14476acab4\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-44kqp" Mar 09 19:05:28 crc kubenswrapper[4821]: I0309 19:05:28.183985 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rdrc\" (UniqueName: \"kubernetes.io/projected/6e960af1-5e85-4364-a986-be14476acab4-kube-api-access-5rdrc\") pod \"watcher-kuttl-db-sync-44kqp\" (UID: \"6e960af1-5e85-4364-a986-be14476acab4\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-44kqp" Mar 09 19:05:28 crc kubenswrapper[4821]: I0309 19:05:28.302757 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-44kqp" Mar 09 19:05:28 crc kubenswrapper[4821]: I0309 19:05:28.806369 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-44kqp"] Mar 09 19:05:28 crc kubenswrapper[4821]: I0309 19:05:28.959109 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-44kqp" event={"ID":"6e960af1-5e85-4364-a986-be14476acab4","Type":"ContainerStarted","Data":"77f482a1369817384b19aaac8e73524c10afbe946551f02e405a52cc7251f71a"} Mar 09 19:05:29 crc kubenswrapper[4821]: I0309 19:05:29.913362 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 09 19:05:29 crc kubenswrapper[4821]: I0309 19:05:29.913725 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 09 19:05:29 crc kubenswrapper[4821]: I0309 19:05:29.985482 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-44kqp" event={"ID":"6e960af1-5e85-4364-a986-be14476acab4","Type":"ContainerStarted","Data":"f43d67e7257908b5a02f5391464da6fe1a5c097581d0630c883e2097135fbcb7"} Mar 09 19:05:32 crc kubenswrapper[4821]: I0309 19:05:32.000873 4821 generic.go:334] "Generic (PLEG): container finished" podID="6e960af1-5e85-4364-a986-be14476acab4" containerID="f43d67e7257908b5a02f5391464da6fe1a5c097581d0630c883e2097135fbcb7" exitCode=0 Mar 09 19:05:32 crc kubenswrapper[4821]: I0309 19:05:32.000971 4821 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-44kqp" event={"ID":"6e960af1-5e85-4364-a986-be14476acab4","Type":"ContainerDied","Data":"f43d67e7257908b5a02f5391464da6fe1a5c097581d0630c883e2097135fbcb7"} Mar 09 19:05:33 crc kubenswrapper[4821]: I0309 19:05:33.380649 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-44kqp" Mar 09 19:05:33 crc kubenswrapper[4821]: I0309 19:05:33.545371 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e960af1-5e85-4364-a986-be14476acab4-combined-ca-bundle\") pod \"6e960af1-5e85-4364-a986-be14476acab4\" (UID: \"6e960af1-5e85-4364-a986-be14476acab4\") " Mar 09 19:05:33 crc kubenswrapper[4821]: I0309 19:05:33.545475 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rdrc\" (UniqueName: \"kubernetes.io/projected/6e960af1-5e85-4364-a986-be14476acab4-kube-api-access-5rdrc\") pod \"6e960af1-5e85-4364-a986-be14476acab4\" (UID: \"6e960af1-5e85-4364-a986-be14476acab4\") " Mar 09 19:05:33 crc kubenswrapper[4821]: I0309 19:05:33.545626 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6e960af1-5e85-4364-a986-be14476acab4-db-sync-config-data\") pod \"6e960af1-5e85-4364-a986-be14476acab4\" (UID: \"6e960af1-5e85-4364-a986-be14476acab4\") " Mar 09 19:05:33 crc kubenswrapper[4821]: I0309 19:05:33.545675 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e960af1-5e85-4364-a986-be14476acab4-config-data\") pod \"6e960af1-5e85-4364-a986-be14476acab4\" (UID: \"6e960af1-5e85-4364-a986-be14476acab4\") " Mar 09 19:05:33 crc kubenswrapper[4821]: I0309 19:05:33.558538 4821 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/6e960af1-5e85-4364-a986-be14476acab4-kube-api-access-5rdrc" (OuterVolumeSpecName: "kube-api-access-5rdrc") pod "6e960af1-5e85-4364-a986-be14476acab4" (UID: "6e960af1-5e85-4364-a986-be14476acab4"). InnerVolumeSpecName "kube-api-access-5rdrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:05:33 crc kubenswrapper[4821]: I0309 19:05:33.560870 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e960af1-5e85-4364-a986-be14476acab4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6e960af1-5e85-4364-a986-be14476acab4" (UID: "6e960af1-5e85-4364-a986-be14476acab4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:05:33 crc kubenswrapper[4821]: I0309 19:05:33.587475 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e960af1-5e85-4364-a986-be14476acab4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e960af1-5e85-4364-a986-be14476acab4" (UID: "6e960af1-5e85-4364-a986-be14476acab4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:05:33 crc kubenswrapper[4821]: I0309 19:05:33.629338 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e960af1-5e85-4364-a986-be14476acab4-config-data" (OuterVolumeSpecName: "config-data") pod "6e960af1-5e85-4364-a986-be14476acab4" (UID: "6e960af1-5e85-4364-a986-be14476acab4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:05:33 crc kubenswrapper[4821]: I0309 19:05:33.647891 4821 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6e960af1-5e85-4364-a986-be14476acab4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:33 crc kubenswrapper[4821]: I0309 19:05:33.647931 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e960af1-5e85-4364-a986-be14476acab4-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:33 crc kubenswrapper[4821]: I0309 19:05:33.647939 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e960af1-5e85-4364-a986-be14476acab4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:33 crc kubenswrapper[4821]: I0309 19:05:33.647948 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rdrc\" (UniqueName: \"kubernetes.io/projected/6e960af1-5e85-4364-a986-be14476acab4-kube-api-access-5rdrc\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.030645 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-44kqp" event={"ID":"6e960af1-5e85-4364-a986-be14476acab4","Type":"ContainerDied","Data":"77f482a1369817384b19aaac8e73524c10afbe946551f02e405a52cc7251f71a"} Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.030724 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77f482a1369817384b19aaac8e73524c10afbe946551f02e405a52cc7251f71a" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.030973 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-44kqp" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.649490 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:05:34 crc kubenswrapper[4821]: E0309 19:05:34.649826 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e960af1-5e85-4364-a986-be14476acab4" containerName="watcher-kuttl-db-sync" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.649839 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e960af1-5e85-4364-a986-be14476acab4" containerName="watcher-kuttl-db-sync" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.650013 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e960af1-5e85-4364-a986-be14476acab4" containerName="watcher-kuttl-db-sync" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.650581 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.662115 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.663370 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:34 crc kubenswrapper[4821]: W0309 19:05:34.665459 4821 reflector.go:561] object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data": failed to list *v1.Secret: secrets "watcher-kuttl-api-config-data" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "watcher-kuttl-default": no relationship found between node 'crc' and this object Mar 09 19:05:34 crc kubenswrapper[4821]: E0309 19:05:34.665523 4821 reflector.go:158] "Unhandled Error" err="object-\"watcher-kuttl-default\"/\"watcher-kuttl-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"watcher-kuttl-api-config-data\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"watcher-kuttl-default\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.665845 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-hh7g9" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.665862 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.678190 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.690598 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.756141 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.757179 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.759106 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.772884 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.778649 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/258f22b8-2e89-409e-bd55-5e94f0c5d861-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"258f22b8-2e89-409e-bd55-5e94f0c5d861\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.778693 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77e4d716-413c-4631-a5ea-e459707d78a3-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"77e4d716-413c-4631-a5ea-e459707d78a3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.778719 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77e4d716-413c-4631-a5ea-e459707d78a3-logs\") pod \"watcher-kuttl-api-0\" (UID: \"77e4d716-413c-4631-a5ea-e459707d78a3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.778735 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/77e4d716-413c-4631-a5ea-e459707d78a3-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: 
\"77e4d716-413c-4631-a5ea-e459707d78a3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.778765 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/258f22b8-2e89-409e-bd55-5e94f0c5d861-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"258f22b8-2e89-409e-bd55-5e94f0c5d861\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.778792 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq24w\" (UniqueName: \"kubernetes.io/projected/94909197-ed08-41db-ac95-ad9bfcd5df75-kube-api-access-jq24w\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"94909197-ed08-41db-ac95-ad9bfcd5df75\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.778868 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/94909197-ed08-41db-ac95-ad9bfcd5df75-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"94909197-ed08-41db-ac95-ad9bfcd5df75\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.778913 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/77e4d716-413c-4631-a5ea-e459707d78a3-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"77e4d716-413c-4631-a5ea-e459707d78a3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.778936 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/94909197-ed08-41db-ac95-ad9bfcd5df75-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"94909197-ed08-41db-ac95-ad9bfcd5df75\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.778974 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/258f22b8-2e89-409e-bd55-5e94f0c5d861-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"258f22b8-2e89-409e-bd55-5e94f0c5d861\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.779010 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2jjt\" (UniqueName: \"kubernetes.io/projected/258f22b8-2e89-409e-bd55-5e94f0c5d861-kube-api-access-t2jjt\") pod \"watcher-kuttl-applier-0\" (UID: \"258f22b8-2e89-409e-bd55-5e94f0c5d861\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.779034 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/94909197-ed08-41db-ac95-ad9bfcd5df75-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"94909197-ed08-41db-ac95-ad9bfcd5df75\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.779071 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94909197-ed08-41db-ac95-ad9bfcd5df75-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"94909197-ed08-41db-ac95-ad9bfcd5df75\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.779093 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77e4d716-413c-4631-a5ea-e459707d78a3-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"77e4d716-413c-4631-a5ea-e459707d78a3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.779160 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94909197-ed08-41db-ac95-ad9bfcd5df75-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"94909197-ed08-41db-ac95-ad9bfcd5df75\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.779258 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/258f22b8-2e89-409e-bd55-5e94f0c5d861-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"258f22b8-2e89-409e-bd55-5e94f0c5d861\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.779312 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j46pd\" (UniqueName: \"kubernetes.io/projected/77e4d716-413c-4631-a5ea-e459707d78a3-kube-api-access-j46pd\") pod \"watcher-kuttl-api-0\" (UID: \"77e4d716-413c-4631-a5ea-e459707d78a3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.880490 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/94909197-ed08-41db-ac95-ad9bfcd5df75-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"94909197-ed08-41db-ac95-ad9bfcd5df75\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 
19:05:34.880530 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/258f22b8-2e89-409e-bd55-5e94f0c5d861-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"258f22b8-2e89-409e-bd55-5e94f0c5d861\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.880551 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2jjt\" (UniqueName: \"kubernetes.io/projected/258f22b8-2e89-409e-bd55-5e94f0c5d861-kube-api-access-t2jjt\") pod \"watcher-kuttl-applier-0\" (UID: \"258f22b8-2e89-409e-bd55-5e94f0c5d861\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.880577 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94909197-ed08-41db-ac95-ad9bfcd5df75-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"94909197-ed08-41db-ac95-ad9bfcd5df75\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.880597 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77e4d716-413c-4631-a5ea-e459707d78a3-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"77e4d716-413c-4631-a5ea-e459707d78a3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.880615 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94909197-ed08-41db-ac95-ad9bfcd5df75-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"94909197-ed08-41db-ac95-ad9bfcd5df75\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.880754 4821 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/258f22b8-2e89-409e-bd55-5e94f0c5d861-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"258f22b8-2e89-409e-bd55-5e94f0c5d861\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.881163 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j46pd\" (UniqueName: \"kubernetes.io/projected/77e4d716-413c-4631-a5ea-e459707d78a3-kube-api-access-j46pd\") pod \"watcher-kuttl-api-0\" (UID: \"77e4d716-413c-4631-a5ea-e459707d78a3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.881193 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94909197-ed08-41db-ac95-ad9bfcd5df75-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"94909197-ed08-41db-ac95-ad9bfcd5df75\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.881206 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/258f22b8-2e89-409e-bd55-5e94f0c5d861-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"258f22b8-2e89-409e-bd55-5e94f0c5d861\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.881285 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/258f22b8-2e89-409e-bd55-5e94f0c5d861-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"258f22b8-2e89-409e-bd55-5e94f0c5d861\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.881360 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77e4d716-413c-4631-a5ea-e459707d78a3-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"77e4d716-413c-4631-a5ea-e459707d78a3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.881565 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77e4d716-413c-4631-a5ea-e459707d78a3-logs\") pod \"watcher-kuttl-api-0\" (UID: \"77e4d716-413c-4631-a5ea-e459707d78a3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.881590 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/77e4d716-413c-4631-a5ea-e459707d78a3-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"77e4d716-413c-4631-a5ea-e459707d78a3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.881612 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/258f22b8-2e89-409e-bd55-5e94f0c5d861-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"258f22b8-2e89-409e-bd55-5e94f0c5d861\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.881634 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq24w\" (UniqueName: \"kubernetes.io/projected/94909197-ed08-41db-ac95-ad9bfcd5df75-kube-api-access-jq24w\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"94909197-ed08-41db-ac95-ad9bfcd5df75\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.881675 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/94909197-ed08-41db-ac95-ad9bfcd5df75-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"94909197-ed08-41db-ac95-ad9bfcd5df75\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.881699 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/77e4d716-413c-4631-a5ea-e459707d78a3-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"77e4d716-413c-4631-a5ea-e459707d78a3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.881715 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94909197-ed08-41db-ac95-ad9bfcd5df75-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"94909197-ed08-41db-ac95-ad9bfcd5df75\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.882574 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77e4d716-413c-4631-a5ea-e459707d78a3-logs\") pod \"watcher-kuttl-api-0\" (UID: \"77e4d716-413c-4631-a5ea-e459707d78a3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.884435 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/258f22b8-2e89-409e-bd55-5e94f0c5d861-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"258f22b8-2e89-409e-bd55-5e94f0c5d861\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.884607 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/258f22b8-2e89-409e-bd55-5e94f0c5d861-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"258f22b8-2e89-409e-bd55-5e94f0c5d861\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.885146 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/258f22b8-2e89-409e-bd55-5e94f0c5d861-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"258f22b8-2e89-409e-bd55-5e94f0c5d861\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.885185 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/94909197-ed08-41db-ac95-ad9bfcd5df75-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"94909197-ed08-41db-ac95-ad9bfcd5df75\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.885499 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94909197-ed08-41db-ac95-ad9bfcd5df75-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"94909197-ed08-41db-ac95-ad9bfcd5df75\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.885954 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/94909197-ed08-41db-ac95-ad9bfcd5df75-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"94909197-ed08-41db-ac95-ad9bfcd5df75\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.889101 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/77e4d716-413c-4631-a5ea-e459707d78a3-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"77e4d716-413c-4631-a5ea-e459707d78a3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.899045 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77e4d716-413c-4631-a5ea-e459707d78a3-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"77e4d716-413c-4631-a5ea-e459707d78a3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.899376 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94909197-ed08-41db-ac95-ad9bfcd5df75-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"94909197-ed08-41db-ac95-ad9bfcd5df75\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.901407 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2jjt\" (UniqueName: \"kubernetes.io/projected/258f22b8-2e89-409e-bd55-5e94f0c5d861-kube-api-access-t2jjt\") pod \"watcher-kuttl-applier-0\" (UID: \"258f22b8-2e89-409e-bd55-5e94f0c5d861\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.902125 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j46pd\" (UniqueName: \"kubernetes.io/projected/77e4d716-413c-4631-a5ea-e459707d78a3-kube-api-access-j46pd\") pod \"watcher-kuttl-api-0\" (UID: \"77e4d716-413c-4631-a5ea-e459707d78a3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.902822 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq24w\" (UniqueName: 
\"kubernetes.io/projected/94909197-ed08-41db-ac95-ad9bfcd5df75-kube-api-access-jq24w\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"94909197-ed08-41db-ac95-ad9bfcd5df75\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.903710 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/77e4d716-413c-4631-a5ea-e459707d78a3-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"77e4d716-413c-4631-a5ea-e459707d78a3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:34 crc kubenswrapper[4821]: I0309 19:05:34.971821 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:35 crc kubenswrapper[4821]: I0309 19:05:35.071707 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:35 crc kubenswrapper[4821]: I0309 19:05:35.472008 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:05:35 crc kubenswrapper[4821]: I0309 19:05:35.589729 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:05:35 crc kubenswrapper[4821]: I0309 19:05:35.600609 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Mar 09 19:05:35 crc kubenswrapper[4821]: I0309 19:05:35.604284 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77e4d716-413c-4631-a5ea-e459707d78a3-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"77e4d716-413c-4631-a5ea-e459707d78a3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:35 crc kubenswrapper[4821]: I0309 19:05:35.887545 4821 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:36 crc kubenswrapper[4821]: I0309 19:05:36.053513 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"94909197-ed08-41db-ac95-ad9bfcd5df75","Type":"ContainerStarted","Data":"ca24229478cd3033e84520fedbf7384567b58c7ce42f51e5b78c109ae524f5ac"} Mar 09 19:05:36 crc kubenswrapper[4821]: I0309 19:05:36.053551 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"94909197-ed08-41db-ac95-ad9bfcd5df75","Type":"ContainerStarted","Data":"1eba9bbb17180621f1d9aaef96770528d891aa7c78ec3d4d3dc83c415dfba553"} Mar 09 19:05:36 crc kubenswrapper[4821]: I0309 19:05:36.068820 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"258f22b8-2e89-409e-bd55-5e94f0c5d861","Type":"ContainerStarted","Data":"9250b66fc2a4ed1f0091c5a7eee179fb61bfc090f9a38e35d766d8428d7c1f3a"} Mar 09 19:05:36 crc kubenswrapper[4821]: I0309 19:05:36.068884 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"258f22b8-2e89-409e-bd55-5e94f0c5d861","Type":"ContainerStarted","Data":"8bd6d00afa86434a4c760597b6a893816b59f999bff65bd9e961a07b726ca810"} Mar 09 19:05:36 crc kubenswrapper[4821]: I0309 19:05:36.084090 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.084065334 podStartE2EDuration="2.084065334s" podCreationTimestamp="2026-03-09 19:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:05:36.075884312 +0000 UTC m=+2473.237260168" watchObservedRunningTime="2026-03-09 19:05:36.084065334 +0000 UTC m=+2473.245441200" Mar 09 
19:05:36 crc kubenswrapper[4821]: I0309 19:05:36.103971 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.103947463 podStartE2EDuration="2.103947463s" podCreationTimestamp="2026-03-09 19:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:05:36.095271338 +0000 UTC m=+2473.256647194" watchObservedRunningTime="2026-03-09 19:05:36.103947463 +0000 UTC m=+2473.265323319" Mar 09 19:05:36 crc kubenswrapper[4821]: I0309 19:05:36.387159 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:05:37 crc kubenswrapper[4821]: I0309 19:05:37.080770 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"77e4d716-413c-4631-a5ea-e459707d78a3","Type":"ContainerStarted","Data":"acf7883498e39ef38ba7439cf43f1755d1fa2e416d9ee6aa40936ebd70639e95"} Mar 09 19:05:37 crc kubenswrapper[4821]: I0309 19:05:37.081142 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"77e4d716-413c-4631-a5ea-e459707d78a3","Type":"ContainerStarted","Data":"936903f98a89720a3f45a760012973b3c45bc6c173106dc82de5898ebfa51c5e"} Mar 09 19:05:37 crc kubenswrapper[4821]: I0309 19:05:37.081163 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"77e4d716-413c-4631-a5ea-e459707d78a3","Type":"ContainerStarted","Data":"0cdef2676bf4ebdb2c9ff4770fd0c62792d75fadf112f5b5620bc3bfa2dc85ed"} Mar 09 19:05:37 crc kubenswrapper[4821]: I0309 19:05:37.111248 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=3.111218624 podStartE2EDuration="3.111218624s" podCreationTimestamp="2026-03-09 19:05:34 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:05:37.102009904 +0000 UTC m=+2474.263385760" watchObservedRunningTime="2026-03-09 19:05:37.111218624 +0000 UTC m=+2474.272594550" Mar 09 19:05:38 crc kubenswrapper[4821]: I0309 19:05:38.089285 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:39 crc kubenswrapper[4821]: I0309 19:05:39.972347 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:40 crc kubenswrapper[4821]: I0309 19:05:40.316866 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:40 crc kubenswrapper[4821]: I0309 19:05:40.888216 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:44 crc kubenswrapper[4821]: I0309 19:05:44.972910 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:45 crc kubenswrapper[4821]: I0309 19:05:45.007492 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:45 crc kubenswrapper[4821]: I0309 19:05:45.072954 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:45 crc kubenswrapper[4821]: I0309 19:05:45.102720 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:45 crc kubenswrapper[4821]: I0309 19:05:45.196484 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:45 crc 
kubenswrapper[4821]: I0309 19:05:45.217483 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:05:45 crc kubenswrapper[4821]: I0309 19:05:45.219312 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:05:45 crc kubenswrapper[4821]: I0309 19:05:45.887819 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:45 crc kubenswrapper[4821]: I0309 19:05:45.897790 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:46 crc kubenswrapper[4821]: I0309 19:05:46.231964 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:05:48 crc kubenswrapper[4821]: I0309 19:05:48.297780 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:05:48 crc kubenswrapper[4821]: I0309 19:05:48.298982 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="612aac74-39c0-4091-ac1d-b47512ee620a" containerName="ceilometer-central-agent" containerID="cri-o://e6e9d1b118a2c2531ee1c6b4365445e738db844ad169be40014f539f08a827e9" gracePeriod=30 Mar 09 19:05:48 crc kubenswrapper[4821]: I0309 19:05:48.299016 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="612aac74-39c0-4091-ac1d-b47512ee620a" containerName="proxy-httpd" containerID="cri-o://c9701527a6201ef89259001fa4110a6e7ef7c0cdf95b490f402af39b49ebbfa1" gracePeriod=30 Mar 09 19:05:48 crc kubenswrapper[4821]: I0309 19:05:48.299048 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" 
podUID="612aac74-39c0-4091-ac1d-b47512ee620a" containerName="sg-core" containerID="cri-o://d8adf2e9b5e3a1901323c18b4a7b6c1bbbe55833072f80d7b506eaf64fc778bc" gracePeriod=30 Mar 09 19:05:48 crc kubenswrapper[4821]: I0309 19:05:48.299063 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="612aac74-39c0-4091-ac1d-b47512ee620a" containerName="ceilometer-notification-agent" containerID="cri-o://1389caabd23a80afc9a90b603aac8713f7fa0d7e8ce18e220fcaa1d0fb69ed98" gracePeriod=30 Mar 09 19:05:48 crc kubenswrapper[4821]: I0309 19:05:48.318562 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="612aac74-39c0-4091-ac1d-b47512ee620a" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.218:3000/\": EOF" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.124107 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.241484 4821 generic.go:334] "Generic (PLEG): container finished" podID="612aac74-39c0-4091-ac1d-b47512ee620a" containerID="c9701527a6201ef89259001fa4110a6e7ef7c0cdf95b490f402af39b49ebbfa1" exitCode=0 Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.241511 4821 generic.go:334] "Generic (PLEG): container finished" podID="612aac74-39c0-4091-ac1d-b47512ee620a" containerID="d8adf2e9b5e3a1901323c18b4a7b6c1bbbe55833072f80d7b506eaf64fc778bc" exitCode=2 Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.241519 4821 generic.go:334] "Generic (PLEG): container finished" podID="612aac74-39c0-4091-ac1d-b47512ee620a" containerID="1389caabd23a80afc9a90b603aac8713f7fa0d7e8ce18e220fcaa1d0fb69ed98" exitCode=0 Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.241526 4821 generic.go:334] "Generic (PLEG): container finished" podID="612aac74-39c0-4091-ac1d-b47512ee620a" 
containerID="e6e9d1b118a2c2531ee1c6b4365445e738db844ad169be40014f539f08a827e9" exitCode=0 Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.241547 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.241544 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"612aac74-39c0-4091-ac1d-b47512ee620a","Type":"ContainerDied","Data":"c9701527a6201ef89259001fa4110a6e7ef7c0cdf95b490f402af39b49ebbfa1"} Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.241659 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"612aac74-39c0-4091-ac1d-b47512ee620a","Type":"ContainerDied","Data":"d8adf2e9b5e3a1901323c18b4a7b6c1bbbe55833072f80d7b506eaf64fc778bc"} Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.241675 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"612aac74-39c0-4091-ac1d-b47512ee620a","Type":"ContainerDied","Data":"1389caabd23a80afc9a90b603aac8713f7fa0d7e8ce18e220fcaa1d0fb69ed98"} Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.241687 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"612aac74-39c0-4091-ac1d-b47512ee620a","Type":"ContainerDied","Data":"e6e9d1b118a2c2531ee1c6b4365445e738db844ad169be40014f539f08a827e9"} Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.241701 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"612aac74-39c0-4091-ac1d-b47512ee620a","Type":"ContainerDied","Data":"923e4e3de1f33056d77ee36f9a3a6beeb953a6320dfa8676586546f07df68646"} Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.241720 4821 scope.go:117] "RemoveContainer" containerID="c9701527a6201ef89259001fa4110a6e7ef7c0cdf95b490f402af39b49ebbfa1" Mar 
09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.254218 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/612aac74-39c0-4091-ac1d-b47512ee620a-ceilometer-tls-certs\") pod \"612aac74-39c0-4091-ac1d-b47512ee620a\" (UID: \"612aac74-39c0-4091-ac1d-b47512ee620a\") " Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.254261 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/612aac74-39c0-4091-ac1d-b47512ee620a-combined-ca-bundle\") pod \"612aac74-39c0-4091-ac1d-b47512ee620a\" (UID: \"612aac74-39c0-4091-ac1d-b47512ee620a\") " Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.254288 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/612aac74-39c0-4091-ac1d-b47512ee620a-config-data\") pod \"612aac74-39c0-4091-ac1d-b47512ee620a\" (UID: \"612aac74-39c0-4091-ac1d-b47512ee620a\") " Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.254354 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/612aac74-39c0-4091-ac1d-b47512ee620a-log-httpd\") pod \"612aac74-39c0-4091-ac1d-b47512ee620a\" (UID: \"612aac74-39c0-4091-ac1d-b47512ee620a\") " Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.254406 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/612aac74-39c0-4091-ac1d-b47512ee620a-sg-core-conf-yaml\") pod \"612aac74-39c0-4091-ac1d-b47512ee620a\" (UID: \"612aac74-39c0-4091-ac1d-b47512ee620a\") " Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.254427 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/612aac74-39c0-4091-ac1d-b47512ee620a-scripts\") pod \"612aac74-39c0-4091-ac1d-b47512ee620a\" (UID: \"612aac74-39c0-4091-ac1d-b47512ee620a\") " Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.254489 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkwns\" (UniqueName: \"kubernetes.io/projected/612aac74-39c0-4091-ac1d-b47512ee620a-kube-api-access-xkwns\") pod \"612aac74-39c0-4091-ac1d-b47512ee620a\" (UID: \"612aac74-39c0-4091-ac1d-b47512ee620a\") " Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.254520 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/612aac74-39c0-4091-ac1d-b47512ee620a-run-httpd\") pod \"612aac74-39c0-4091-ac1d-b47512ee620a\" (UID: \"612aac74-39c0-4091-ac1d-b47512ee620a\") " Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.255347 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/612aac74-39c0-4091-ac1d-b47512ee620a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "612aac74-39c0-4091-ac1d-b47512ee620a" (UID: "612aac74-39c0-4091-ac1d-b47512ee620a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.256446 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/612aac74-39c0-4091-ac1d-b47512ee620a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "612aac74-39c0-4091-ac1d-b47512ee620a" (UID: "612aac74-39c0-4091-ac1d-b47512ee620a"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.260065 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/612aac74-39c0-4091-ac1d-b47512ee620a-scripts" (OuterVolumeSpecName: "scripts") pod "612aac74-39c0-4091-ac1d-b47512ee620a" (UID: "612aac74-39c0-4091-ac1d-b47512ee620a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.260495 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/612aac74-39c0-4091-ac1d-b47512ee620a-kube-api-access-xkwns" (OuterVolumeSpecName: "kube-api-access-xkwns") pod "612aac74-39c0-4091-ac1d-b47512ee620a" (UID: "612aac74-39c0-4091-ac1d-b47512ee620a"). InnerVolumeSpecName "kube-api-access-xkwns". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.260717 4821 scope.go:117] "RemoveContainer" containerID="d8adf2e9b5e3a1901323c18b4a7b6c1bbbe55833072f80d7b506eaf64fc778bc" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.277778 4821 scope.go:117] "RemoveContainer" containerID="1389caabd23a80afc9a90b603aac8713f7fa0d7e8ce18e220fcaa1d0fb69ed98" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.283168 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/612aac74-39c0-4091-ac1d-b47512ee620a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "612aac74-39c0-4091-ac1d-b47512ee620a" (UID: "612aac74-39c0-4091-ac1d-b47512ee620a"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.306888 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/612aac74-39c0-4091-ac1d-b47512ee620a-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "612aac74-39c0-4091-ac1d-b47512ee620a" (UID: "612aac74-39c0-4091-ac1d-b47512ee620a"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.319459 4821 scope.go:117] "RemoveContainer" containerID="e6e9d1b118a2c2531ee1c6b4365445e738db844ad169be40014f539f08a827e9" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.325813 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/612aac74-39c0-4091-ac1d-b47512ee620a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "612aac74-39c0-4091-ac1d-b47512ee620a" (UID: "612aac74-39c0-4091-ac1d-b47512ee620a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.341149 4821 scope.go:117] "RemoveContainer" containerID="c9701527a6201ef89259001fa4110a6e7ef7c0cdf95b490f402af39b49ebbfa1" Mar 09 19:05:49 crc kubenswrapper[4821]: E0309 19:05:49.341652 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9701527a6201ef89259001fa4110a6e7ef7c0cdf95b490f402af39b49ebbfa1\": container with ID starting with c9701527a6201ef89259001fa4110a6e7ef7c0cdf95b490f402af39b49ebbfa1 not found: ID does not exist" containerID="c9701527a6201ef89259001fa4110a6e7ef7c0cdf95b490f402af39b49ebbfa1" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.341693 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9701527a6201ef89259001fa4110a6e7ef7c0cdf95b490f402af39b49ebbfa1"} err="failed to get container status \"c9701527a6201ef89259001fa4110a6e7ef7c0cdf95b490f402af39b49ebbfa1\": rpc error: code = NotFound desc = could not find container \"c9701527a6201ef89259001fa4110a6e7ef7c0cdf95b490f402af39b49ebbfa1\": container with ID starting with c9701527a6201ef89259001fa4110a6e7ef7c0cdf95b490f402af39b49ebbfa1 not found: ID does not exist" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.341719 4821 scope.go:117] "RemoveContainer" containerID="d8adf2e9b5e3a1901323c18b4a7b6c1bbbe55833072f80d7b506eaf64fc778bc" Mar 09 19:05:49 crc kubenswrapper[4821]: E0309 19:05:49.342096 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8adf2e9b5e3a1901323c18b4a7b6c1bbbe55833072f80d7b506eaf64fc778bc\": container with ID starting with d8adf2e9b5e3a1901323c18b4a7b6c1bbbe55833072f80d7b506eaf64fc778bc not found: ID does not exist" containerID="d8adf2e9b5e3a1901323c18b4a7b6c1bbbe55833072f80d7b506eaf64fc778bc" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.342114 
4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8adf2e9b5e3a1901323c18b4a7b6c1bbbe55833072f80d7b506eaf64fc778bc"} err="failed to get container status \"d8adf2e9b5e3a1901323c18b4a7b6c1bbbe55833072f80d7b506eaf64fc778bc\": rpc error: code = NotFound desc = could not find container \"d8adf2e9b5e3a1901323c18b4a7b6c1bbbe55833072f80d7b506eaf64fc778bc\": container with ID starting with d8adf2e9b5e3a1901323c18b4a7b6c1bbbe55833072f80d7b506eaf64fc778bc not found: ID does not exist" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.342126 4821 scope.go:117] "RemoveContainer" containerID="1389caabd23a80afc9a90b603aac8713f7fa0d7e8ce18e220fcaa1d0fb69ed98" Mar 09 19:05:49 crc kubenswrapper[4821]: E0309 19:05:49.342509 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1389caabd23a80afc9a90b603aac8713f7fa0d7e8ce18e220fcaa1d0fb69ed98\": container with ID starting with 1389caabd23a80afc9a90b603aac8713f7fa0d7e8ce18e220fcaa1d0fb69ed98 not found: ID does not exist" containerID="1389caabd23a80afc9a90b603aac8713f7fa0d7e8ce18e220fcaa1d0fb69ed98" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.342567 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1389caabd23a80afc9a90b603aac8713f7fa0d7e8ce18e220fcaa1d0fb69ed98"} err="failed to get container status \"1389caabd23a80afc9a90b603aac8713f7fa0d7e8ce18e220fcaa1d0fb69ed98\": rpc error: code = NotFound desc = could not find container \"1389caabd23a80afc9a90b603aac8713f7fa0d7e8ce18e220fcaa1d0fb69ed98\": container with ID starting with 1389caabd23a80afc9a90b603aac8713f7fa0d7e8ce18e220fcaa1d0fb69ed98 not found: ID does not exist" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.342602 4821 scope.go:117] "RemoveContainer" containerID="e6e9d1b118a2c2531ee1c6b4365445e738db844ad169be40014f539f08a827e9" Mar 09 19:05:49 crc kubenswrapper[4821]: E0309 
19:05:49.342958 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6e9d1b118a2c2531ee1c6b4365445e738db844ad169be40014f539f08a827e9\": container with ID starting with e6e9d1b118a2c2531ee1c6b4365445e738db844ad169be40014f539f08a827e9 not found: ID does not exist" containerID="e6e9d1b118a2c2531ee1c6b4365445e738db844ad169be40014f539f08a827e9" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.342984 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6e9d1b118a2c2531ee1c6b4365445e738db844ad169be40014f539f08a827e9"} err="failed to get container status \"e6e9d1b118a2c2531ee1c6b4365445e738db844ad169be40014f539f08a827e9\": rpc error: code = NotFound desc = could not find container \"e6e9d1b118a2c2531ee1c6b4365445e738db844ad169be40014f539f08a827e9\": container with ID starting with e6e9d1b118a2c2531ee1c6b4365445e738db844ad169be40014f539f08a827e9 not found: ID does not exist" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.342998 4821 scope.go:117] "RemoveContainer" containerID="c9701527a6201ef89259001fa4110a6e7ef7c0cdf95b490f402af39b49ebbfa1" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.343009 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/612aac74-39c0-4091-ac1d-b47512ee620a-config-data" (OuterVolumeSpecName: "config-data") pod "612aac74-39c0-4091-ac1d-b47512ee620a" (UID: "612aac74-39c0-4091-ac1d-b47512ee620a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.343215 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9701527a6201ef89259001fa4110a6e7ef7c0cdf95b490f402af39b49ebbfa1"} err="failed to get container status \"c9701527a6201ef89259001fa4110a6e7ef7c0cdf95b490f402af39b49ebbfa1\": rpc error: code = NotFound desc = could not find container \"c9701527a6201ef89259001fa4110a6e7ef7c0cdf95b490f402af39b49ebbfa1\": container with ID starting with c9701527a6201ef89259001fa4110a6e7ef7c0cdf95b490f402af39b49ebbfa1 not found: ID does not exist" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.343242 4821 scope.go:117] "RemoveContainer" containerID="d8adf2e9b5e3a1901323c18b4a7b6c1bbbe55833072f80d7b506eaf64fc778bc" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.343548 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8adf2e9b5e3a1901323c18b4a7b6c1bbbe55833072f80d7b506eaf64fc778bc"} err="failed to get container status \"d8adf2e9b5e3a1901323c18b4a7b6c1bbbe55833072f80d7b506eaf64fc778bc\": rpc error: code = NotFound desc = could not find container \"d8adf2e9b5e3a1901323c18b4a7b6c1bbbe55833072f80d7b506eaf64fc778bc\": container with ID starting with d8adf2e9b5e3a1901323c18b4a7b6c1bbbe55833072f80d7b506eaf64fc778bc not found: ID does not exist" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.343567 4821 scope.go:117] "RemoveContainer" containerID="1389caabd23a80afc9a90b603aac8713f7fa0d7e8ce18e220fcaa1d0fb69ed98" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.343783 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1389caabd23a80afc9a90b603aac8713f7fa0d7e8ce18e220fcaa1d0fb69ed98"} err="failed to get container status \"1389caabd23a80afc9a90b603aac8713f7fa0d7e8ce18e220fcaa1d0fb69ed98\": rpc error: code = NotFound desc = could not find container 
\"1389caabd23a80afc9a90b603aac8713f7fa0d7e8ce18e220fcaa1d0fb69ed98\": container with ID starting with 1389caabd23a80afc9a90b603aac8713f7fa0d7e8ce18e220fcaa1d0fb69ed98 not found: ID does not exist" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.343811 4821 scope.go:117] "RemoveContainer" containerID="e6e9d1b118a2c2531ee1c6b4365445e738db844ad169be40014f539f08a827e9" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.344012 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6e9d1b118a2c2531ee1c6b4365445e738db844ad169be40014f539f08a827e9"} err="failed to get container status \"e6e9d1b118a2c2531ee1c6b4365445e738db844ad169be40014f539f08a827e9\": rpc error: code = NotFound desc = could not find container \"e6e9d1b118a2c2531ee1c6b4365445e738db844ad169be40014f539f08a827e9\": container with ID starting with e6e9d1b118a2c2531ee1c6b4365445e738db844ad169be40014f539f08a827e9 not found: ID does not exist" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.344032 4821 scope.go:117] "RemoveContainer" containerID="c9701527a6201ef89259001fa4110a6e7ef7c0cdf95b490f402af39b49ebbfa1" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.344232 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9701527a6201ef89259001fa4110a6e7ef7c0cdf95b490f402af39b49ebbfa1"} err="failed to get container status \"c9701527a6201ef89259001fa4110a6e7ef7c0cdf95b490f402af39b49ebbfa1\": rpc error: code = NotFound desc = could not find container \"c9701527a6201ef89259001fa4110a6e7ef7c0cdf95b490f402af39b49ebbfa1\": container with ID starting with c9701527a6201ef89259001fa4110a6e7ef7c0cdf95b490f402af39b49ebbfa1 not found: ID does not exist" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.344256 4821 scope.go:117] "RemoveContainer" containerID="d8adf2e9b5e3a1901323c18b4a7b6c1bbbe55833072f80d7b506eaf64fc778bc" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.344554 4821 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8adf2e9b5e3a1901323c18b4a7b6c1bbbe55833072f80d7b506eaf64fc778bc"} err="failed to get container status \"d8adf2e9b5e3a1901323c18b4a7b6c1bbbe55833072f80d7b506eaf64fc778bc\": rpc error: code = NotFound desc = could not find container \"d8adf2e9b5e3a1901323c18b4a7b6c1bbbe55833072f80d7b506eaf64fc778bc\": container with ID starting with d8adf2e9b5e3a1901323c18b4a7b6c1bbbe55833072f80d7b506eaf64fc778bc not found: ID does not exist" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.344576 4821 scope.go:117] "RemoveContainer" containerID="1389caabd23a80afc9a90b603aac8713f7fa0d7e8ce18e220fcaa1d0fb69ed98" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.344907 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1389caabd23a80afc9a90b603aac8713f7fa0d7e8ce18e220fcaa1d0fb69ed98"} err="failed to get container status \"1389caabd23a80afc9a90b603aac8713f7fa0d7e8ce18e220fcaa1d0fb69ed98\": rpc error: code = NotFound desc = could not find container \"1389caabd23a80afc9a90b603aac8713f7fa0d7e8ce18e220fcaa1d0fb69ed98\": container with ID starting with 1389caabd23a80afc9a90b603aac8713f7fa0d7e8ce18e220fcaa1d0fb69ed98 not found: ID does not exist" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.344954 4821 scope.go:117] "RemoveContainer" containerID="e6e9d1b118a2c2531ee1c6b4365445e738db844ad169be40014f539f08a827e9" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.345235 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6e9d1b118a2c2531ee1c6b4365445e738db844ad169be40014f539f08a827e9"} err="failed to get container status \"e6e9d1b118a2c2531ee1c6b4365445e738db844ad169be40014f539f08a827e9\": rpc error: code = NotFound desc = could not find container \"e6e9d1b118a2c2531ee1c6b4365445e738db844ad169be40014f539f08a827e9\": container with ID starting with 
e6e9d1b118a2c2531ee1c6b4365445e738db844ad169be40014f539f08a827e9 not found: ID does not exist" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.345254 4821 scope.go:117] "RemoveContainer" containerID="c9701527a6201ef89259001fa4110a6e7ef7c0cdf95b490f402af39b49ebbfa1" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.345574 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9701527a6201ef89259001fa4110a6e7ef7c0cdf95b490f402af39b49ebbfa1"} err="failed to get container status \"c9701527a6201ef89259001fa4110a6e7ef7c0cdf95b490f402af39b49ebbfa1\": rpc error: code = NotFound desc = could not find container \"c9701527a6201ef89259001fa4110a6e7ef7c0cdf95b490f402af39b49ebbfa1\": container with ID starting with c9701527a6201ef89259001fa4110a6e7ef7c0cdf95b490f402af39b49ebbfa1 not found: ID does not exist" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.345603 4821 scope.go:117] "RemoveContainer" containerID="d8adf2e9b5e3a1901323c18b4a7b6c1bbbe55833072f80d7b506eaf64fc778bc" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.345905 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8adf2e9b5e3a1901323c18b4a7b6c1bbbe55833072f80d7b506eaf64fc778bc"} err="failed to get container status \"d8adf2e9b5e3a1901323c18b4a7b6c1bbbe55833072f80d7b506eaf64fc778bc\": rpc error: code = NotFound desc = could not find container \"d8adf2e9b5e3a1901323c18b4a7b6c1bbbe55833072f80d7b506eaf64fc778bc\": container with ID starting with d8adf2e9b5e3a1901323c18b4a7b6c1bbbe55833072f80d7b506eaf64fc778bc not found: ID does not exist" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.345924 4821 scope.go:117] "RemoveContainer" containerID="1389caabd23a80afc9a90b603aac8713f7fa0d7e8ce18e220fcaa1d0fb69ed98" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.346142 4821 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1389caabd23a80afc9a90b603aac8713f7fa0d7e8ce18e220fcaa1d0fb69ed98"} err="failed to get container status \"1389caabd23a80afc9a90b603aac8713f7fa0d7e8ce18e220fcaa1d0fb69ed98\": rpc error: code = NotFound desc = could not find container \"1389caabd23a80afc9a90b603aac8713f7fa0d7e8ce18e220fcaa1d0fb69ed98\": container with ID starting with 1389caabd23a80afc9a90b603aac8713f7fa0d7e8ce18e220fcaa1d0fb69ed98 not found: ID does not exist" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.346164 4821 scope.go:117] "RemoveContainer" containerID="e6e9d1b118a2c2531ee1c6b4365445e738db844ad169be40014f539f08a827e9" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.346383 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6e9d1b118a2c2531ee1c6b4365445e738db844ad169be40014f539f08a827e9"} err="failed to get container status \"e6e9d1b118a2c2531ee1c6b4365445e738db844ad169be40014f539f08a827e9\": rpc error: code = NotFound desc = could not find container \"e6e9d1b118a2c2531ee1c6b4365445e738db844ad169be40014f539f08a827e9\": container with ID starting with e6e9d1b118a2c2531ee1c6b4365445e738db844ad169be40014f539f08a827e9 not found: ID does not exist" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.356348 4821 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/612aac74-39c0-4091-ac1d-b47512ee620a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.356376 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/612aac74-39c0-4091-ac1d-b47512ee620a-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.356387 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkwns\" (UniqueName: 
\"kubernetes.io/projected/612aac74-39c0-4091-ac1d-b47512ee620a-kube-api-access-xkwns\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.356397 4821 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/612aac74-39c0-4091-ac1d-b47512ee620a-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.356404 4821 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/612aac74-39c0-4091-ac1d-b47512ee620a-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.356414 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/612aac74-39c0-4091-ac1d-b47512ee620a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.356421 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/612aac74-39c0-4091-ac1d-b47512ee620a-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.356430 4821 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/612aac74-39c0-4091-ac1d-b47512ee620a-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.590159 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.612868 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.647083 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:05:49 crc kubenswrapper[4821]: E0309 19:05:49.647491 4821 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="612aac74-39c0-4091-ac1d-b47512ee620a" containerName="ceilometer-notification-agent" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.647513 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="612aac74-39c0-4091-ac1d-b47512ee620a" containerName="ceilometer-notification-agent" Mar 09 19:05:49 crc kubenswrapper[4821]: E0309 19:05:49.647535 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="612aac74-39c0-4091-ac1d-b47512ee620a" containerName="proxy-httpd" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.647542 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="612aac74-39c0-4091-ac1d-b47512ee620a" containerName="proxy-httpd" Mar 09 19:05:49 crc kubenswrapper[4821]: E0309 19:05:49.647559 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="612aac74-39c0-4091-ac1d-b47512ee620a" containerName="ceilometer-central-agent" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.647566 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="612aac74-39c0-4091-ac1d-b47512ee620a" containerName="ceilometer-central-agent" Mar 09 19:05:49 crc kubenswrapper[4821]: E0309 19:05:49.647581 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="612aac74-39c0-4091-ac1d-b47512ee620a" containerName="sg-core" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.647588 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="612aac74-39c0-4091-ac1d-b47512ee620a" containerName="sg-core" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.647755 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="612aac74-39c0-4091-ac1d-b47512ee620a" containerName="proxy-httpd" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.647782 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="612aac74-39c0-4091-ac1d-b47512ee620a" containerName="ceilometer-central-agent" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 
19:05:49.647799 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="612aac74-39c0-4091-ac1d-b47512ee620a" containerName="sg-core" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.647810 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="612aac74-39c0-4091-ac1d-b47512ee620a" containerName="ceilometer-notification-agent" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.649517 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.651218 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.651493 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.651672 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.661436 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.763453 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.763549 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-scripts\") pod \"ceilometer-0\" (UID: \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\") " 
pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.763573 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-run-httpd\") pod \"ceilometer-0\" (UID: \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.763612 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb8t7\" (UniqueName: \"kubernetes.io/projected/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-kube-api-access-kb8t7\") pod \"ceilometer-0\" (UID: \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.763629 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-log-httpd\") pod \"ceilometer-0\" (UID: \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.763651 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.763802 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-config-data\") pod \"ceilometer-0\" (UID: \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:49 crc 
kubenswrapper[4821]: I0309 19:05:49.763852 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.865162 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb8t7\" (UniqueName: \"kubernetes.io/projected/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-kube-api-access-kb8t7\") pod \"ceilometer-0\" (UID: \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.865448 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-log-httpd\") pod \"ceilometer-0\" (UID: \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.865597 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.865736 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-config-data\") pod \"ceilometer-0\" (UID: \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.865839 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.865953 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-log-httpd\") pod \"ceilometer-0\" (UID: \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.865965 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.866214 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-scripts\") pod \"ceilometer-0\" (UID: \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.866263 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-run-httpd\") pod \"ceilometer-0\" (UID: \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.866694 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-run-httpd\") pod \"ceilometer-0\" (UID: \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\") " 
pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.869770 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-scripts\") pod \"ceilometer-0\" (UID: \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.869814 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.870077 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-config-data\") pod \"ceilometer-0\" (UID: \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.874451 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.885143 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.891737 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb8t7\" (UniqueName: \"kubernetes.io/projected/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-kube-api-access-kb8t7\") pod \"ceilometer-0\" (UID: \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:05:49 crc kubenswrapper[4821]: I0309 19:05:49.972147 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:05:50 crc kubenswrapper[4821]: I0309 19:05:50.463441 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:05:50 crc kubenswrapper[4821]: W0309 19:05:50.468027 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfc6d9b1_c4ed_4ec3_b52a_d648e0cded2b.slice/crio-e89260e4dadc47eb173a48ab39fda076fd1cfd54297e42ea8e43f989ca1d2265 WatchSource:0}: Error finding container e89260e4dadc47eb173a48ab39fda076fd1cfd54297e42ea8e43f989ca1d2265: Status 404 returned error can't find the container with id e89260e4dadc47eb173a48ab39fda076fd1cfd54297e42ea8e43f989ca1d2265
Mar 09 19:05:51 crc kubenswrapper[4821]: I0309 19:05:51.261485 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b","Type":"ContainerStarted","Data":"12b2e9566b7dcf2eece17011294cb6a193c13254c05b52dbc304845d68d17bb6"}
Mar 09 19:05:51 crc kubenswrapper[4821]: I0309 19:05:51.261705 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b","Type":"ContainerStarted","Data":"e89260e4dadc47eb173a48ab39fda076fd1cfd54297e42ea8e43f989ca1d2265"}
Mar 09 19:05:51 crc kubenswrapper[4821]: I0309 19:05:51.563629 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="612aac74-39c0-4091-ac1d-b47512ee620a" path="/var/lib/kubelet/pods/612aac74-39c0-4091-ac1d-b47512ee620a/volumes"
Mar 09 19:05:52 crc kubenswrapper[4821]: I0309 19:05:52.272837 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b","Type":"ContainerStarted","Data":"02810abaa6b83e228bb271a4b549849942cede88a19ab1e3a2e99d99e470c0a0"}
Mar 09 19:05:53 crc kubenswrapper[4821]: I0309 19:05:53.297992 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b","Type":"ContainerStarted","Data":"b3f4f963fc567b189946e502919bba767c6119e24b7db0495730336a624cd758"}
Mar 09 19:05:55 crc kubenswrapper[4821]: I0309 19:05:55.315225 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b","Type":"ContainerStarted","Data":"17e6d5236196059b9d9f8cdd64569a0b2744eba74df612f3880c6a2626ccbf8b"}
Mar 09 19:05:55 crc kubenswrapper[4821]: I0309 19:05:55.315838 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:05:55 crc kubenswrapper[4821]: I0309 19:05:55.345798 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.054107172 podStartE2EDuration="6.345772843s" podCreationTimestamp="2026-03-09 19:05:49 +0000 UTC" firstStartedPulling="2026-03-09 19:05:50.471013608 +0000 UTC m=+2487.632389464" lastFinishedPulling="2026-03-09 19:05:54.762679279 +0000 UTC m=+2491.924055135" observedRunningTime="2026-03-09 19:05:55.341046195 +0000 UTC m=+2492.502422061" watchObservedRunningTime="2026-03-09 19:05:55.345772843 +0000 UTC m=+2492.507148699"
Mar 09 19:05:58 crc kubenswrapper[4821]: I0309 19:05:58.605656 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-44kqp"]
Mar 09 19:05:58 crc kubenswrapper[4821]: I0309 19:05:58.615067 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-44kqp"]
Mar 09 19:05:58 crc kubenswrapper[4821]: I0309 19:05:58.664085 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcherd25c-account-delete-v8tjf"]
Mar 09 19:05:58 crc kubenswrapper[4821]: I0309 19:05:58.665068 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherd25c-account-delete-v8tjf"
Mar 09 19:05:58 crc kubenswrapper[4821]: I0309 19:05:58.682954 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Mar 09 19:05:58 crc kubenswrapper[4821]: I0309 19:05:58.683238 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="94909197-ed08-41db-ac95-ad9bfcd5df75" containerName="watcher-decision-engine" containerID="cri-o://ca24229478cd3033e84520fedbf7384567b58c7ce42f51e5b78c109ae524f5ac" gracePeriod=30
Mar 09 19:05:58 crc kubenswrapper[4821]: I0309 19:05:58.711451 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherd25c-account-delete-v8tjf"]
Mar 09 19:05:58 crc kubenswrapper[4821]: I0309 19:05:58.767129 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 19:05:58 crc kubenswrapper[4821]: I0309 19:05:58.767496 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="77e4d716-413c-4631-a5ea-e459707d78a3" containerName="watcher-api" containerID="cri-o://acf7883498e39ef38ba7439cf43f1755d1fa2e416d9ee6aa40936ebd70639e95" gracePeriod=30
Mar 09 19:05:58 crc kubenswrapper[4821]: I0309 19:05:58.767433 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="77e4d716-413c-4631-a5ea-e459707d78a3" containerName="watcher-kuttl-api-log" containerID="cri-o://936903f98a89720a3f45a760012973b3c45bc6c173106dc82de5898ebfa51c5e" gracePeriod=30
Mar 09 19:05:58 crc kubenswrapper[4821]: I0309 19:05:58.786946 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Mar 09 19:05:58 crc kubenswrapper[4821]: I0309 19:05:58.788979 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="258f22b8-2e89-409e-bd55-5e94f0c5d861" containerName="watcher-applier" containerID="cri-o://9250b66fc2a4ed1f0091c5a7eee179fb61bfc090f9a38e35d766d8428d7c1f3a" gracePeriod=30
Mar 09 19:05:58 crc kubenswrapper[4821]: I0309 19:05:58.819118 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6a810e4-8fad-4ddf-a008-27bdbca3459d-operator-scripts\") pod \"watcherd25c-account-delete-v8tjf\" (UID: \"c6a810e4-8fad-4ddf-a008-27bdbca3459d\") " pod="watcher-kuttl-default/watcherd25c-account-delete-v8tjf"
Mar 09 19:05:58 crc kubenswrapper[4821]: I0309 19:05:58.819202 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztvhh\" (UniqueName: \"kubernetes.io/projected/c6a810e4-8fad-4ddf-a008-27bdbca3459d-kube-api-access-ztvhh\") pod \"watcherd25c-account-delete-v8tjf\" (UID: \"c6a810e4-8fad-4ddf-a008-27bdbca3459d\") " pod="watcher-kuttl-default/watcherd25c-account-delete-v8tjf"
Mar 09 19:05:58 crc kubenswrapper[4821]: I0309 19:05:58.920476 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6a810e4-8fad-4ddf-a008-27bdbca3459d-operator-scripts\") pod \"watcherd25c-account-delete-v8tjf\" (UID: \"c6a810e4-8fad-4ddf-a008-27bdbca3459d\") " pod="watcher-kuttl-default/watcherd25c-account-delete-v8tjf"
Mar 09 19:05:58 crc kubenswrapper[4821]: I0309 19:05:58.920557 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztvhh\" (UniqueName: \"kubernetes.io/projected/c6a810e4-8fad-4ddf-a008-27bdbca3459d-kube-api-access-ztvhh\") pod \"watcherd25c-account-delete-v8tjf\" (UID: \"c6a810e4-8fad-4ddf-a008-27bdbca3459d\") " pod="watcher-kuttl-default/watcherd25c-account-delete-v8tjf"
Mar 09 19:05:58 crc kubenswrapper[4821]: I0309 19:05:58.921206 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6a810e4-8fad-4ddf-a008-27bdbca3459d-operator-scripts\") pod \"watcherd25c-account-delete-v8tjf\" (UID: \"c6a810e4-8fad-4ddf-a008-27bdbca3459d\") " pod="watcher-kuttl-default/watcherd25c-account-delete-v8tjf"
Mar 09 19:05:58 crc kubenswrapper[4821]: I0309 19:05:58.955166 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztvhh\" (UniqueName: \"kubernetes.io/projected/c6a810e4-8fad-4ddf-a008-27bdbca3459d-kube-api-access-ztvhh\") pod \"watcherd25c-account-delete-v8tjf\" (UID: \"c6a810e4-8fad-4ddf-a008-27bdbca3459d\") " pod="watcher-kuttl-default/watcherd25c-account-delete-v8tjf"
Mar 09 19:05:58 crc kubenswrapper[4821]: I0309 19:05:58.979837 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherd25c-account-delete-v8tjf"
Mar 09 19:05:59 crc kubenswrapper[4821]: I0309 19:05:59.348588 4821 generic.go:334] "Generic (PLEG): container finished" podID="77e4d716-413c-4631-a5ea-e459707d78a3" containerID="936903f98a89720a3f45a760012973b3c45bc6c173106dc82de5898ebfa51c5e" exitCode=143
Mar 09 19:05:59 crc kubenswrapper[4821]: I0309 19:05:59.348667 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"77e4d716-413c-4631-a5ea-e459707d78a3","Type":"ContainerDied","Data":"936903f98a89720a3f45a760012973b3c45bc6c173106dc82de5898ebfa51c5e"}
Mar 09 19:05:59 crc kubenswrapper[4821]: I0309 19:05:59.493715 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherd25c-account-delete-v8tjf"]
Mar 09 19:05:59 crc kubenswrapper[4821]: I0309 19:05:59.561474 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e960af1-5e85-4364-a986-be14476acab4" path="/var/lib/kubelet/pods/6e960af1-5e85-4364-a986-be14476acab4/volumes"
Mar 09 19:05:59 crc kubenswrapper[4821]: I0309 19:05:59.914007 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 09 19:05:59 crc kubenswrapper[4821]: I0309 19:05:59.914073 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 09 19:05:59 crc kubenswrapper[4821]: E0309 19:05:59.974238 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9250b66fc2a4ed1f0091c5a7eee179fb61bfc090f9a38e35d766d8428d7c1f3a" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Mar 09 19:05:59 crc kubenswrapper[4821]: E0309 19:05:59.976303 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9250b66fc2a4ed1f0091c5a7eee179fb61bfc090f9a38e35d766d8428d7c1f3a" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Mar 09 19:05:59 crc kubenswrapper[4821]: E0309 19:05:59.977581 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9250b66fc2a4ed1f0091c5a7eee179fb61bfc090f9a38e35d766d8428d7c1f3a" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Mar 09 19:05:59 crc kubenswrapper[4821]: E0309 19:05:59.977639 4821 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="258f22b8-2e89-409e-bd55-5e94f0c5d861" containerName="watcher-applier"
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.050786 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.135755 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29551386-fqtgc"]
Mar 09 19:06:00 crc kubenswrapper[4821]: E0309 19:06:00.136166 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77e4d716-413c-4631-a5ea-e459707d78a3" containerName="watcher-api"
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.136191 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="77e4d716-413c-4631-a5ea-e459707d78a3" containerName="watcher-api"
Mar 09 19:06:00 crc kubenswrapper[4821]: E0309 19:06:00.136212 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77e4d716-413c-4631-a5ea-e459707d78a3" containerName="watcher-kuttl-api-log"
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.136221 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="77e4d716-413c-4631-a5ea-e459707d78a3" containerName="watcher-kuttl-api-log"
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.137059 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="77e4d716-413c-4631-a5ea-e459707d78a3" containerName="watcher-kuttl-api-log"
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.137096 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="77e4d716-413c-4631-a5ea-e459707d78a3" containerName="watcher-api"
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.137877 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551386-fqtgc"
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.138664 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77e4d716-413c-4631-a5ea-e459707d78a3-logs\") pod \"77e4d716-413c-4631-a5ea-e459707d78a3\" (UID: \"77e4d716-413c-4631-a5ea-e459707d78a3\") "
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.138760 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j46pd\" (UniqueName: \"kubernetes.io/projected/77e4d716-413c-4631-a5ea-e459707d78a3-kube-api-access-j46pd\") pod \"77e4d716-413c-4631-a5ea-e459707d78a3\" (UID: \"77e4d716-413c-4631-a5ea-e459707d78a3\") "
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.138853 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77e4d716-413c-4631-a5ea-e459707d78a3-combined-ca-bundle\") pod \"77e4d716-413c-4631-a5ea-e459707d78a3\" (UID: \"77e4d716-413c-4631-a5ea-e459707d78a3\") "
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.138924 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/77e4d716-413c-4631-a5ea-e459707d78a3-custom-prometheus-ca\") pod \"77e4d716-413c-4631-a5ea-e459707d78a3\" (UID: \"77e4d716-413c-4631-a5ea-e459707d78a3\") "
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.139352 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77e4d716-413c-4631-a5ea-e459707d78a3-logs" (OuterVolumeSpecName: "logs") pod "77e4d716-413c-4631-a5ea-e459707d78a3" (UID: "77e4d716-413c-4631-a5ea-e459707d78a3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.139686 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/77e4d716-413c-4631-a5ea-e459707d78a3-cert-memcached-mtls\") pod \"77e4d716-413c-4631-a5ea-e459707d78a3\" (UID: \"77e4d716-413c-4631-a5ea-e459707d78a3\") "
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.139801 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77e4d716-413c-4631-a5ea-e459707d78a3-config-data\") pod \"77e4d716-413c-4631-a5ea-e459707d78a3\" (UID: \"77e4d716-413c-4631-a5ea-e459707d78a3\") "
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.140255 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77e4d716-413c-4631-a5ea-e459707d78a3-logs\") on node \"crc\" DevicePath \"\""
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.140815 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.141014 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-mxq7c"
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.141182 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.146782 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77e4d716-413c-4631-a5ea-e459707d78a3-kube-api-access-j46pd" (OuterVolumeSpecName: "kube-api-access-j46pd") pod "77e4d716-413c-4631-a5ea-e459707d78a3" (UID: "77e4d716-413c-4631-a5ea-e459707d78a3"). InnerVolumeSpecName "kube-api-access-j46pd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.186234 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77e4d716-413c-4631-a5ea-e459707d78a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77e4d716-413c-4631-a5ea-e459707d78a3" (UID: "77e4d716-413c-4631-a5ea-e459707d78a3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.198092 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551386-fqtgc"]
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.211594 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77e4d716-413c-4631-a5ea-e459707d78a3-config-data" (OuterVolumeSpecName: "config-data") pod "77e4d716-413c-4631-a5ea-e459707d78a3" (UID: "77e4d716-413c-4631-a5ea-e459707d78a3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.221618 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77e4d716-413c-4631-a5ea-e459707d78a3-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "77e4d716-413c-4631-a5ea-e459707d78a3" (UID: "77e4d716-413c-4631-a5ea-e459707d78a3"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.243374 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbdcs\" (UniqueName: \"kubernetes.io/projected/116854a1-ac31-4634-8373-53ce3889d5e0-kube-api-access-nbdcs\") pod \"auto-csr-approver-29551386-fqtgc\" (UID: \"116854a1-ac31-4634-8373-53ce3889d5e0\") " pod="openshift-infra/auto-csr-approver-29551386-fqtgc"
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.243594 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77e4d716-413c-4631-a5ea-e459707d78a3-config-data\") on node \"crc\" DevicePath \"\""
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.243607 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j46pd\" (UniqueName: \"kubernetes.io/projected/77e4d716-413c-4631-a5ea-e459707d78a3-kube-api-access-j46pd\") on node \"crc\" DevicePath \"\""
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.243619 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77e4d716-413c-4631-a5ea-e459707d78a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.243629 4821 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/77e4d716-413c-4631-a5ea-e459707d78a3-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.249373 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77e4d716-413c-4631-a5ea-e459707d78a3-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "77e4d716-413c-4631-a5ea-e459707d78a3" (UID: "77e4d716-413c-4631-a5ea-e459707d78a3"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.345534 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbdcs\" (UniqueName: \"kubernetes.io/projected/116854a1-ac31-4634-8373-53ce3889d5e0-kube-api-access-nbdcs\") pod \"auto-csr-approver-29551386-fqtgc\" (UID: \"116854a1-ac31-4634-8373-53ce3889d5e0\") " pod="openshift-infra/auto-csr-approver-29551386-fqtgc"
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.345637 4821 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/77e4d716-413c-4631-a5ea-e459707d78a3-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.359648 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbdcs\" (UniqueName: \"kubernetes.io/projected/116854a1-ac31-4634-8373-53ce3889d5e0-kube-api-access-nbdcs\") pod \"auto-csr-approver-29551386-fqtgc\" (UID: \"116854a1-ac31-4634-8373-53ce3889d5e0\") " pod="openshift-infra/auto-csr-approver-29551386-fqtgc"
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.360492 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherd25c-account-delete-v8tjf" event={"ID":"c6a810e4-8fad-4ddf-a008-27bdbca3459d","Type":"ContainerDied","Data":"8d262c90630cb426e4e4b2bcc086635e3722ff8b122600c4ddcc380828663561"}
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.360811 4821 generic.go:334] "Generic (PLEG): container finished" podID="c6a810e4-8fad-4ddf-a008-27bdbca3459d" containerID="8d262c90630cb426e4e4b2bcc086635e3722ff8b122600c4ddcc380828663561" exitCode=0
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.360899 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherd25c-account-delete-v8tjf" event={"ID":"c6a810e4-8fad-4ddf-a008-27bdbca3459d","Type":"ContainerStarted","Data":"f672b2fc84779fa7d60bff1530d5a98822156f265de85ae587dd616a0fc0df3e"}
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.363304 4821 generic.go:334] "Generic (PLEG): container finished" podID="77e4d716-413c-4631-a5ea-e459707d78a3" containerID="acf7883498e39ef38ba7439cf43f1755d1fa2e416d9ee6aa40936ebd70639e95" exitCode=0
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.363342 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"77e4d716-413c-4631-a5ea-e459707d78a3","Type":"ContainerDied","Data":"acf7883498e39ef38ba7439cf43f1755d1fa2e416d9ee6aa40936ebd70639e95"}
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.363394 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"77e4d716-413c-4631-a5ea-e459707d78a3","Type":"ContainerDied","Data":"0cdef2676bf4ebdb2c9ff4770fd0c62792d75fadf112f5b5620bc3bfa2dc85ed"}
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.363389 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.363409 4821 scope.go:117] "RemoveContainer" containerID="acf7883498e39ef38ba7439cf43f1755d1fa2e416d9ee6aa40936ebd70639e95"
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.403525 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.438217 4821 scope.go:117] "RemoveContainer" containerID="936903f98a89720a3f45a760012973b3c45bc6c173106dc82de5898ebfa51c5e"
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.440267 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.465997 4821 scope.go:117] "RemoveContainer" containerID="acf7883498e39ef38ba7439cf43f1755d1fa2e416d9ee6aa40936ebd70639e95"
Mar 09 19:06:00 crc kubenswrapper[4821]: E0309 19:06:00.466430 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acf7883498e39ef38ba7439cf43f1755d1fa2e416d9ee6aa40936ebd70639e95\": container with ID starting with acf7883498e39ef38ba7439cf43f1755d1fa2e416d9ee6aa40936ebd70639e95 not found: ID does not exist" containerID="acf7883498e39ef38ba7439cf43f1755d1fa2e416d9ee6aa40936ebd70639e95"
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.466465 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acf7883498e39ef38ba7439cf43f1755d1fa2e416d9ee6aa40936ebd70639e95"} err="failed to get container status \"acf7883498e39ef38ba7439cf43f1755d1fa2e416d9ee6aa40936ebd70639e95\": rpc error: code = NotFound desc = could not find container \"acf7883498e39ef38ba7439cf43f1755d1fa2e416d9ee6aa40936ebd70639e95\": container with ID starting with acf7883498e39ef38ba7439cf43f1755d1fa2e416d9ee6aa40936ebd70639e95 not found: ID does not exist"
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.466488 4821 scope.go:117] "RemoveContainer" containerID="936903f98a89720a3f45a760012973b3c45bc6c173106dc82de5898ebfa51c5e"
Mar 09 19:06:00 crc kubenswrapper[4821]: E0309 19:06:00.466732 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"936903f98a89720a3f45a760012973b3c45bc6c173106dc82de5898ebfa51c5e\": container with ID starting with 936903f98a89720a3f45a760012973b3c45bc6c173106dc82de5898ebfa51c5e not found: ID does not exist" containerID="936903f98a89720a3f45a760012973b3c45bc6c173106dc82de5898ebfa51c5e"
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.466749 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"936903f98a89720a3f45a760012973b3c45bc6c173106dc82de5898ebfa51c5e"} err="failed to get container status \"936903f98a89720a3f45a760012973b3c45bc6c173106dc82de5898ebfa51c5e\": rpc error: code = NotFound desc = could not find container \"936903f98a89720a3f45a760012973b3c45bc6c173106dc82de5898ebfa51c5e\": container with ID starting with 936903f98a89720a3f45a760012973b3c45bc6c173106dc82de5898ebfa51c5e not found: ID does not exist"
Mar 09 19:06:00 crc kubenswrapper[4821]: I0309 19:06:00.538034 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551386-fqtgc"
Mar 09 19:06:01 crc kubenswrapper[4821]: I0309 19:06:01.007004 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551386-fqtgc"]
Mar 09 19:06:01 crc kubenswrapper[4821]: W0309 19:06:01.011936 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod116854a1_ac31_4634_8373_53ce3889d5e0.slice/crio-d9414b2081caf9153d98a7b47418b29f002701ab703f0c85093af6aae0a86707 WatchSource:0}: Error finding container d9414b2081caf9153d98a7b47418b29f002701ab703f0c85093af6aae0a86707: Status 404 returned error can't find the container with id d9414b2081caf9153d98a7b47418b29f002701ab703f0c85093af6aae0a86707
Mar 09 19:06:01 crc kubenswrapper[4821]: I0309 19:06:01.236955 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:06:01 crc kubenswrapper[4821]: I0309 19:06:01.237511 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b" containerName="ceilometer-central-agent" containerID="cri-o://12b2e9566b7dcf2eece17011294cb6a193c13254c05b52dbc304845d68d17bb6" gracePeriod=30
Mar 09 19:06:01 crc kubenswrapper[4821]: I0309 19:06:01.237889 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b" containerName="proxy-httpd" containerID="cri-o://17e6d5236196059b9d9f8cdd64569a0b2744eba74df612f3880c6a2626ccbf8b" gracePeriod=30
Mar 09 19:06:01 crc kubenswrapper[4821]: I0309 19:06:01.237870 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b" containerName="sg-core" containerID="cri-o://b3f4f963fc567b189946e502919bba767c6119e24b7db0495730336a624cd758" gracePeriod=30
Mar 09 19:06:01 crc kubenswrapper[4821]: I0309 19:06:01.238020 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b" containerName="ceilometer-notification-agent" containerID="cri-o://02810abaa6b83e228bb271a4b549849942cede88a19ab1e3a2e99d99e470c0a0" gracePeriod=30
Mar 09 19:06:01 crc kubenswrapper[4821]: I0309 19:06:01.372445 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551386-fqtgc" event={"ID":"116854a1-ac31-4634-8373-53ce3889d5e0","Type":"ContainerStarted","Data":"d9414b2081caf9153d98a7b47418b29f002701ab703f0c85093af6aae0a86707"}
Mar 09 19:06:01 crc kubenswrapper[4821]: I0309 19:06:01.374393 4821 generic.go:334] "Generic (PLEG): container finished" podID="bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b" containerID="17e6d5236196059b9d9f8cdd64569a0b2744eba74df612f3880c6a2626ccbf8b" exitCode=0
Mar 09 19:06:01 crc kubenswrapper[4821]: I0309 19:06:01.374416 4821 generic.go:334] "Generic (PLEG): container finished" podID="bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b" containerID="b3f4f963fc567b189946e502919bba767c6119e24b7db0495730336a624cd758" exitCode=2
Mar 09 19:06:01 crc kubenswrapper[4821]: I0309 19:06:01.374440 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b","Type":"ContainerDied","Data":"17e6d5236196059b9d9f8cdd64569a0b2744eba74df612f3880c6a2626ccbf8b"}
Mar 09 19:06:01 crc kubenswrapper[4821]: I0309 19:06:01.374461 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b","Type":"ContainerDied","Data":"b3f4f963fc567b189946e502919bba767c6119e24b7db0495730336a624cd758"}
Mar 09 19:06:01 crc kubenswrapper[4821]: I0309 19:06:01.564202 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77e4d716-413c-4631-a5ea-e459707d78a3" path="/var/lib/kubelet/pods/77e4d716-413c-4631-a5ea-e459707d78a3/volumes"
Mar 09 19:06:01 crc kubenswrapper[4821]: I0309 19:06:01.742820 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherd25c-account-delete-v8tjf"
Mar 09 19:06:01 crc kubenswrapper[4821]: I0309 19:06:01.874656 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztvhh\" (UniqueName: \"kubernetes.io/projected/c6a810e4-8fad-4ddf-a008-27bdbca3459d-kube-api-access-ztvhh\") pod \"c6a810e4-8fad-4ddf-a008-27bdbca3459d\" (UID: \"c6a810e4-8fad-4ddf-a008-27bdbca3459d\") "
Mar 09 19:06:01 crc kubenswrapper[4821]: I0309 19:06:01.874721 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6a810e4-8fad-4ddf-a008-27bdbca3459d-operator-scripts\") pod \"c6a810e4-8fad-4ddf-a008-27bdbca3459d\" (UID: \"c6a810e4-8fad-4ddf-a008-27bdbca3459d\") "
Mar 09 19:06:01 crc kubenswrapper[4821]: I0309 19:06:01.875602 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6a810e4-8fad-4ddf-a008-27bdbca3459d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c6a810e4-8fad-4ddf-a008-27bdbca3459d" (UID: "c6a810e4-8fad-4ddf-a008-27bdbca3459d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 19:06:01 crc kubenswrapper[4821]: I0309 19:06:01.883545 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6a810e4-8fad-4ddf-a008-27bdbca3459d-kube-api-access-ztvhh" (OuterVolumeSpecName: "kube-api-access-ztvhh") pod "c6a810e4-8fad-4ddf-a008-27bdbca3459d" (UID: "c6a810e4-8fad-4ddf-a008-27bdbca3459d"). InnerVolumeSpecName "kube-api-access-ztvhh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:06:01 crc kubenswrapper[4821]: I0309 19:06:01.976893 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6a810e4-8fad-4ddf-a008-27bdbca3459d-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 09 19:06:01 crc kubenswrapper[4821]: I0309 19:06:01.977242 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztvhh\" (UniqueName: \"kubernetes.io/projected/c6a810e4-8fad-4ddf-a008-27bdbca3459d-kube-api-access-ztvhh\") on node \"crc\" DevicePath \"\""
Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.230652 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.284613 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-config-data\") pod \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\" (UID: \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\") "
Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.284657 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-run-httpd\") pod \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\" (UID: \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\") "
Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.284764 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-scripts\") pod \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\" (UID: \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\") "
Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.284797 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-ceilometer-tls-certs\") pod \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\" (UID: \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\") "
Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.284885 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb8t7\" (UniqueName: \"kubernetes.io/projected/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-kube-api-access-kb8t7\") pod \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\" (UID: \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\") "
Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.285163 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-log-httpd\") pod \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\" (UID: \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\") "
Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.285196 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-sg-core-conf-yaml\") pod \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\" (UID: \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\") "
Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.285219 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-combined-ca-bundle\") pod \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\" (UID: \"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b\") "
Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.285377 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b" (UID: "bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.285758 4821 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.286080 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b" (UID: "bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.289909 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-kube-api-access-kb8t7" (OuterVolumeSpecName: "kube-api-access-kb8t7") pod "bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b" (UID: "bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b"). InnerVolumeSpecName "kube-api-access-kb8t7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.292805 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-scripts" (OuterVolumeSpecName: "scripts") pod "bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b" (UID: "bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.319419 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b" (UID: "bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b").
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.343921 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.351089 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b" (UID: "bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.387636 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.389531 4821 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.389563 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb8t7\" (UniqueName: \"kubernetes.io/projected/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-kube-api-access-kb8t7\") on node \"crc\" DevicePath \"\"" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.389573 4821 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.389582 4821 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.402292 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherd25c-account-delete-v8tjf" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.402305 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherd25c-account-delete-v8tjf" event={"ID":"c6a810e4-8fad-4ddf-a008-27bdbca3459d","Type":"ContainerDied","Data":"f672b2fc84779fa7d60bff1530d5a98822156f265de85ae587dd616a0fc0df3e"} Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.402380 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f672b2fc84779fa7d60bff1530d5a98822156f265de85ae587dd616a0fc0df3e" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.411339 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551386-fqtgc" event={"ID":"116854a1-ac31-4634-8373-53ce3889d5e0","Type":"ContainerStarted","Data":"2615dd429b64c05cd27c554829b69270aad5e21e5cd8e21293e1f3f49b91425a"} Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.417941 4821 generic.go:334] "Generic (PLEG): container finished" podID="258f22b8-2e89-409e-bd55-5e94f0c5d861" containerID="9250b66fc2a4ed1f0091c5a7eee179fb61bfc090f9a38e35d766d8428d7c1f3a" exitCode=0 Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.418036 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"258f22b8-2e89-409e-bd55-5e94f0c5d861","Type":"ContainerDied","Data":"9250b66fc2a4ed1f0091c5a7eee179fb61bfc090f9a38e35d766d8428d7c1f3a"} Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.418065 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
event={"ID":"258f22b8-2e89-409e-bd55-5e94f0c5d861","Type":"ContainerDied","Data":"8bd6d00afa86434a4c760597b6a893816b59f999bff65bd9e961a07b726ca810"} Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.418088 4821 scope.go:117] "RemoveContainer" containerID="9250b66fc2a4ed1f0091c5a7eee179fb61bfc090f9a38e35d766d8428d7c1f3a" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.418204 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.422231 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b" (UID: "bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.425939 4821 generic.go:334] "Generic (PLEG): container finished" podID="bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b" containerID="02810abaa6b83e228bb271a4b549849942cede88a19ab1e3a2e99d99e470c0a0" exitCode=0 Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.426059 4821 generic.go:334] "Generic (PLEG): container finished" podID="bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b" containerID="12b2e9566b7dcf2eece17011294cb6a193c13254c05b52dbc304845d68d17bb6" exitCode=0 Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.426057 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b","Type":"ContainerDied","Data":"02810abaa6b83e228bb271a4b549849942cede88a19ab1e3a2e99d99e470c0a0"} Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.426539 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b","Type":"ContainerDied","Data":"12b2e9566b7dcf2eece17011294cb6a193c13254c05b52dbc304845d68d17bb6"} Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.426612 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b","Type":"ContainerDied","Data":"e89260e4dadc47eb173a48ab39fda076fd1cfd54297e42ea8e43f989ca1d2265"} Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.426035 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.452783 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-config-data" (OuterVolumeSpecName: "config-data") pod "bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b" (UID: "bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:06:02 crc kubenswrapper[4821]: E0309 19:06:02.454096 4821 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6a810e4_8fad_4ddf_a008_27bdbca3459d.slice/crio-f672b2fc84779fa7d60bff1530d5a98822156f265de85ae587dd616a0fc0df3e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6a810e4_8fad_4ddf_a008_27bdbca3459d.slice\": RecentStats: unable to find data in memory cache]" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.455288 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29551386-fqtgc" podStartSLOduration=1.512592318 podStartE2EDuration="2.455253988s" podCreationTimestamp="2026-03-09 19:06:00 +0000 UTC" firstStartedPulling="2026-03-09 19:06:01.014516718 +0000 UTC m=+2498.175892574" lastFinishedPulling="2026-03-09 19:06:01.957178388 +0000 UTC m=+2499.118554244" observedRunningTime="2026-03-09 19:06:02.444869336 +0000 UTC m=+2499.606245192" watchObservedRunningTime="2026-03-09 19:06:02.455253988 +0000 UTC m=+2499.616629854" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.480379 4821 scope.go:117] "RemoveContainer" containerID="9250b66fc2a4ed1f0091c5a7eee179fb61bfc090f9a38e35d766d8428d7c1f3a" Mar 09 19:06:02 crc kubenswrapper[4821]: E0309 19:06:02.480809 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9250b66fc2a4ed1f0091c5a7eee179fb61bfc090f9a38e35d766d8428d7c1f3a\": container with ID starting with 9250b66fc2a4ed1f0091c5a7eee179fb61bfc090f9a38e35d766d8428d7c1f3a not found: ID does not exist" containerID="9250b66fc2a4ed1f0091c5a7eee179fb61bfc090f9a38e35d766d8428d7c1f3a" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.480837 4821 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9250b66fc2a4ed1f0091c5a7eee179fb61bfc090f9a38e35d766d8428d7c1f3a"} err="failed to get container status \"9250b66fc2a4ed1f0091c5a7eee179fb61bfc090f9a38e35d766d8428d7c1f3a\": rpc error: code = NotFound desc = could not find container \"9250b66fc2a4ed1f0091c5a7eee179fb61bfc090f9a38e35d766d8428d7c1f3a\": container with ID starting with 9250b66fc2a4ed1f0091c5a7eee179fb61bfc090f9a38e35d766d8428d7c1f3a not found: ID does not exist" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.480859 4821 scope.go:117] "RemoveContainer" containerID="17e6d5236196059b9d9f8cdd64569a0b2744eba74df612f3880c6a2626ccbf8b" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.492686 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2jjt\" (UniqueName: \"kubernetes.io/projected/258f22b8-2e89-409e-bd55-5e94f0c5d861-kube-api-access-t2jjt\") pod \"258f22b8-2e89-409e-bd55-5e94f0c5d861\" (UID: \"258f22b8-2e89-409e-bd55-5e94f0c5d861\") " Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.492737 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/258f22b8-2e89-409e-bd55-5e94f0c5d861-combined-ca-bundle\") pod \"258f22b8-2e89-409e-bd55-5e94f0c5d861\" (UID: \"258f22b8-2e89-409e-bd55-5e94f0c5d861\") " Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.492816 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/258f22b8-2e89-409e-bd55-5e94f0c5d861-config-data\") pod \"258f22b8-2e89-409e-bd55-5e94f0c5d861\" (UID: \"258f22b8-2e89-409e-bd55-5e94f0c5d861\") " Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.492846 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/258f22b8-2e89-409e-bd55-5e94f0c5d861-logs\") pod \"258f22b8-2e89-409e-bd55-5e94f0c5d861\" (UID: \"258f22b8-2e89-409e-bd55-5e94f0c5d861\") " Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.492895 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/258f22b8-2e89-409e-bd55-5e94f0c5d861-cert-memcached-mtls\") pod \"258f22b8-2e89-409e-bd55-5e94f0c5d861\" (UID: \"258f22b8-2e89-409e-bd55-5e94f0c5d861\") " Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.493344 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.493361 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.493656 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/258f22b8-2e89-409e-bd55-5e94f0c5d861-logs" (OuterVolumeSpecName: "logs") pod "258f22b8-2e89-409e-bd55-5e94f0c5d861" (UID: "258f22b8-2e89-409e-bd55-5e94f0c5d861"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.510122 4821 scope.go:117] "RemoveContainer" containerID="b3f4f963fc567b189946e502919bba767c6119e24b7db0495730336a624cd758" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.514277 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/258f22b8-2e89-409e-bd55-5e94f0c5d861-kube-api-access-t2jjt" (OuterVolumeSpecName: "kube-api-access-t2jjt") pod "258f22b8-2e89-409e-bd55-5e94f0c5d861" (UID: "258f22b8-2e89-409e-bd55-5e94f0c5d861"). InnerVolumeSpecName "kube-api-access-t2jjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.531774 4821 scope.go:117] "RemoveContainer" containerID="02810abaa6b83e228bb271a4b549849942cede88a19ab1e3a2e99d99e470c0a0" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.540723 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/258f22b8-2e89-409e-bd55-5e94f0c5d861-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "258f22b8-2e89-409e-bd55-5e94f0c5d861" (UID: "258f22b8-2e89-409e-bd55-5e94f0c5d861"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.550586 4821 scope.go:117] "RemoveContainer" containerID="12b2e9566b7dcf2eece17011294cb6a193c13254c05b52dbc304845d68d17bb6" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.569277 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/258f22b8-2e89-409e-bd55-5e94f0c5d861-config-data" (OuterVolumeSpecName: "config-data") pod "258f22b8-2e89-409e-bd55-5e94f0c5d861" (UID: "258f22b8-2e89-409e-bd55-5e94f0c5d861"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.574256 4821 scope.go:117] "RemoveContainer" containerID="17e6d5236196059b9d9f8cdd64569a0b2744eba74df612f3880c6a2626ccbf8b" Mar 09 19:06:02 crc kubenswrapper[4821]: E0309 19:06:02.575001 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17e6d5236196059b9d9f8cdd64569a0b2744eba74df612f3880c6a2626ccbf8b\": container with ID starting with 17e6d5236196059b9d9f8cdd64569a0b2744eba74df612f3880c6a2626ccbf8b not found: ID does not exist" containerID="17e6d5236196059b9d9f8cdd64569a0b2744eba74df612f3880c6a2626ccbf8b" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.575042 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17e6d5236196059b9d9f8cdd64569a0b2744eba74df612f3880c6a2626ccbf8b"} err="failed to get container status \"17e6d5236196059b9d9f8cdd64569a0b2744eba74df612f3880c6a2626ccbf8b\": rpc error: code = NotFound desc = could not find container \"17e6d5236196059b9d9f8cdd64569a0b2744eba74df612f3880c6a2626ccbf8b\": container with ID starting with 17e6d5236196059b9d9f8cdd64569a0b2744eba74df612f3880c6a2626ccbf8b not found: ID does not exist" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.575100 4821 scope.go:117] "RemoveContainer" containerID="b3f4f963fc567b189946e502919bba767c6119e24b7db0495730336a624cd758" Mar 09 19:06:02 crc kubenswrapper[4821]: E0309 19:06:02.575631 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3f4f963fc567b189946e502919bba767c6119e24b7db0495730336a624cd758\": container with ID starting with b3f4f963fc567b189946e502919bba767c6119e24b7db0495730336a624cd758 not found: ID does not exist" containerID="b3f4f963fc567b189946e502919bba767c6119e24b7db0495730336a624cd758" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.575677 
4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3f4f963fc567b189946e502919bba767c6119e24b7db0495730336a624cd758"} err="failed to get container status \"b3f4f963fc567b189946e502919bba767c6119e24b7db0495730336a624cd758\": rpc error: code = NotFound desc = could not find container \"b3f4f963fc567b189946e502919bba767c6119e24b7db0495730336a624cd758\": container with ID starting with b3f4f963fc567b189946e502919bba767c6119e24b7db0495730336a624cd758 not found: ID does not exist" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.575713 4821 scope.go:117] "RemoveContainer" containerID="02810abaa6b83e228bb271a4b549849942cede88a19ab1e3a2e99d99e470c0a0" Mar 09 19:06:02 crc kubenswrapper[4821]: E0309 19:06:02.576627 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02810abaa6b83e228bb271a4b549849942cede88a19ab1e3a2e99d99e470c0a0\": container with ID starting with 02810abaa6b83e228bb271a4b549849942cede88a19ab1e3a2e99d99e470c0a0 not found: ID does not exist" containerID="02810abaa6b83e228bb271a4b549849942cede88a19ab1e3a2e99d99e470c0a0" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.576672 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02810abaa6b83e228bb271a4b549849942cede88a19ab1e3a2e99d99e470c0a0"} err="failed to get container status \"02810abaa6b83e228bb271a4b549849942cede88a19ab1e3a2e99d99e470c0a0\": rpc error: code = NotFound desc = could not find container \"02810abaa6b83e228bb271a4b549849942cede88a19ab1e3a2e99d99e470c0a0\": container with ID starting with 02810abaa6b83e228bb271a4b549849942cede88a19ab1e3a2e99d99e470c0a0 not found: ID does not exist" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.576687 4821 scope.go:117] "RemoveContainer" containerID="12b2e9566b7dcf2eece17011294cb6a193c13254c05b52dbc304845d68d17bb6" Mar 09 19:06:02 crc kubenswrapper[4821]: E0309 
19:06:02.577163 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12b2e9566b7dcf2eece17011294cb6a193c13254c05b52dbc304845d68d17bb6\": container with ID starting with 12b2e9566b7dcf2eece17011294cb6a193c13254c05b52dbc304845d68d17bb6 not found: ID does not exist" containerID="12b2e9566b7dcf2eece17011294cb6a193c13254c05b52dbc304845d68d17bb6" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.577185 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12b2e9566b7dcf2eece17011294cb6a193c13254c05b52dbc304845d68d17bb6"} err="failed to get container status \"12b2e9566b7dcf2eece17011294cb6a193c13254c05b52dbc304845d68d17bb6\": rpc error: code = NotFound desc = could not find container \"12b2e9566b7dcf2eece17011294cb6a193c13254c05b52dbc304845d68d17bb6\": container with ID starting with 12b2e9566b7dcf2eece17011294cb6a193c13254c05b52dbc304845d68d17bb6 not found: ID does not exist" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.577197 4821 scope.go:117] "RemoveContainer" containerID="17e6d5236196059b9d9f8cdd64569a0b2744eba74df612f3880c6a2626ccbf8b" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.578446 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17e6d5236196059b9d9f8cdd64569a0b2744eba74df612f3880c6a2626ccbf8b"} err="failed to get container status \"17e6d5236196059b9d9f8cdd64569a0b2744eba74df612f3880c6a2626ccbf8b\": rpc error: code = NotFound desc = could not find container \"17e6d5236196059b9d9f8cdd64569a0b2744eba74df612f3880c6a2626ccbf8b\": container with ID starting with 17e6d5236196059b9d9f8cdd64569a0b2744eba74df612f3880c6a2626ccbf8b not found: ID does not exist" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.578475 4821 scope.go:117] "RemoveContainer" containerID="b3f4f963fc567b189946e502919bba767c6119e24b7db0495730336a624cd758" Mar 09 19:06:02 crc 
kubenswrapper[4821]: I0309 19:06:02.578819 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3f4f963fc567b189946e502919bba767c6119e24b7db0495730336a624cd758"} err="failed to get container status \"b3f4f963fc567b189946e502919bba767c6119e24b7db0495730336a624cd758\": rpc error: code = NotFound desc = could not find container \"b3f4f963fc567b189946e502919bba767c6119e24b7db0495730336a624cd758\": container with ID starting with b3f4f963fc567b189946e502919bba767c6119e24b7db0495730336a624cd758 not found: ID does not exist" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.578856 4821 scope.go:117] "RemoveContainer" containerID="02810abaa6b83e228bb271a4b549849942cede88a19ab1e3a2e99d99e470c0a0" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.579302 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02810abaa6b83e228bb271a4b549849942cede88a19ab1e3a2e99d99e470c0a0"} err="failed to get container status \"02810abaa6b83e228bb271a4b549849942cede88a19ab1e3a2e99d99e470c0a0\": rpc error: code = NotFound desc = could not find container \"02810abaa6b83e228bb271a4b549849942cede88a19ab1e3a2e99d99e470c0a0\": container with ID starting with 02810abaa6b83e228bb271a4b549849942cede88a19ab1e3a2e99d99e470c0a0 not found: ID does not exist" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.579342 4821 scope.go:117] "RemoveContainer" containerID="12b2e9566b7dcf2eece17011294cb6a193c13254c05b52dbc304845d68d17bb6" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.579597 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12b2e9566b7dcf2eece17011294cb6a193c13254c05b52dbc304845d68d17bb6"} err="failed to get container status \"12b2e9566b7dcf2eece17011294cb6a193c13254c05b52dbc304845d68d17bb6\": rpc error: code = NotFound desc = could not find container \"12b2e9566b7dcf2eece17011294cb6a193c13254c05b52dbc304845d68d17bb6\": container 
with ID starting with 12b2e9566b7dcf2eece17011294cb6a193c13254c05b52dbc304845d68d17bb6 not found: ID does not exist" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.580714 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/258f22b8-2e89-409e-bd55-5e94f0c5d861-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "258f22b8-2e89-409e-bd55-5e94f0c5d861" (UID: "258f22b8-2e89-409e-bd55-5e94f0c5d861"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.594971 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2jjt\" (UniqueName: \"kubernetes.io/projected/258f22b8-2e89-409e-bd55-5e94f0c5d861-kube-api-access-t2jjt\") on node \"crc\" DevicePath \"\"" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.595002 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/258f22b8-2e89-409e-bd55-5e94f0c5d861-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.595011 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/258f22b8-2e89-409e-bd55-5e94f0c5d861-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.595021 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/258f22b8-2e89-409e-bd55-5e94f0c5d861-logs\") on node \"crc\" DevicePath \"\"" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.595030 4821 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/258f22b8-2e89-409e-bd55-5e94f0c5d861-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.758082 4821 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.768091 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.777891 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.786806 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.808979 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:06:02 crc kubenswrapper[4821]: E0309 19:06:02.810208 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b" containerName="ceilometer-central-agent" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.810248 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b" containerName="ceilometer-central-agent" Mar 09 19:06:02 crc kubenswrapper[4821]: E0309 19:06:02.810286 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="258f22b8-2e89-409e-bd55-5e94f0c5d861" containerName="watcher-applier" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.810295 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="258f22b8-2e89-409e-bd55-5e94f0c5d861" containerName="watcher-applier" Mar 09 19:06:02 crc kubenswrapper[4821]: E0309 19:06:02.810304 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b" containerName="ceilometer-notification-agent" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.810310 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b" containerName="ceilometer-notification-agent" Mar 09 19:06:02 crc 
kubenswrapper[4821]: E0309 19:06:02.810345 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b" containerName="sg-core" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.810352 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b" containerName="sg-core" Mar 09 19:06:02 crc kubenswrapper[4821]: E0309 19:06:02.810379 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a810e4-8fad-4ddf-a008-27bdbca3459d" containerName="mariadb-account-delete" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.810386 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a810e4-8fad-4ddf-a008-27bdbca3459d" containerName="mariadb-account-delete" Mar 09 19:06:02 crc kubenswrapper[4821]: E0309 19:06:02.810400 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b" containerName="proxy-httpd" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.810406 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b" containerName="proxy-httpd" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.810706 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b" containerName="sg-core" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.810734 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b" containerName="ceilometer-central-agent" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.810747 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b" containerName="ceilometer-notification-agent" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.810754 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b" containerName="proxy-httpd" Mar 09 19:06:02 
crc kubenswrapper[4821]: I0309 19:06:02.810768 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="258f22b8-2e89-409e-bd55-5e94f0c5d861" containerName="watcher-applier" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.810786 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a810e4-8fad-4ddf-a008-27bdbca3459d" containerName="mariadb-account-delete" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.815899 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.821005 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.824132 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.830755 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.830897 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.901246 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0187ac96-bd8c-4260-86be-1d2442b47dfa-config-data\") pod \"ceilometer-0\" (UID: \"0187ac96-bd8c-4260-86be-1d2442b47dfa\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.901606 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0187ac96-bd8c-4260-86be-1d2442b47dfa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"0187ac96-bd8c-4260-86be-1d2442b47dfa\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.901632 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0187ac96-bd8c-4260-86be-1d2442b47dfa-scripts\") pod \"ceilometer-0\" (UID: \"0187ac96-bd8c-4260-86be-1d2442b47dfa\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.901663 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkkt9\" (UniqueName: \"kubernetes.io/projected/0187ac96-bd8c-4260-86be-1d2442b47dfa-kube-api-access-lkkt9\") pod \"ceilometer-0\" (UID: \"0187ac96-bd8c-4260-86be-1d2442b47dfa\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.901680 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0187ac96-bd8c-4260-86be-1d2442b47dfa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0187ac96-bd8c-4260-86be-1d2442b47dfa\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.901817 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0187ac96-bd8c-4260-86be-1d2442b47dfa-run-httpd\") pod \"ceilometer-0\" (UID: \"0187ac96-bd8c-4260-86be-1d2442b47dfa\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.902089 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0187ac96-bd8c-4260-86be-1d2442b47dfa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0187ac96-bd8c-4260-86be-1d2442b47dfa\") " 
pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:02 crc kubenswrapper[4821]: I0309 19:06:02.902442 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0187ac96-bd8c-4260-86be-1d2442b47dfa-log-httpd\") pod \"ceilometer-0\" (UID: \"0187ac96-bd8c-4260-86be-1d2442b47dfa\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.003725 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0187ac96-bd8c-4260-86be-1d2442b47dfa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0187ac96-bd8c-4260-86be-1d2442b47dfa\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.003786 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0187ac96-bd8c-4260-86be-1d2442b47dfa-log-httpd\") pod \"ceilometer-0\" (UID: \"0187ac96-bd8c-4260-86be-1d2442b47dfa\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.003817 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0187ac96-bd8c-4260-86be-1d2442b47dfa-config-data\") pod \"ceilometer-0\" (UID: \"0187ac96-bd8c-4260-86be-1d2442b47dfa\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.003848 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0187ac96-bd8c-4260-86be-1d2442b47dfa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0187ac96-bd8c-4260-86be-1d2442b47dfa\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.003874 4821 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0187ac96-bd8c-4260-86be-1d2442b47dfa-scripts\") pod \"ceilometer-0\" (UID: \"0187ac96-bd8c-4260-86be-1d2442b47dfa\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.003914 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0187ac96-bd8c-4260-86be-1d2442b47dfa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0187ac96-bd8c-4260-86be-1d2442b47dfa\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.003954 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkkt9\" (UniqueName: \"kubernetes.io/projected/0187ac96-bd8c-4260-86be-1d2442b47dfa-kube-api-access-lkkt9\") pod \"ceilometer-0\" (UID: \"0187ac96-bd8c-4260-86be-1d2442b47dfa\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.003991 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0187ac96-bd8c-4260-86be-1d2442b47dfa-run-httpd\") pod \"ceilometer-0\" (UID: \"0187ac96-bd8c-4260-86be-1d2442b47dfa\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.004407 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0187ac96-bd8c-4260-86be-1d2442b47dfa-run-httpd\") pod \"ceilometer-0\" (UID: \"0187ac96-bd8c-4260-86be-1d2442b47dfa\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.004404 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0187ac96-bd8c-4260-86be-1d2442b47dfa-log-httpd\") pod \"ceilometer-0\" (UID: 
\"0187ac96-bd8c-4260-86be-1d2442b47dfa\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.007588 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0187ac96-bd8c-4260-86be-1d2442b47dfa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0187ac96-bd8c-4260-86be-1d2442b47dfa\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.008335 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0187ac96-bd8c-4260-86be-1d2442b47dfa-config-data\") pod \"ceilometer-0\" (UID: \"0187ac96-bd8c-4260-86be-1d2442b47dfa\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.009433 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0187ac96-bd8c-4260-86be-1d2442b47dfa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0187ac96-bd8c-4260-86be-1d2442b47dfa\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.010219 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0187ac96-bd8c-4260-86be-1d2442b47dfa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0187ac96-bd8c-4260-86be-1d2442b47dfa\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.011927 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0187ac96-bd8c-4260-86be-1d2442b47dfa-scripts\") pod \"ceilometer-0\" (UID: \"0187ac96-bd8c-4260-86be-1d2442b47dfa\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.020022 4821 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-lkkt9\" (UniqueName: \"kubernetes.io/projected/0187ac96-bd8c-4260-86be-1d2442b47dfa-kube-api-access-lkkt9\") pod \"ceilometer-0\" (UID: \"0187ac96-bd8c-4260-86be-1d2442b47dfa\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.136835 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.437609 4821 generic.go:334] "Generic (PLEG): container finished" podID="116854a1-ac31-4634-8373-53ce3889d5e0" containerID="2615dd429b64c05cd27c554829b69270aad5e21e5cd8e21293e1f3f49b91425a" exitCode=0 Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.437932 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551386-fqtgc" event={"ID":"116854a1-ac31-4634-8373-53ce3889d5e0","Type":"ContainerDied","Data":"2615dd429b64c05cd27c554829b69270aad5e21e5cd8e21293e1f3f49b91425a"} Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.574239 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="258f22b8-2e89-409e-bd55-5e94f0c5d861" path="/var/lib/kubelet/pods/258f22b8-2e89-409e-bd55-5e94f0c5d861/volumes" Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.574847 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b" path="/var/lib/kubelet/pods/bfc6d9b1-c4ed-4ec3-b52a-d648e0cded2b/volumes" Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.575539 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.713645 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-ftmx8"] Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.722572 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["watcher-kuttl-default/watcher-db-create-ftmx8"] Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.729264 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-d25c-account-create-update-nbp6f"] Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.739880 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcherd25c-account-delete-v8tjf"] Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.746733 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-d25c-account-create-update-nbp6f"] Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.754969 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcherd25c-account-delete-v8tjf"] Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.784617 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-btk74"] Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.785950 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-btk74" Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.792713 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-btk74"] Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.897037 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-jdcqq"] Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.898376 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-jdcqq" Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.900931 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.906272 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-jdcqq"] Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.920182 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68pmd\" (UniqueName: \"kubernetes.io/projected/2e107db2-5948-4a60-9745-59aae128e9b6-kube-api-access-68pmd\") pod \"watcher-db-create-btk74\" (UID: \"2e107db2-5948-4a60-9745-59aae128e9b6\") " pod="watcher-kuttl-default/watcher-db-create-btk74" Mar 09 19:06:03 crc kubenswrapper[4821]: I0309 19:06:03.920708 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e107db2-5948-4a60-9745-59aae128e9b6-operator-scripts\") pod \"watcher-db-create-btk74\" (UID: \"2e107db2-5948-4a60-9745-59aae128e9b6\") " pod="watcher-kuttl-default/watcher-db-create-btk74" Mar 09 19:06:04 crc kubenswrapper[4821]: I0309 19:06:04.021619 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4de929b-9a1a-4d74-a3c0-06bfea05f227-operator-scripts\") pod \"watcher-test-account-create-update-jdcqq\" (UID: \"b4de929b-9a1a-4d74-a3c0-06bfea05f227\") " pod="watcher-kuttl-default/watcher-test-account-create-update-jdcqq" Mar 09 19:06:04 crc kubenswrapper[4821]: I0309 19:06:04.021675 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2e107db2-5948-4a60-9745-59aae128e9b6-operator-scripts\") pod \"watcher-db-create-btk74\" (UID: \"2e107db2-5948-4a60-9745-59aae128e9b6\") " pod="watcher-kuttl-default/watcher-db-create-btk74" Mar 09 19:06:04 crc kubenswrapper[4821]: I0309 19:06:04.021790 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68pmd\" (UniqueName: \"kubernetes.io/projected/2e107db2-5948-4a60-9745-59aae128e9b6-kube-api-access-68pmd\") pod \"watcher-db-create-btk74\" (UID: \"2e107db2-5948-4a60-9745-59aae128e9b6\") " pod="watcher-kuttl-default/watcher-db-create-btk74" Mar 09 19:06:04 crc kubenswrapper[4821]: I0309 19:06:04.021895 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-668md\" (UniqueName: \"kubernetes.io/projected/b4de929b-9a1a-4d74-a3c0-06bfea05f227-kube-api-access-668md\") pod \"watcher-test-account-create-update-jdcqq\" (UID: \"b4de929b-9a1a-4d74-a3c0-06bfea05f227\") " pod="watcher-kuttl-default/watcher-test-account-create-update-jdcqq" Mar 09 19:06:04 crc kubenswrapper[4821]: I0309 19:06:04.022335 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e107db2-5948-4a60-9745-59aae128e9b6-operator-scripts\") pod \"watcher-db-create-btk74\" (UID: \"2e107db2-5948-4a60-9745-59aae128e9b6\") " pod="watcher-kuttl-default/watcher-db-create-btk74" Mar 09 19:06:04 crc kubenswrapper[4821]: I0309 19:06:04.038002 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68pmd\" (UniqueName: \"kubernetes.io/projected/2e107db2-5948-4a60-9745-59aae128e9b6-kube-api-access-68pmd\") pod \"watcher-db-create-btk74\" (UID: \"2e107db2-5948-4a60-9745-59aae128e9b6\") " pod="watcher-kuttl-default/watcher-db-create-btk74" Mar 09 19:06:04 crc kubenswrapper[4821]: I0309 19:06:04.104985 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-btk74" Mar 09 19:06:04 crc kubenswrapper[4821]: I0309 19:06:04.123579 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-668md\" (UniqueName: \"kubernetes.io/projected/b4de929b-9a1a-4d74-a3c0-06bfea05f227-kube-api-access-668md\") pod \"watcher-test-account-create-update-jdcqq\" (UID: \"b4de929b-9a1a-4d74-a3c0-06bfea05f227\") " pod="watcher-kuttl-default/watcher-test-account-create-update-jdcqq" Mar 09 19:06:04 crc kubenswrapper[4821]: I0309 19:06:04.123672 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4de929b-9a1a-4d74-a3c0-06bfea05f227-operator-scripts\") pod \"watcher-test-account-create-update-jdcqq\" (UID: \"b4de929b-9a1a-4d74-a3c0-06bfea05f227\") " pod="watcher-kuttl-default/watcher-test-account-create-update-jdcqq" Mar 09 19:06:04 crc kubenswrapper[4821]: I0309 19:06:04.124449 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4de929b-9a1a-4d74-a3c0-06bfea05f227-operator-scripts\") pod \"watcher-test-account-create-update-jdcqq\" (UID: \"b4de929b-9a1a-4d74-a3c0-06bfea05f227\") " pod="watcher-kuttl-default/watcher-test-account-create-update-jdcqq" Mar 09 19:06:04 crc kubenswrapper[4821]: I0309 19:06:04.143537 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-668md\" (UniqueName: \"kubernetes.io/projected/b4de929b-9a1a-4d74-a3c0-06bfea05f227-kube-api-access-668md\") pod \"watcher-test-account-create-update-jdcqq\" (UID: \"b4de929b-9a1a-4d74-a3c0-06bfea05f227\") " pod="watcher-kuttl-default/watcher-test-account-create-update-jdcqq" Mar 09 19:06:04 crc kubenswrapper[4821]: I0309 19:06:04.238996 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-jdcqq" Mar 09 19:06:04 crc kubenswrapper[4821]: I0309 19:06:04.498660 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0187ac96-bd8c-4260-86be-1d2442b47dfa","Type":"ContainerStarted","Data":"a6cb2e632cc16aa4643e44670b555ba852c0af1798bcddee7b12d07fa6b68c31"} Mar 09 19:06:04 crc kubenswrapper[4821]: I0309 19:06:04.499008 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0187ac96-bd8c-4260-86be-1d2442b47dfa","Type":"ContainerStarted","Data":"121ebf3e10a6b165bd9943083cfdb545847a475d4a2405c4e88846fba360336e"} Mar 09 19:06:04 crc kubenswrapper[4821]: I0309 19:06:04.619155 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-btk74"] Mar 09 19:06:04 crc kubenswrapper[4821]: I0309 19:06:04.979212 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-jdcqq"] Mar 09 19:06:05 crc kubenswrapper[4821]: E0309 19:06:05.081707 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ca24229478cd3033e84520fedbf7384567b58c7ce42f51e5b78c109ae524f5ac" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Mar 09 19:06:05 crc kubenswrapper[4821]: E0309 19:06:05.083709 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ca24229478cd3033e84520fedbf7384567b58c7ce42f51e5b78c109ae524f5ac" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Mar 09 19:06:05 crc kubenswrapper[4821]: E0309 19:06:05.096013 4821 log.go:32] "ExecSync cmd from runtime 
service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ca24229478cd3033e84520fedbf7384567b58c7ce42f51e5b78c109ae524f5ac" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Mar 09 19:06:05 crc kubenswrapper[4821]: E0309 19:06:05.096089 4821 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="94909197-ed08-41db-ac95-ad9bfcd5df75" containerName="watcher-decision-engine" Mar 09 19:06:05 crc kubenswrapper[4821]: I0309 19:06:05.283716 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551386-fqtgc" Mar 09 19:06:05 crc kubenswrapper[4821]: I0309 19:06:05.353690 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbdcs\" (UniqueName: \"kubernetes.io/projected/116854a1-ac31-4634-8373-53ce3889d5e0-kube-api-access-nbdcs\") pod \"116854a1-ac31-4634-8373-53ce3889d5e0\" (UID: \"116854a1-ac31-4634-8373-53ce3889d5e0\") " Mar 09 19:06:05 crc kubenswrapper[4821]: I0309 19:06:05.360622 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/116854a1-ac31-4634-8373-53ce3889d5e0-kube-api-access-nbdcs" (OuterVolumeSpecName: "kube-api-access-nbdcs") pod "116854a1-ac31-4634-8373-53ce3889d5e0" (UID: "116854a1-ac31-4634-8373-53ce3889d5e0"). InnerVolumeSpecName "kube-api-access-nbdcs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:06:05 crc kubenswrapper[4821]: I0309 19:06:05.455931 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbdcs\" (UniqueName: \"kubernetes.io/projected/116854a1-ac31-4634-8373-53ce3889d5e0-kube-api-access-nbdcs\") on node \"crc\" DevicePath \"\"" Mar 09 19:06:05 crc kubenswrapper[4821]: I0309 19:06:05.493673 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29551380-nzrzx"] Mar 09 19:06:05 crc kubenswrapper[4821]: I0309 19:06:05.512433 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29551380-nzrzx"] Mar 09 19:06:05 crc kubenswrapper[4821]: I0309 19:06:05.514583 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0187ac96-bd8c-4260-86be-1d2442b47dfa","Type":"ContainerStarted","Data":"3f67a15d99cc890e340b68b47a339cf4546d109309f9da7e2f5c903aa2bd08e2"} Mar 09 19:06:05 crc kubenswrapper[4821]: I0309 19:06:05.516334 4821 generic.go:334] "Generic (PLEG): container finished" podID="2e107db2-5948-4a60-9745-59aae128e9b6" containerID="d987bcd0f29f1484414fd65e0f38e297988068006e4c624b3cb83f7e9a171d86" exitCode=0 Mar 09 19:06:05 crc kubenswrapper[4821]: I0309 19:06:05.516416 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-btk74" event={"ID":"2e107db2-5948-4a60-9745-59aae128e9b6","Type":"ContainerDied","Data":"d987bcd0f29f1484414fd65e0f38e297988068006e4c624b3cb83f7e9a171d86"} Mar 09 19:06:05 crc kubenswrapper[4821]: I0309 19:06:05.516458 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-btk74" event={"ID":"2e107db2-5948-4a60-9745-59aae128e9b6","Type":"ContainerStarted","Data":"9f58154eccbea0449a76262e89c1f21e71fec74011656a230a9414531319b848"} Mar 09 19:06:05 crc kubenswrapper[4821]: I0309 19:06:05.518066 4821 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-infra/auto-csr-approver-29551386-fqtgc" event={"ID":"116854a1-ac31-4634-8373-53ce3889d5e0","Type":"ContainerDied","Data":"d9414b2081caf9153d98a7b47418b29f002701ab703f0c85093af6aae0a86707"} Mar 09 19:06:05 crc kubenswrapper[4821]: I0309 19:06:05.518189 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9414b2081caf9153d98a7b47418b29f002701ab703f0c85093af6aae0a86707" Mar 09 19:06:05 crc kubenswrapper[4821]: I0309 19:06:05.518294 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551386-fqtgc" Mar 09 19:06:05 crc kubenswrapper[4821]: I0309 19:06:05.520966 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-test-account-create-update-jdcqq" event={"ID":"b4de929b-9a1a-4d74-a3c0-06bfea05f227","Type":"ContainerStarted","Data":"1abbb20b71f729c7c4eec46791df4b61176122d3e5f7c58df1304e9632f170d8"} Mar 09 19:06:05 crc kubenswrapper[4821]: I0309 19:06:05.521008 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-test-account-create-update-jdcqq" event={"ID":"b4de929b-9a1a-4d74-a3c0-06bfea05f227","Type":"ContainerStarted","Data":"759e5dfcc3ebb3ee4a90c536f9c047a4a78841be1f422ffb80becbea081686f3"} Mar 09 19:06:05 crc kubenswrapper[4821]: I0309 19:06:05.564046 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4596b6a6-f94c-4ec2-825c-ff6acc262fe9" path="/var/lib/kubelet/pods/4596b6a6-f94c-4ec2-825c-ff6acc262fe9/volumes" Mar 09 19:06:05 crc kubenswrapper[4821]: I0309 19:06:05.564638 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfa54031-dc56-46bc-b18d-63e0437e1ce3" path="/var/lib/kubelet/pods/bfa54031-dc56-46bc-b18d-63e0437e1ce3/volumes" Mar 09 19:06:05 crc kubenswrapper[4821]: I0309 19:06:05.565202 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6a810e4-8fad-4ddf-a008-27bdbca3459d" 
path="/var/lib/kubelet/pods/c6a810e4-8fad-4ddf-a008-27bdbca3459d/volumes" Mar 09 19:06:05 crc kubenswrapper[4821]: I0309 19:06:05.565714 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb767d1d-2fb3-4a67-811e-c6646b50e3b2" path="/var/lib/kubelet/pods/fb767d1d-2fb3-4a67-811e-c6646b50e3b2/volumes" Mar 09 19:06:05 crc kubenswrapper[4821]: I0309 19:06:05.567554 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-test-account-create-update-jdcqq" podStartSLOduration=2.567535113 podStartE2EDuration="2.567535113s" podCreationTimestamp="2026-03-09 19:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:06:05.559535476 +0000 UTC m=+2502.720911342" watchObservedRunningTime="2026-03-09 19:06:05.567535113 +0000 UTC m=+2502.728910969" Mar 09 19:06:05 crc kubenswrapper[4821]: I0309 19:06:05.685206 4821 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 09 19:06:06 crc kubenswrapper[4821]: I0309 19:06:06.535288 4821 generic.go:334] "Generic (PLEG): container finished" podID="b4de929b-9a1a-4d74-a3c0-06bfea05f227" containerID="1abbb20b71f729c7c4eec46791df4b61176122d3e5f7c58df1304e9632f170d8" exitCode=0 Mar 09 19:06:06 crc kubenswrapper[4821]: I0309 19:06:06.535362 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-test-account-create-update-jdcqq" event={"ID":"b4de929b-9a1a-4d74-a3c0-06bfea05f227","Type":"ContainerDied","Data":"1abbb20b71f729c7c4eec46791df4b61176122d3e5f7c58df1304e9632f170d8"} Mar 09 19:06:06 crc kubenswrapper[4821]: I0309 19:06:06.551979 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0187ac96-bd8c-4260-86be-1d2442b47dfa","Type":"ContainerStarted","Data":"6e395482c7c10f115aed05af2b36400203065a2c72a641b0e1b2700aa9bece9d"} Mar 09 
19:06:06 crc kubenswrapper[4821]: I0309 19:06:06.908440 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-btk74"
Mar 09 19:06:06 crc kubenswrapper[4821]: I0309 19:06:06.978142 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68pmd\" (UniqueName: \"kubernetes.io/projected/2e107db2-5948-4a60-9745-59aae128e9b6-kube-api-access-68pmd\") pod \"2e107db2-5948-4a60-9745-59aae128e9b6\" (UID: \"2e107db2-5948-4a60-9745-59aae128e9b6\") "
Mar 09 19:06:06 crc kubenswrapper[4821]: I0309 19:06:06.978256 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e107db2-5948-4a60-9745-59aae128e9b6-operator-scripts\") pod \"2e107db2-5948-4a60-9745-59aae128e9b6\" (UID: \"2e107db2-5948-4a60-9745-59aae128e9b6\") "
Mar 09 19:06:06 crc kubenswrapper[4821]: I0309 19:06:06.979377 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e107db2-5948-4a60-9745-59aae128e9b6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2e107db2-5948-4a60-9745-59aae128e9b6" (UID: "2e107db2-5948-4a60-9745-59aae128e9b6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 19:06:06 crc kubenswrapper[4821]: I0309 19:06:06.983541 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e107db2-5948-4a60-9745-59aae128e9b6-kube-api-access-68pmd" (OuterVolumeSpecName: "kube-api-access-68pmd") pod "2e107db2-5948-4a60-9745-59aae128e9b6" (UID: "2e107db2-5948-4a60-9745-59aae128e9b6"). InnerVolumeSpecName "kube-api-access-68pmd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:06:07 crc kubenswrapper[4821]: I0309 19:06:07.080125 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68pmd\" (UniqueName: \"kubernetes.io/projected/2e107db2-5948-4a60-9745-59aae128e9b6-kube-api-access-68pmd\") on node \"crc\" DevicePath \"\""
Mar 09 19:06:07 crc kubenswrapper[4821]: I0309 19:06:07.080172 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e107db2-5948-4a60-9745-59aae128e9b6-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 09 19:06:07 crc kubenswrapper[4821]: I0309 19:06:07.572453 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0187ac96-bd8c-4260-86be-1d2442b47dfa","Type":"ContainerStarted","Data":"cb55717034b32f5cec1e96e6f1a01467c1017528bda1b28cfe0c91a74bf06790"}
Mar 09 19:06:07 crc kubenswrapper[4821]: I0309 19:06:07.573385 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:06:07 crc kubenswrapper[4821]: I0309 19:06:07.577164 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-btk74" event={"ID":"2e107db2-5948-4a60-9745-59aae128e9b6","Type":"ContainerDied","Data":"9f58154eccbea0449a76262e89c1f21e71fec74011656a230a9414531319b848"}
Mar 09 19:06:07 crc kubenswrapper[4821]: I0309 19:06:07.577212 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f58154eccbea0449a76262e89c1f21e71fec74011656a230a9414531319b848"
Mar 09 19:06:07 crc kubenswrapper[4821]: I0309 19:06:07.577180 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-btk74"
Mar 09 19:06:07 crc kubenswrapper[4821]: I0309 19:06:07.845490 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-jdcqq"
Mar 09 19:06:07 crc kubenswrapper[4821]: I0309 19:06:07.867426 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.131517612 podStartE2EDuration="5.867398309s" podCreationTimestamp="2026-03-09 19:06:02 +0000 UTC" firstStartedPulling="2026-03-09 19:06:03.573313832 +0000 UTC m=+2500.734689688" lastFinishedPulling="2026-03-09 19:06:07.309194529 +0000 UTC m=+2504.470570385" observedRunningTime="2026-03-09 19:06:07.608564793 +0000 UTC m=+2504.769940649" watchObservedRunningTime="2026-03-09 19:06:07.867398309 +0000 UTC m=+2505.028774165"
Mar 09 19:06:07 crc kubenswrapper[4821]: I0309 19:06:07.892869 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4de929b-9a1a-4d74-a3c0-06bfea05f227-operator-scripts\") pod \"b4de929b-9a1a-4d74-a3c0-06bfea05f227\" (UID: \"b4de929b-9a1a-4d74-a3c0-06bfea05f227\") "
Mar 09 19:06:07 crc kubenswrapper[4821]: I0309 19:06:07.892988 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-668md\" (UniqueName: \"kubernetes.io/projected/b4de929b-9a1a-4d74-a3c0-06bfea05f227-kube-api-access-668md\") pod \"b4de929b-9a1a-4d74-a3c0-06bfea05f227\" (UID: \"b4de929b-9a1a-4d74-a3c0-06bfea05f227\") "
Mar 09 19:06:07 crc kubenswrapper[4821]: I0309 19:06:07.898860 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4de929b-9a1a-4d74-a3c0-06bfea05f227-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b4de929b-9a1a-4d74-a3c0-06bfea05f227" (UID: "b4de929b-9a1a-4d74-a3c0-06bfea05f227"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 19:06:07 crc kubenswrapper[4821]: I0309 19:06:07.901463 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4de929b-9a1a-4d74-a3c0-06bfea05f227-kube-api-access-668md" (OuterVolumeSpecName: "kube-api-access-668md") pod "b4de929b-9a1a-4d74-a3c0-06bfea05f227" (UID: "b4de929b-9a1a-4d74-a3c0-06bfea05f227"). InnerVolumeSpecName "kube-api-access-668md". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:06:07 crc kubenswrapper[4821]: I0309 19:06:07.997277 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4de929b-9a1a-4d74-a3c0-06bfea05f227-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 09 19:06:07 crc kubenswrapper[4821]: I0309 19:06:07.997332 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-668md\" (UniqueName: \"kubernetes.io/projected/b4de929b-9a1a-4d74-a3c0-06bfea05f227-kube-api-access-668md\") on node \"crc\" DevicePath \"\""
Mar 09 19:06:08 crc kubenswrapper[4821]: I0309 19:06:08.252775 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:06:08 crc kubenswrapper[4821]: I0309 19:06:08.302097 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/94909197-ed08-41db-ac95-ad9bfcd5df75-cert-memcached-mtls\") pod \"94909197-ed08-41db-ac95-ad9bfcd5df75\" (UID: \"94909197-ed08-41db-ac95-ad9bfcd5df75\") "
Mar 09 19:06:08 crc kubenswrapper[4821]: I0309 19:06:08.302242 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94909197-ed08-41db-ac95-ad9bfcd5df75-config-data\") pod \"94909197-ed08-41db-ac95-ad9bfcd5df75\" (UID: \"94909197-ed08-41db-ac95-ad9bfcd5df75\") "
Mar 09 19:06:08 crc kubenswrapper[4821]: I0309 19:06:08.302917 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/94909197-ed08-41db-ac95-ad9bfcd5df75-custom-prometheus-ca\") pod \"94909197-ed08-41db-ac95-ad9bfcd5df75\" (UID: \"94909197-ed08-41db-ac95-ad9bfcd5df75\") "
Mar 09 19:06:08 crc kubenswrapper[4821]: I0309 19:06:08.302986 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jq24w\" (UniqueName: \"kubernetes.io/projected/94909197-ed08-41db-ac95-ad9bfcd5df75-kube-api-access-jq24w\") pod \"94909197-ed08-41db-ac95-ad9bfcd5df75\" (UID: \"94909197-ed08-41db-ac95-ad9bfcd5df75\") "
Mar 09 19:06:08 crc kubenswrapper[4821]: I0309 19:06:08.303052 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94909197-ed08-41db-ac95-ad9bfcd5df75-combined-ca-bundle\") pod \"94909197-ed08-41db-ac95-ad9bfcd5df75\" (UID: \"94909197-ed08-41db-ac95-ad9bfcd5df75\") "
Mar 09 19:06:08 crc kubenswrapper[4821]: I0309 19:06:08.303080 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94909197-ed08-41db-ac95-ad9bfcd5df75-logs\") pod \"94909197-ed08-41db-ac95-ad9bfcd5df75\" (UID: \"94909197-ed08-41db-ac95-ad9bfcd5df75\") "
Mar 09 19:06:08 crc kubenswrapper[4821]: I0309 19:06:08.304076 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94909197-ed08-41db-ac95-ad9bfcd5df75-logs" (OuterVolumeSpecName: "logs") pod "94909197-ed08-41db-ac95-ad9bfcd5df75" (UID: "94909197-ed08-41db-ac95-ad9bfcd5df75"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:06:08 crc kubenswrapper[4821]: I0309 19:06:08.306736 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94909197-ed08-41db-ac95-ad9bfcd5df75-kube-api-access-jq24w" (OuterVolumeSpecName: "kube-api-access-jq24w") pod "94909197-ed08-41db-ac95-ad9bfcd5df75" (UID: "94909197-ed08-41db-ac95-ad9bfcd5df75"). InnerVolumeSpecName "kube-api-access-jq24w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:06:08 crc kubenswrapper[4821]: I0309 19:06:08.325376 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94909197-ed08-41db-ac95-ad9bfcd5df75-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94909197-ed08-41db-ac95-ad9bfcd5df75" (UID: "94909197-ed08-41db-ac95-ad9bfcd5df75"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:06:08 crc kubenswrapper[4821]: I0309 19:06:08.341717 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94909197-ed08-41db-ac95-ad9bfcd5df75-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "94909197-ed08-41db-ac95-ad9bfcd5df75" (UID: "94909197-ed08-41db-ac95-ad9bfcd5df75"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:06:08 crc kubenswrapper[4821]: I0309 19:06:08.348936 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94909197-ed08-41db-ac95-ad9bfcd5df75-config-data" (OuterVolumeSpecName: "config-data") pod "94909197-ed08-41db-ac95-ad9bfcd5df75" (UID: "94909197-ed08-41db-ac95-ad9bfcd5df75"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:06:08 crc kubenswrapper[4821]: I0309 19:06:08.357102 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94909197-ed08-41db-ac95-ad9bfcd5df75-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "94909197-ed08-41db-ac95-ad9bfcd5df75" (UID: "94909197-ed08-41db-ac95-ad9bfcd5df75"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:06:08 crc kubenswrapper[4821]: I0309 19:06:08.412301 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94909197-ed08-41db-ac95-ad9bfcd5df75-config-data\") on node \"crc\" DevicePath \"\""
Mar 09 19:06:08 crc kubenswrapper[4821]: I0309 19:06:08.412362 4821 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/94909197-ed08-41db-ac95-ad9bfcd5df75-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Mar 09 19:06:08 crc kubenswrapper[4821]: I0309 19:06:08.412377 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jq24w\" (UniqueName: \"kubernetes.io/projected/94909197-ed08-41db-ac95-ad9bfcd5df75-kube-api-access-jq24w\") on node \"crc\" DevicePath \"\""
Mar 09 19:06:08 crc kubenswrapper[4821]: I0309 19:06:08.412388 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94909197-ed08-41db-ac95-ad9bfcd5df75-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 19:06:08 crc kubenswrapper[4821]: I0309 19:06:08.412403 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94909197-ed08-41db-ac95-ad9bfcd5df75-logs\") on node \"crc\" DevicePath \"\""
Mar 09 19:06:08 crc kubenswrapper[4821]: I0309 19:06:08.412412 4821 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/94909197-ed08-41db-ac95-ad9bfcd5df75-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Mar 09 19:06:08 crc kubenswrapper[4821]: I0309 19:06:08.606281 4821 generic.go:334] "Generic (PLEG): container finished" podID="94909197-ed08-41db-ac95-ad9bfcd5df75" containerID="ca24229478cd3033e84520fedbf7384567b58c7ce42f51e5b78c109ae524f5ac" exitCode=0
Mar 09 19:06:08 crc kubenswrapper[4821]: I0309 19:06:08.606388 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"94909197-ed08-41db-ac95-ad9bfcd5df75","Type":"ContainerDied","Data":"ca24229478cd3033e84520fedbf7384567b58c7ce42f51e5b78c109ae524f5ac"}
Mar 09 19:06:08 crc kubenswrapper[4821]: I0309 19:06:08.606420 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"94909197-ed08-41db-ac95-ad9bfcd5df75","Type":"ContainerDied","Data":"1eba9bbb17180621f1d9aaef96770528d891aa7c78ec3d4d3dc83c415dfba553"}
Mar 09 19:06:08 crc kubenswrapper[4821]: I0309 19:06:08.606438 4821 scope.go:117] "RemoveContainer" containerID="ca24229478cd3033e84520fedbf7384567b58c7ce42f51e5b78c109ae524f5ac"
Mar 09 19:06:08 crc kubenswrapper[4821]: I0309 19:06:08.608261 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:06:08 crc kubenswrapper[4821]: I0309 19:06:08.609179 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-test-account-create-update-jdcqq" event={"ID":"b4de929b-9a1a-4d74-a3c0-06bfea05f227","Type":"ContainerDied","Data":"759e5dfcc3ebb3ee4a90c536f9c047a4a78841be1f422ffb80becbea081686f3"}
Mar 09 19:06:08 crc kubenswrapper[4821]: I0309 19:06:08.609221 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="759e5dfcc3ebb3ee4a90c536f9c047a4a78841be1f422ffb80becbea081686f3"
Mar 09 19:06:08 crc kubenswrapper[4821]: I0309 19:06:08.611060 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-jdcqq"
Mar 09 19:06:08 crc kubenswrapper[4821]: I0309 19:06:08.658871 4821 scope.go:117] "RemoveContainer" containerID="ca24229478cd3033e84520fedbf7384567b58c7ce42f51e5b78c109ae524f5ac"
Mar 09 19:06:08 crc kubenswrapper[4821]: E0309 19:06:08.660154 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca24229478cd3033e84520fedbf7384567b58c7ce42f51e5b78c109ae524f5ac\": container with ID starting with ca24229478cd3033e84520fedbf7384567b58c7ce42f51e5b78c109ae524f5ac not found: ID does not exist" containerID="ca24229478cd3033e84520fedbf7384567b58c7ce42f51e5b78c109ae524f5ac"
Mar 09 19:06:08 crc kubenswrapper[4821]: I0309 19:06:08.660290 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca24229478cd3033e84520fedbf7384567b58c7ce42f51e5b78c109ae524f5ac"} err="failed to get container status \"ca24229478cd3033e84520fedbf7384567b58c7ce42f51e5b78c109ae524f5ac\": rpc error: code = NotFound desc = could not find container \"ca24229478cd3033e84520fedbf7384567b58c7ce42f51e5b78c109ae524f5ac\": container with ID starting with ca24229478cd3033e84520fedbf7384567b58c7ce42f51e5b78c109ae524f5ac not found: ID does not exist"
Mar 09 19:06:08 crc kubenswrapper[4821]: I0309 19:06:08.676464 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Mar 09 19:06:08 crc kubenswrapper[4821]: I0309 19:06:08.686207 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Mar 09 19:06:09 crc kubenswrapper[4821]: I0309 19:06:09.145573 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-t72sq"]
Mar 09 19:06:09 crc kubenswrapper[4821]: E0309 19:06:09.145995 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94909197-ed08-41db-ac95-ad9bfcd5df75" containerName="watcher-decision-engine"
Mar 09 19:06:09 crc kubenswrapper[4821]: I0309 19:06:09.146016 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="94909197-ed08-41db-ac95-ad9bfcd5df75" containerName="watcher-decision-engine"
Mar 09 19:06:09 crc kubenswrapper[4821]: E0309 19:06:09.146029 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="116854a1-ac31-4634-8373-53ce3889d5e0" containerName="oc"
Mar 09 19:06:09 crc kubenswrapper[4821]: I0309 19:06:09.146035 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="116854a1-ac31-4634-8373-53ce3889d5e0" containerName="oc"
Mar 09 19:06:09 crc kubenswrapper[4821]: E0309 19:06:09.146057 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e107db2-5948-4a60-9745-59aae128e9b6" containerName="mariadb-database-create"
Mar 09 19:06:09 crc kubenswrapper[4821]: I0309 19:06:09.146063 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e107db2-5948-4a60-9745-59aae128e9b6" containerName="mariadb-database-create"
Mar 09 19:06:09 crc kubenswrapper[4821]: E0309 19:06:09.146080 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4de929b-9a1a-4d74-a3c0-06bfea05f227" containerName="mariadb-account-create-update"
Mar 09 19:06:09 crc kubenswrapper[4821]: I0309 19:06:09.146085 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4de929b-9a1a-4d74-a3c0-06bfea05f227" containerName="mariadb-account-create-update"
Mar 09 19:06:09 crc kubenswrapper[4821]: I0309 19:06:09.146238 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="94909197-ed08-41db-ac95-ad9bfcd5df75" containerName="watcher-decision-engine"
Mar 09 19:06:09 crc kubenswrapper[4821]: I0309 19:06:09.146247 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e107db2-5948-4a60-9745-59aae128e9b6" containerName="mariadb-database-create"
Mar 09 19:06:09 crc kubenswrapper[4821]: I0309 19:06:09.146257 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="116854a1-ac31-4634-8373-53ce3889d5e0" containerName="oc"
Mar 09 19:06:09 crc kubenswrapper[4821]: I0309 19:06:09.146273 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4de929b-9a1a-4d74-a3c0-06bfea05f227" containerName="mariadb-account-create-update"
Mar 09 19:06:09 crc kubenswrapper[4821]: I0309 19:06:09.146886 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-t72sq"
Mar 09 19:06:09 crc kubenswrapper[4821]: I0309 19:06:09.149022 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data"
Mar 09 19:06:09 crc kubenswrapper[4821]: I0309 19:06:09.149040 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-cjvj9"
Mar 09 19:06:09 crc kubenswrapper[4821]: I0309 19:06:09.164148 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-t72sq"]
Mar 09 19:06:09 crc kubenswrapper[4821]: I0309 19:06:09.231906 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d52339b1-0145-4776-aa3d-c9d11a14ab26-db-sync-config-data\") pod \"watcher-kuttl-db-sync-t72sq\" (UID: \"d52339b1-0145-4776-aa3d-c9d11a14ab26\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-t72sq"
Mar 09 19:06:09 crc kubenswrapper[4821]: I0309 19:06:09.231946 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d52339b1-0145-4776-aa3d-c9d11a14ab26-config-data\") pod \"watcher-kuttl-db-sync-t72sq\" (UID: \"d52339b1-0145-4776-aa3d-c9d11a14ab26\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-t72sq"
Mar 09 19:06:09 crc kubenswrapper[4821]: I0309 19:06:09.231972 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6vr8\" (UniqueName: \"kubernetes.io/projected/d52339b1-0145-4776-aa3d-c9d11a14ab26-kube-api-access-r6vr8\") pod \"watcher-kuttl-db-sync-t72sq\" (UID: \"d52339b1-0145-4776-aa3d-c9d11a14ab26\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-t72sq"
Mar 09 19:06:09 crc kubenswrapper[4821]: I0309 19:06:09.232416 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d52339b1-0145-4776-aa3d-c9d11a14ab26-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-t72sq\" (UID: \"d52339b1-0145-4776-aa3d-c9d11a14ab26\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-t72sq"
Mar 09 19:06:09 crc kubenswrapper[4821]: I0309 19:06:09.334521 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d52339b1-0145-4776-aa3d-c9d11a14ab26-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-t72sq\" (UID: \"d52339b1-0145-4776-aa3d-c9d11a14ab26\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-t72sq"
Mar 09 19:06:09 crc kubenswrapper[4821]: I0309 19:06:09.334586 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d52339b1-0145-4776-aa3d-c9d11a14ab26-db-sync-config-data\") pod \"watcher-kuttl-db-sync-t72sq\" (UID: \"d52339b1-0145-4776-aa3d-c9d11a14ab26\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-t72sq"
Mar 09 19:06:09 crc kubenswrapper[4821]: I0309 19:06:09.334619 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d52339b1-0145-4776-aa3d-c9d11a14ab26-config-data\") pod \"watcher-kuttl-db-sync-t72sq\" (UID: \"d52339b1-0145-4776-aa3d-c9d11a14ab26\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-t72sq"
Mar 09 19:06:09 crc kubenswrapper[4821]: I0309 19:06:09.334643 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6vr8\" (UniqueName: \"kubernetes.io/projected/d52339b1-0145-4776-aa3d-c9d11a14ab26-kube-api-access-r6vr8\") pod \"watcher-kuttl-db-sync-t72sq\" (UID: \"d52339b1-0145-4776-aa3d-c9d11a14ab26\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-t72sq"
Mar 09 19:06:09 crc kubenswrapper[4821]: I0309 19:06:09.350764 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d52339b1-0145-4776-aa3d-c9d11a14ab26-db-sync-config-data\") pod \"watcher-kuttl-db-sync-t72sq\" (UID: \"d52339b1-0145-4776-aa3d-c9d11a14ab26\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-t72sq"
Mar 09 19:06:09 crc kubenswrapper[4821]: I0309 19:06:09.351170 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d52339b1-0145-4776-aa3d-c9d11a14ab26-config-data\") pod \"watcher-kuttl-db-sync-t72sq\" (UID: \"d52339b1-0145-4776-aa3d-c9d11a14ab26\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-t72sq"
Mar 09 19:06:09 crc kubenswrapper[4821]: I0309 19:06:09.352931 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d52339b1-0145-4776-aa3d-c9d11a14ab26-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-t72sq\" (UID: \"d52339b1-0145-4776-aa3d-c9d11a14ab26\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-t72sq"
Mar 09 19:06:09 crc kubenswrapper[4821]: I0309 19:06:09.355269 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6vr8\" (UniqueName: \"kubernetes.io/projected/d52339b1-0145-4776-aa3d-c9d11a14ab26-kube-api-access-r6vr8\") pod \"watcher-kuttl-db-sync-t72sq\" (UID: \"d52339b1-0145-4776-aa3d-c9d11a14ab26\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-t72sq"
Mar 09 19:06:09 crc kubenswrapper[4821]: I0309 19:06:09.465311 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-t72sq"
Mar 09 19:06:09 crc kubenswrapper[4821]: I0309 19:06:09.573066 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94909197-ed08-41db-ac95-ad9bfcd5df75" path="/var/lib/kubelet/pods/94909197-ed08-41db-ac95-ad9bfcd5df75/volumes"
Mar 09 19:06:09 crc kubenswrapper[4821]: I0309 19:06:09.961226 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-t72sq"]
Mar 09 19:06:09 crc kubenswrapper[4821]: W0309 19:06:09.961225 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd52339b1_0145_4776_aa3d_c9d11a14ab26.slice/crio-adeb670998f023e92fa8842e00c36545dab44a03eea0312eed3ee00afa1e6d7b WatchSource:0}: Error finding container adeb670998f023e92fa8842e00c36545dab44a03eea0312eed3ee00afa1e6d7b: Status 404 returned error can't find the container with id adeb670998f023e92fa8842e00c36545dab44a03eea0312eed3ee00afa1e6d7b
Mar 09 19:06:10 crc kubenswrapper[4821]: I0309 19:06:10.637515 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-t72sq" event={"ID":"d52339b1-0145-4776-aa3d-c9d11a14ab26","Type":"ContainerStarted","Data":"4e180a85cf5de8d6be76eb227c929cd0a8d2a2ffca0f8964dd741d5e4076eabc"}
Mar 09 19:06:10 crc kubenswrapper[4821]: I0309 19:06:10.637854 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-t72sq" event={"ID":"d52339b1-0145-4776-aa3d-c9d11a14ab26","Type":"ContainerStarted","Data":"adeb670998f023e92fa8842e00c36545dab44a03eea0312eed3ee00afa1e6d7b"}
Mar 09 19:06:10 crc kubenswrapper[4821]: I0309 19:06:10.661977 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-t72sq" podStartSLOduration=1.661951972 podStartE2EDuration="1.661951972s" podCreationTimestamp="2026-03-09 19:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:06:10.654217802 +0000 UTC m=+2507.815593668" watchObservedRunningTime="2026-03-09 19:06:10.661951972 +0000 UTC m=+2507.823327828"
Mar 09 19:06:12 crc kubenswrapper[4821]: I0309 19:06:12.656115 4821 generic.go:334] "Generic (PLEG): container finished" podID="d52339b1-0145-4776-aa3d-c9d11a14ab26" containerID="4e180a85cf5de8d6be76eb227c929cd0a8d2a2ffca0f8964dd741d5e4076eabc" exitCode=0
Mar 09 19:06:12 crc kubenswrapper[4821]: I0309 19:06:12.656487 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-t72sq" event={"ID":"d52339b1-0145-4776-aa3d-c9d11a14ab26","Type":"ContainerDied","Data":"4e180a85cf5de8d6be76eb227c929cd0a8d2a2ffca0f8964dd741d5e4076eabc"}
Mar 09 19:06:12 crc kubenswrapper[4821]: E0309 19:06:12.695164 4821 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd52339b1_0145_4776_aa3d_c9d11a14ab26.slice/crio-4e180a85cf5de8d6be76eb227c929cd0a8d2a2ffca0f8964dd741d5e4076eabc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd52339b1_0145_4776_aa3d_c9d11a14ab26.slice/crio-conmon-4e180a85cf5de8d6be76eb227c929cd0a8d2a2ffca0f8964dd741d5e4076eabc.scope\": RecentStats: unable to find data in memory cache]"
Mar 09 19:06:14 crc kubenswrapper[4821]: I0309 19:06:14.048819 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-t72sq"
Mar 09 19:06:14 crc kubenswrapper[4821]: I0309 19:06:14.113617 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d52339b1-0145-4776-aa3d-c9d11a14ab26-db-sync-config-data\") pod \"d52339b1-0145-4776-aa3d-c9d11a14ab26\" (UID: \"d52339b1-0145-4776-aa3d-c9d11a14ab26\") "
Mar 09 19:06:14 crc kubenswrapper[4821]: I0309 19:06:14.113735 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6vr8\" (UniqueName: \"kubernetes.io/projected/d52339b1-0145-4776-aa3d-c9d11a14ab26-kube-api-access-r6vr8\") pod \"d52339b1-0145-4776-aa3d-c9d11a14ab26\" (UID: \"d52339b1-0145-4776-aa3d-c9d11a14ab26\") "
Mar 09 19:06:14 crc kubenswrapper[4821]: I0309 19:06:14.113782 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d52339b1-0145-4776-aa3d-c9d11a14ab26-combined-ca-bundle\") pod \"d52339b1-0145-4776-aa3d-c9d11a14ab26\" (UID: \"d52339b1-0145-4776-aa3d-c9d11a14ab26\") "
Mar 09 19:06:14 crc kubenswrapper[4821]: I0309 19:06:14.113883 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d52339b1-0145-4776-aa3d-c9d11a14ab26-config-data\") pod \"d52339b1-0145-4776-aa3d-c9d11a14ab26\" (UID: \"d52339b1-0145-4776-aa3d-c9d11a14ab26\") "
Mar 09 19:06:14 crc kubenswrapper[4821]: I0309 19:06:14.127589 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d52339b1-0145-4776-aa3d-c9d11a14ab26-kube-api-access-r6vr8" (OuterVolumeSpecName: "kube-api-access-r6vr8") pod "d52339b1-0145-4776-aa3d-c9d11a14ab26" (UID: "d52339b1-0145-4776-aa3d-c9d11a14ab26"). InnerVolumeSpecName "kube-api-access-r6vr8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:06:14 crc kubenswrapper[4821]: I0309 19:06:14.127703 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d52339b1-0145-4776-aa3d-c9d11a14ab26-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d52339b1-0145-4776-aa3d-c9d11a14ab26" (UID: "d52339b1-0145-4776-aa3d-c9d11a14ab26"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:06:14 crc kubenswrapper[4821]: I0309 19:06:14.141215 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d52339b1-0145-4776-aa3d-c9d11a14ab26-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d52339b1-0145-4776-aa3d-c9d11a14ab26" (UID: "d52339b1-0145-4776-aa3d-c9d11a14ab26"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:06:14 crc kubenswrapper[4821]: I0309 19:06:14.159576 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d52339b1-0145-4776-aa3d-c9d11a14ab26-config-data" (OuterVolumeSpecName: "config-data") pod "d52339b1-0145-4776-aa3d-c9d11a14ab26" (UID: "d52339b1-0145-4776-aa3d-c9d11a14ab26"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:06:14 crc kubenswrapper[4821]: I0309 19:06:14.216349 4821 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d52339b1-0145-4776-aa3d-c9d11a14ab26-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Mar 09 19:06:14 crc kubenswrapper[4821]: I0309 19:06:14.216396 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6vr8\" (UniqueName: \"kubernetes.io/projected/d52339b1-0145-4776-aa3d-c9d11a14ab26-kube-api-access-r6vr8\") on node \"crc\" DevicePath \"\""
Mar 09 19:06:14 crc kubenswrapper[4821]: I0309 19:06:14.216416 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d52339b1-0145-4776-aa3d-c9d11a14ab26-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 19:06:14 crc kubenswrapper[4821]: I0309 19:06:14.216434 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d52339b1-0145-4776-aa3d-c9d11a14ab26-config-data\") on node \"crc\" DevicePath \"\""
Mar 09 19:06:14 crc kubenswrapper[4821]: I0309 19:06:14.678901 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-t72sq" event={"ID":"d52339b1-0145-4776-aa3d-c9d11a14ab26","Type":"ContainerDied","Data":"adeb670998f023e92fa8842e00c36545dab44a03eea0312eed3ee00afa1e6d7b"}
Mar 09 19:06:14 crc kubenswrapper[4821]: I0309 19:06:14.679256 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adeb670998f023e92fa8842e00c36545dab44a03eea0312eed3ee00afa1e6d7b"
Mar 09 19:06:14 crc kubenswrapper[4821]: I0309 19:06:14.679035 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-t72sq"
Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.034774 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Mar 09 19:06:15 crc kubenswrapper[4821]: E0309 19:06:15.035251 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d52339b1-0145-4776-aa3d-c9d11a14ab26" containerName="watcher-kuttl-db-sync"
Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.035278 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="d52339b1-0145-4776-aa3d-c9d11a14ab26" containerName="watcher-kuttl-db-sync"
Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.035596 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="d52339b1-0145-4776-aa3d-c9d11a14ab26" containerName="watcher-kuttl-db-sync"
Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.036326 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.039983 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data"
Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.040232 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-cjvj9"
Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.046247 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.099500 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.100892 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.105119 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data"
Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.126618 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.130181 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/564ea886-2427-45ab-be4a-adf79b21f4d7-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"564ea886-2427-45ab-be4a-adf79b21f4d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.130224 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564ea886-2427-45ab-be4a-adf79b21f4d7-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"564ea886-2427-45ab-be4a-adf79b21f4d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.130267 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564ea886-2427-45ab-be4a-adf79b21f4d7-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"564ea886-2427-45ab-be4a-adf79b21f4d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.130302 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a25ba8ed-fec0-4e0d-9006-4aef28a83e53-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"a25ba8ed-fec0-4e0d-9006-4aef28a83e53\") "
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.130392 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a25ba8ed-fec0-4e0d-9006-4aef28a83e53-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"a25ba8ed-fec0-4e0d-9006-4aef28a83e53\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.130413 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/564ea886-2427-45ab-be4a-adf79b21f4d7-logs\") pod \"watcher-kuttl-api-0\" (UID: \"564ea886-2427-45ab-be4a-adf79b21f4d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.130433 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4gnj\" (UniqueName: \"kubernetes.io/projected/564ea886-2427-45ab-be4a-adf79b21f4d7-kube-api-access-b4gnj\") pod \"watcher-kuttl-api-0\" (UID: \"564ea886-2427-45ab-be4a-adf79b21f4d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.130617 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8pkm\" (UniqueName: \"kubernetes.io/projected/a25ba8ed-fec0-4e0d-9006-4aef28a83e53-kube-api-access-s8pkm\") pod \"watcher-kuttl-applier-0\" (UID: \"a25ba8ed-fec0-4e0d-9006-4aef28a83e53\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.130728 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a25ba8ed-fec0-4e0d-9006-4aef28a83e53-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: 
\"a25ba8ed-fec0-4e0d-9006-4aef28a83e53\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.130786 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a25ba8ed-fec0-4e0d-9006-4aef28a83e53-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"a25ba8ed-fec0-4e0d-9006-4aef28a83e53\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.130824 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/564ea886-2427-45ab-be4a-adf79b21f4d7-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"564ea886-2427-45ab-be4a-adf79b21f4d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.141836 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.143067 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.182060 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.191842 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.195566 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.233991 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a25ba8ed-fec0-4e0d-9006-4aef28a83e53-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"a25ba8ed-fec0-4e0d-9006-4aef28a83e53\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.234046 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bff71f3d-8bd3-495b-bf6b-427931799b9d-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bff71f3d-8bd3-495b-bf6b-427931799b9d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.234090 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a25ba8ed-fec0-4e0d-9006-4aef28a83e53-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"a25ba8ed-fec0-4e0d-9006-4aef28a83e53\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.234168 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/564ea886-2427-45ab-be4a-adf79b21f4d7-logs\") pod \"watcher-kuttl-api-0\" (UID: \"564ea886-2427-45ab-be4a-adf79b21f4d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.234192 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4gnj\" (UniqueName: \"kubernetes.io/projected/564ea886-2427-45ab-be4a-adf79b21f4d7-kube-api-access-b4gnj\") pod \"watcher-kuttl-api-0\" (UID: \"564ea886-2427-45ab-be4a-adf79b21f4d7\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.234216 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8e27cb8c-920a-4141-b783-51bf80dbb332-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"8e27cb8c-920a-4141-b783-51bf80dbb332\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.234247 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e27cb8c-920a-4141-b783-51bf80dbb332-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"8e27cb8c-920a-4141-b783-51bf80dbb332\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.234289 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/bff71f3d-8bd3-495b-bf6b-427931799b9d-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bff71f3d-8bd3-495b-bf6b-427931799b9d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.234370 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8pkm\" (UniqueName: \"kubernetes.io/projected/a25ba8ed-fec0-4e0d-9006-4aef28a83e53-kube-api-access-s8pkm\") pod \"watcher-kuttl-applier-0\" (UID: \"a25ba8ed-fec0-4e0d-9006-4aef28a83e53\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.234407 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8kbc\" (UniqueName: \"kubernetes.io/projected/8e27cb8c-920a-4141-b783-51bf80dbb332-kube-api-access-l8kbc\") pod 
\"watcher-kuttl-api-1\" (UID: \"8e27cb8c-920a-4141-b783-51bf80dbb332\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.234439 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff71f3d-8bd3-495b-bf6b-427931799b9d-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bff71f3d-8bd3-495b-bf6b-427931799b9d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.234500 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8e27cb8c-920a-4141-b783-51bf80dbb332-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"8e27cb8c-920a-4141-b783-51bf80dbb332\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.234533 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bff71f3d-8bd3-495b-bf6b-427931799b9d-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bff71f3d-8bd3-495b-bf6b-427931799b9d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.234557 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a25ba8ed-fec0-4e0d-9006-4aef28a83e53-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"a25ba8ed-fec0-4e0d-9006-4aef28a83e53\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.234581 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8e27cb8c-920a-4141-b783-51bf80dbb332-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"8e27cb8c-920a-4141-b783-51bf80dbb332\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.234604 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a25ba8ed-fec0-4e0d-9006-4aef28a83e53-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"a25ba8ed-fec0-4e0d-9006-4aef28a83e53\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.234627 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/564ea886-2427-45ab-be4a-adf79b21f4d7-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"564ea886-2427-45ab-be4a-adf79b21f4d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.234659 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bff71f3d-8bd3-495b-bf6b-427931799b9d-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bff71f3d-8bd3-495b-bf6b-427931799b9d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.234684 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e27cb8c-920a-4141-b783-51bf80dbb332-logs\") pod \"watcher-kuttl-api-1\" (UID: \"8e27cb8c-920a-4141-b783-51bf80dbb332\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.234704 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: 
\"kubernetes.io/secret/564ea886-2427-45ab-be4a-adf79b21f4d7-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"564ea886-2427-45ab-be4a-adf79b21f4d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.234734 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564ea886-2427-45ab-be4a-adf79b21f4d7-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"564ea886-2427-45ab-be4a-adf79b21f4d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.234774 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdz8f\" (UniqueName: \"kubernetes.io/projected/bff71f3d-8bd3-495b-bf6b-427931799b9d-kube-api-access-cdz8f\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bff71f3d-8bd3-495b-bf6b-427931799b9d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.234806 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564ea886-2427-45ab-be4a-adf79b21f4d7-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"564ea886-2427-45ab-be4a-adf79b21f4d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.234886 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a25ba8ed-fec0-4e0d-9006-4aef28a83e53-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"a25ba8ed-fec0-4e0d-9006-4aef28a83e53\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.241138 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/564ea886-2427-45ab-be4a-adf79b21f4d7-logs\") pod \"watcher-kuttl-api-0\" (UID: \"564ea886-2427-45ab-be4a-adf79b21f4d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.249418 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.260091 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a25ba8ed-fec0-4e0d-9006-4aef28a83e53-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"a25ba8ed-fec0-4e0d-9006-4aef28a83e53\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.261416 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/564ea886-2427-45ab-be4a-adf79b21f4d7-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"564ea886-2427-45ab-be4a-adf79b21f4d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.262164 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564ea886-2427-45ab-be4a-adf79b21f4d7-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"564ea886-2427-45ab-be4a-adf79b21f4d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.265799 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564ea886-2427-45ab-be4a-adf79b21f4d7-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"564ea886-2427-45ab-be4a-adf79b21f4d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.269037 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a25ba8ed-fec0-4e0d-9006-4aef28a83e53-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"a25ba8ed-fec0-4e0d-9006-4aef28a83e53\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.271828 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4gnj\" (UniqueName: \"kubernetes.io/projected/564ea886-2427-45ab-be4a-adf79b21f4d7-kube-api-access-b4gnj\") pod \"watcher-kuttl-api-0\" (UID: \"564ea886-2427-45ab-be4a-adf79b21f4d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.274174 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a25ba8ed-fec0-4e0d-9006-4aef28a83e53-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"a25ba8ed-fec0-4e0d-9006-4aef28a83e53\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.293922 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8pkm\" (UniqueName: \"kubernetes.io/projected/a25ba8ed-fec0-4e0d-9006-4aef28a83e53-kube-api-access-s8pkm\") pod \"watcher-kuttl-applier-0\" (UID: \"a25ba8ed-fec0-4e0d-9006-4aef28a83e53\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.293957 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.293929 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/564ea886-2427-45ab-be4a-adf79b21f4d7-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"564ea886-2427-45ab-be4a-adf79b21f4d7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 
09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.336170 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bff71f3d-8bd3-495b-bf6b-427931799b9d-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bff71f3d-8bd3-495b-bf6b-427931799b9d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.336226 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e27cb8c-920a-4141-b783-51bf80dbb332-logs\") pod \"watcher-kuttl-api-1\" (UID: \"8e27cb8c-920a-4141-b783-51bf80dbb332\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.336260 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdz8f\" (UniqueName: \"kubernetes.io/projected/bff71f3d-8bd3-495b-bf6b-427931799b9d-kube-api-access-cdz8f\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bff71f3d-8bd3-495b-bf6b-427931799b9d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.336298 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bff71f3d-8bd3-495b-bf6b-427931799b9d-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bff71f3d-8bd3-495b-bf6b-427931799b9d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.336346 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8e27cb8c-920a-4141-b783-51bf80dbb332-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"8e27cb8c-920a-4141-b783-51bf80dbb332\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:06:15 
crc kubenswrapper[4821]: I0309 19:06:15.336366 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e27cb8c-920a-4141-b783-51bf80dbb332-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"8e27cb8c-920a-4141-b783-51bf80dbb332\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.336392 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/bff71f3d-8bd3-495b-bf6b-427931799b9d-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bff71f3d-8bd3-495b-bf6b-427931799b9d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.336416 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8kbc\" (UniqueName: \"kubernetes.io/projected/8e27cb8c-920a-4141-b783-51bf80dbb332-kube-api-access-l8kbc\") pod \"watcher-kuttl-api-1\" (UID: \"8e27cb8c-920a-4141-b783-51bf80dbb332\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.336432 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff71f3d-8bd3-495b-bf6b-427931799b9d-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bff71f3d-8bd3-495b-bf6b-427931799b9d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.336451 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8e27cb8c-920a-4141-b783-51bf80dbb332-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"8e27cb8c-920a-4141-b783-51bf80dbb332\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 
19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.336469 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bff71f3d-8bd3-495b-bf6b-427931799b9d-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bff71f3d-8bd3-495b-bf6b-427931799b9d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.336485 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e27cb8c-920a-4141-b783-51bf80dbb332-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"8e27cb8c-920a-4141-b783-51bf80dbb332\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.341876 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e27cb8c-920a-4141-b783-51bf80dbb332-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"8e27cb8c-920a-4141-b783-51bf80dbb332\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.342197 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e27cb8c-920a-4141-b783-51bf80dbb332-logs\") pod \"watcher-kuttl-api-1\" (UID: \"8e27cb8c-920a-4141-b783-51bf80dbb332\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.346722 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bff71f3d-8bd3-495b-bf6b-427931799b9d-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bff71f3d-8bd3-495b-bf6b-427931799b9d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.348291 4821 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bff71f3d-8bd3-495b-bf6b-427931799b9d-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bff71f3d-8bd3-495b-bf6b-427931799b9d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.353128 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bff71f3d-8bd3-495b-bf6b-427931799b9d-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bff71f3d-8bd3-495b-bf6b-427931799b9d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.353533 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.355556 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8e27cb8c-920a-4141-b783-51bf80dbb332-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"8e27cb8c-920a-4141-b783-51bf80dbb332\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.355984 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/bff71f3d-8bd3-495b-bf6b-427931799b9d-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bff71f3d-8bd3-495b-bf6b-427931799b9d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.360006 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff71f3d-8bd3-495b-bf6b-427931799b9d-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bff71f3d-8bd3-495b-bf6b-427931799b9d\") " 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.362792 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8e27cb8c-920a-4141-b783-51bf80dbb332-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"8e27cb8c-920a-4141-b783-51bf80dbb332\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.368806 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e27cb8c-920a-4141-b783-51bf80dbb332-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"8e27cb8c-920a-4141-b783-51bf80dbb332\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.370114 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdz8f\" (UniqueName: \"kubernetes.io/projected/bff71f3d-8bd3-495b-bf6b-427931799b9d-kube-api-access-cdz8f\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bff71f3d-8bd3-495b-bf6b-427931799b9d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.377607 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8kbc\" (UniqueName: \"kubernetes.io/projected/8e27cb8c-920a-4141-b783-51bf80dbb332-kube-api-access-l8kbc\") pod \"watcher-kuttl-api-1\" (UID: \"8e27cb8c-920a-4141-b783-51bf80dbb332\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.418740 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.529141 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.549959 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.871269 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:06:15 crc kubenswrapper[4821]: W0309 19:06:15.874394 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda25ba8ed_fec0_4e0d_9006_4aef28a83e53.slice/crio-ce3a641c527055dd737c4c248d7cf70d1171be4082da3e47f04e6ca753f48b02 WatchSource:0}: Error finding container ce3a641c527055dd737c4c248d7cf70d1171be4082da3e47f04e6ca753f48b02: Status 404 returned error can't find the container with id ce3a641c527055dd737c4c248d7cf70d1171be4082da3e47f04e6ca753f48b02 Mar 09 19:06:15 crc kubenswrapper[4821]: I0309 19:06:15.958265 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:06:16 crc kubenswrapper[4821]: I0309 19:06:16.038755 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:06:16 crc kubenswrapper[4821]: W0309 19:06:16.045465 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbff71f3d_8bd3_495b_bf6b_427931799b9d.slice/crio-7e0ddaef6237799ea77e2cbc0894981bf72e9a6750044dbfa9d28750d718faa1 WatchSource:0}: Error finding container 7e0ddaef6237799ea77e2cbc0894981bf72e9a6750044dbfa9d28750d718faa1: Status 404 returned error can't find the container with id 7e0ddaef6237799ea77e2cbc0894981bf72e9a6750044dbfa9d28750d718faa1 Mar 09 19:06:16 crc kubenswrapper[4821]: I0309 19:06:16.046364 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Mar 09 19:06:16 crc kubenswrapper[4821]: W0309 19:06:16.049734 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e27cb8c_920a_4141_b783_51bf80dbb332.slice/crio-808ad0ca931e7db0f9c3c625421f22ee61c47ec2afc43b82bdd0f8f6c375bb20 WatchSource:0}: Error finding container 808ad0ca931e7db0f9c3c625421f22ee61c47ec2afc43b82bdd0f8f6c375bb20: Status 404 returned error can't find the container with id 808ad0ca931e7db0f9c3c625421f22ee61c47ec2afc43b82bdd0f8f6c375bb20 Mar 09 19:06:16 crc kubenswrapper[4821]: I0309 19:06:16.727687 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"8e27cb8c-920a-4141-b783-51bf80dbb332","Type":"ContainerStarted","Data":"a123aaf4d2d51e46f945473c34ec79e761010258eaaf4b139fa83d06005ba950"} Mar 09 19:06:16 crc kubenswrapper[4821]: I0309 19:06:16.729497 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"8e27cb8c-920a-4141-b783-51bf80dbb332","Type":"ContainerStarted","Data":"72b1bee4798c5583f5266f146078855331e4ae76cbdee31dfcc328897ea9f41b"} Mar 09 19:06:16 crc kubenswrapper[4821]: I0309 19:06:16.729644 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:06:16 crc kubenswrapper[4821]: I0309 19:06:16.729740 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"8e27cb8c-920a-4141-b783-51bf80dbb332","Type":"ContainerStarted","Data":"808ad0ca931e7db0f9c3c625421f22ee61c47ec2afc43b82bdd0f8f6c375bb20"} Mar 09 19:06:16 crc kubenswrapper[4821]: I0309 19:06:16.729826 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
event={"ID":"bff71f3d-8bd3-495b-bf6b-427931799b9d","Type":"ContainerStarted","Data":"802aac49d2c4f22330c10d60fffafe11d9635ef9d07e0f5e9ba9341e2ee8ca14"} Mar 09 19:06:16 crc kubenswrapper[4821]: I0309 19:06:16.729926 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"bff71f3d-8bd3-495b-bf6b-427931799b9d","Type":"ContainerStarted","Data":"7e0ddaef6237799ea77e2cbc0894981bf72e9a6750044dbfa9d28750d718faa1"} Mar 09 19:06:16 crc kubenswrapper[4821]: I0309 19:06:16.731860 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"564ea886-2427-45ab-be4a-adf79b21f4d7","Type":"ContainerStarted","Data":"aeb7374ad6815648099b5dbd9255ff43c25f1da7e787a94b39e550868cff601a"} Mar 09 19:06:16 crc kubenswrapper[4821]: I0309 19:06:16.731890 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"564ea886-2427-45ab-be4a-adf79b21f4d7","Type":"ContainerStarted","Data":"f9bf7181772b269f0f1f443e155b2675292f96909bf29853ad2be3205c28aa67"} Mar 09 19:06:16 crc kubenswrapper[4821]: I0309 19:06:16.731904 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"564ea886-2427-45ab-be4a-adf79b21f4d7","Type":"ContainerStarted","Data":"fa3ba65314402e8e0a6dbd6642bca8477542d5434153900cb8fc61fcc5336f5c"} Mar 09 19:06:16 crc kubenswrapper[4821]: I0309 19:06:16.732094 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:06:16 crc kubenswrapper[4821]: I0309 19:06:16.734470 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"a25ba8ed-fec0-4e0d-9006-4aef28a83e53","Type":"ContainerStarted","Data":"94ef77b601843f18be5b8e4a6ea6f7d8dd433cd1df13ff0a49039a9f6a9b8a93"} Mar 09 19:06:16 crc kubenswrapper[4821]: I0309 
19:06:16.734593 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"a25ba8ed-fec0-4e0d-9006-4aef28a83e53","Type":"ContainerStarted","Data":"ce3a641c527055dd737c4c248d7cf70d1171be4082da3e47f04e6ca753f48b02"} Mar 09 19:06:16 crc kubenswrapper[4821]: I0309 19:06:16.743082 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-1" podStartSLOduration=1.743067594 podStartE2EDuration="1.743067594s" podCreationTimestamp="2026-03-09 19:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:06:16.741235924 +0000 UTC m=+2513.902611780" watchObservedRunningTime="2026-03-09 19:06:16.743067594 +0000 UTC m=+2513.904443450" Mar 09 19:06:16 crc kubenswrapper[4821]: I0309 19:06:16.766736 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=1.766718035 podStartE2EDuration="1.766718035s" podCreationTimestamp="2026-03-09 19:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:06:16.755147961 +0000 UTC m=+2513.916523827" watchObservedRunningTime="2026-03-09 19:06:16.766718035 +0000 UTC m=+2513.928093891" Mar 09 19:06:16 crc kubenswrapper[4821]: I0309 19:06:16.791499 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=1.791478106 podStartE2EDuration="1.791478106s" podCreationTimestamp="2026-03-09 19:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:06:16.785287768 +0000 UTC m=+2513.946663614" watchObservedRunningTime="2026-03-09 19:06:16.791478106 +0000 UTC m=+2513.952853962" 
Mar 09 19:06:16 crc kubenswrapper[4821]: I0309 19:06:16.816973 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=1.816950336 podStartE2EDuration="1.816950336s" podCreationTimestamp="2026-03-09 19:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:06:16.812874996 +0000 UTC m=+2513.974250852" watchObservedRunningTime="2026-03-09 19:06:16.816950336 +0000 UTC m=+2513.978326192" Mar 09 19:06:18 crc kubenswrapper[4821]: I0309 19:06:18.748143 4821 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 09 19:06:19 crc kubenswrapper[4821]: I0309 19:06:19.129939 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:06:19 crc kubenswrapper[4821]: I0309 19:06:19.298977 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:06:20 crc kubenswrapper[4821]: I0309 19:06:20.354633 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:06:20 crc kubenswrapper[4821]: I0309 19:06:20.419272 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:06:20 crc kubenswrapper[4821]: I0309 19:06:20.551635 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:06:25 crc kubenswrapper[4821]: E0309 19:06:25.262821 4821 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.74:56746->38.102.83.74:34185: read tcp 38.102.83.74:56746->38.102.83.74:34185: read: connection reset by peer Mar 09 19:06:25 crc kubenswrapper[4821]: I0309 19:06:25.355208 4821 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:06:25 crc kubenswrapper[4821]: I0309 19:06:25.385903 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:06:25 crc kubenswrapper[4821]: I0309 19:06:25.419241 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:06:25 crc kubenswrapper[4821]: I0309 19:06:25.425368 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:06:25 crc kubenswrapper[4821]: I0309 19:06:25.530848 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:06:25 crc kubenswrapper[4821]: I0309 19:06:25.562506 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:06:25 crc kubenswrapper[4821]: I0309 19:06:25.562777 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:06:25 crc kubenswrapper[4821]: I0309 19:06:25.562953 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:06:25 crc kubenswrapper[4821]: I0309 19:06:25.572375 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:06:25 crc kubenswrapper[4821]: I0309 19:06:25.854100 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:06:25 crc kubenswrapper[4821]: I0309 19:06:25.898850 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
Mar 09 19:06:25 crc kubenswrapper[4821]: I0309 19:06:25.909923 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:06:25 crc kubenswrapper[4821]: I0309 19:06:25.992736 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:06:28 crc kubenswrapper[4821]: I0309 19:06:28.180748 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:06:28 crc kubenswrapper[4821]: I0309 19:06:28.181516 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0187ac96-bd8c-4260-86be-1d2442b47dfa" containerName="ceilometer-central-agent" containerID="cri-o://a6cb2e632cc16aa4643e44670b555ba852c0af1798bcddee7b12d07fa6b68c31" gracePeriod=30 Mar 09 19:06:28 crc kubenswrapper[4821]: I0309 19:06:28.181635 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0187ac96-bd8c-4260-86be-1d2442b47dfa" containerName="sg-core" containerID="cri-o://6e395482c7c10f115aed05af2b36400203065a2c72a641b0e1b2700aa9bece9d" gracePeriod=30 Mar 09 19:06:28 crc kubenswrapper[4821]: I0309 19:06:28.181668 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0187ac96-bd8c-4260-86be-1d2442b47dfa" containerName="proxy-httpd" containerID="cri-o://cb55717034b32f5cec1e96e6f1a01467c1017528bda1b28cfe0c91a74bf06790" gracePeriod=30 Mar 09 19:06:28 crc kubenswrapper[4821]: I0309 19:06:28.181673 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0187ac96-bd8c-4260-86be-1d2442b47dfa" containerName="ceilometer-notification-agent" containerID="cri-o://3f67a15d99cc890e340b68b47a339cf4546d109309f9da7e2f5c903aa2bd08e2" gracePeriod=30 Mar 09 
19:06:28 crc kubenswrapper[4821]: I0309 19:06:28.194683 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="0187ac96-bd8c-4260-86be-1d2442b47dfa" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.228:3000/\": EOF" Mar 09 19:06:28 crc kubenswrapper[4821]: I0309 19:06:28.883199 4821 generic.go:334] "Generic (PLEG): container finished" podID="0187ac96-bd8c-4260-86be-1d2442b47dfa" containerID="cb55717034b32f5cec1e96e6f1a01467c1017528bda1b28cfe0c91a74bf06790" exitCode=0 Mar 09 19:06:28 crc kubenswrapper[4821]: I0309 19:06:28.883232 4821 generic.go:334] "Generic (PLEG): container finished" podID="0187ac96-bd8c-4260-86be-1d2442b47dfa" containerID="6e395482c7c10f115aed05af2b36400203065a2c72a641b0e1b2700aa9bece9d" exitCode=2 Mar 09 19:06:28 crc kubenswrapper[4821]: I0309 19:06:28.883240 4821 generic.go:334] "Generic (PLEG): container finished" podID="0187ac96-bd8c-4260-86be-1d2442b47dfa" containerID="3f67a15d99cc890e340b68b47a339cf4546d109309f9da7e2f5c903aa2bd08e2" exitCode=0 Mar 09 19:06:28 crc kubenswrapper[4821]: I0309 19:06:28.883250 4821 generic.go:334] "Generic (PLEG): container finished" podID="0187ac96-bd8c-4260-86be-1d2442b47dfa" containerID="a6cb2e632cc16aa4643e44670b555ba852c0af1798bcddee7b12d07fa6b68c31" exitCode=0 Mar 09 19:06:28 crc kubenswrapper[4821]: I0309 19:06:28.883269 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0187ac96-bd8c-4260-86be-1d2442b47dfa","Type":"ContainerDied","Data":"cb55717034b32f5cec1e96e6f1a01467c1017528bda1b28cfe0c91a74bf06790"} Mar 09 19:06:28 crc kubenswrapper[4821]: I0309 19:06:28.883423 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0187ac96-bd8c-4260-86be-1d2442b47dfa","Type":"ContainerDied","Data":"6e395482c7c10f115aed05af2b36400203065a2c72a641b0e1b2700aa9bece9d"} Mar 09 19:06:28 crc kubenswrapper[4821]: I0309 
19:06:28.883449 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0187ac96-bd8c-4260-86be-1d2442b47dfa","Type":"ContainerDied","Data":"3f67a15d99cc890e340b68b47a339cf4546d109309f9da7e2f5c903aa2bd08e2"} Mar 09 19:06:28 crc kubenswrapper[4821]: I0309 19:06:28.883468 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0187ac96-bd8c-4260-86be-1d2442b47dfa","Type":"ContainerDied","Data":"a6cb2e632cc16aa4643e44670b555ba852c0af1798bcddee7b12d07fa6b68c31"} Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.086486 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.194432 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0187ac96-bd8c-4260-86be-1d2442b47dfa-combined-ca-bundle\") pod \"0187ac96-bd8c-4260-86be-1d2442b47dfa\" (UID: \"0187ac96-bd8c-4260-86be-1d2442b47dfa\") " Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.194502 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkkt9\" (UniqueName: \"kubernetes.io/projected/0187ac96-bd8c-4260-86be-1d2442b47dfa-kube-api-access-lkkt9\") pod \"0187ac96-bd8c-4260-86be-1d2442b47dfa\" (UID: \"0187ac96-bd8c-4260-86be-1d2442b47dfa\") " Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.194570 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0187ac96-bd8c-4260-86be-1d2442b47dfa-scripts\") pod \"0187ac96-bd8c-4260-86be-1d2442b47dfa\" (UID: \"0187ac96-bd8c-4260-86be-1d2442b47dfa\") " Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.194608 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/0187ac96-bd8c-4260-86be-1d2442b47dfa-log-httpd\") pod \"0187ac96-bd8c-4260-86be-1d2442b47dfa\" (UID: \"0187ac96-bd8c-4260-86be-1d2442b47dfa\") " Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.194624 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0187ac96-bd8c-4260-86be-1d2442b47dfa-config-data\") pod \"0187ac96-bd8c-4260-86be-1d2442b47dfa\" (UID: \"0187ac96-bd8c-4260-86be-1d2442b47dfa\") " Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.194643 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0187ac96-bd8c-4260-86be-1d2442b47dfa-ceilometer-tls-certs\") pod \"0187ac96-bd8c-4260-86be-1d2442b47dfa\" (UID: \"0187ac96-bd8c-4260-86be-1d2442b47dfa\") " Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.194661 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0187ac96-bd8c-4260-86be-1d2442b47dfa-sg-core-conf-yaml\") pod \"0187ac96-bd8c-4260-86be-1d2442b47dfa\" (UID: \"0187ac96-bd8c-4260-86be-1d2442b47dfa\") " Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.194709 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0187ac96-bd8c-4260-86be-1d2442b47dfa-run-httpd\") pod \"0187ac96-bd8c-4260-86be-1d2442b47dfa\" (UID: \"0187ac96-bd8c-4260-86be-1d2442b47dfa\") " Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.195239 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0187ac96-bd8c-4260-86be-1d2442b47dfa-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0187ac96-bd8c-4260-86be-1d2442b47dfa" (UID: "0187ac96-bd8c-4260-86be-1d2442b47dfa"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.195474 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0187ac96-bd8c-4260-86be-1d2442b47dfa-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0187ac96-bd8c-4260-86be-1d2442b47dfa" (UID: "0187ac96-bd8c-4260-86be-1d2442b47dfa"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.200640 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0187ac96-bd8c-4260-86be-1d2442b47dfa-scripts" (OuterVolumeSpecName: "scripts") pod "0187ac96-bd8c-4260-86be-1d2442b47dfa" (UID: "0187ac96-bd8c-4260-86be-1d2442b47dfa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.200761 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0187ac96-bd8c-4260-86be-1d2442b47dfa-kube-api-access-lkkt9" (OuterVolumeSpecName: "kube-api-access-lkkt9") pod "0187ac96-bd8c-4260-86be-1d2442b47dfa" (UID: "0187ac96-bd8c-4260-86be-1d2442b47dfa"). InnerVolumeSpecName "kube-api-access-lkkt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.225599 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0187ac96-bd8c-4260-86be-1d2442b47dfa-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0187ac96-bd8c-4260-86be-1d2442b47dfa" (UID: "0187ac96-bd8c-4260-86be-1d2442b47dfa"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.266779 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0187ac96-bd8c-4260-86be-1d2442b47dfa-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "0187ac96-bd8c-4260-86be-1d2442b47dfa" (UID: "0187ac96-bd8c-4260-86be-1d2442b47dfa"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.288746 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0187ac96-bd8c-4260-86be-1d2442b47dfa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0187ac96-bd8c-4260-86be-1d2442b47dfa" (UID: "0187ac96-bd8c-4260-86be-1d2442b47dfa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.297702 4821 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0187ac96-bd8c-4260-86be-1d2442b47dfa-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.297750 4821 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0187ac96-bd8c-4260-86be-1d2442b47dfa-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.297770 4821 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0187ac96-bd8c-4260-86be-1d2442b47dfa-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.297788 4821 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0187ac96-bd8c-4260-86be-1d2442b47dfa-run-httpd\") on node 
\"crc\" DevicePath \"\"" Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.297804 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0187ac96-bd8c-4260-86be-1d2442b47dfa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.297820 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkkt9\" (UniqueName: \"kubernetes.io/projected/0187ac96-bd8c-4260-86be-1d2442b47dfa-kube-api-access-lkkt9\") on node \"crc\" DevicePath \"\"" Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.297835 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0187ac96-bd8c-4260-86be-1d2442b47dfa-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.316425 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0187ac96-bd8c-4260-86be-1d2442b47dfa-config-data" (OuterVolumeSpecName: "config-data") pod "0187ac96-bd8c-4260-86be-1d2442b47dfa" (UID: "0187ac96-bd8c-4260-86be-1d2442b47dfa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.399061 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0187ac96-bd8c-4260-86be-1d2442b47dfa-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.900257 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0187ac96-bd8c-4260-86be-1d2442b47dfa","Type":"ContainerDied","Data":"121ebf3e10a6b165bd9943083cfdb545847a475d4a2405c4e88846fba360336e"} Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.900340 4821 scope.go:117] "RemoveContainer" containerID="cb55717034b32f5cec1e96e6f1a01467c1017528bda1b28cfe0c91a74bf06790" Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.900399 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.913675 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.913729 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.913770 4821 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 
19:06:29.914398 4821 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2e97656a0c5c8ccdf249fb34ee3c3f4cd8501de8fced67c6294f9be90f6444d8"} pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.914450 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" containerID="cri-o://2e97656a0c5c8ccdf249fb34ee3c3f4cd8501de8fced67c6294f9be90f6444d8" gracePeriod=600 Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.941774 4821 scope.go:117] "RemoveContainer" containerID="6e395482c7c10f115aed05af2b36400203065a2c72a641b0e1b2700aa9bece9d" Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.950477 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.976443 4821 scope.go:117] "RemoveContainer" containerID="3f67a15d99cc890e340b68b47a339cf4546d109309f9da7e2f5c903aa2bd08e2" Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.990610 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.997224 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:06:29 crc kubenswrapper[4821]: E0309 19:06:29.997644 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0187ac96-bd8c-4260-86be-1d2442b47dfa" containerName="ceilometer-central-agent" Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.997662 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="0187ac96-bd8c-4260-86be-1d2442b47dfa" 
containerName="ceilometer-central-agent" Mar 09 19:06:29 crc kubenswrapper[4821]: E0309 19:06:29.997759 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0187ac96-bd8c-4260-86be-1d2442b47dfa" containerName="proxy-httpd" Mar 09 19:06:29 crc kubenswrapper[4821]: I0309 19:06:29.997768 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="0187ac96-bd8c-4260-86be-1d2442b47dfa" containerName="proxy-httpd" Mar 09 19:06:30 crc kubenswrapper[4821]: E0309 19:06:30.000376 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0187ac96-bd8c-4260-86be-1d2442b47dfa" containerName="ceilometer-notification-agent" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.000402 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="0187ac96-bd8c-4260-86be-1d2442b47dfa" containerName="ceilometer-notification-agent" Mar 09 19:06:30 crc kubenswrapper[4821]: E0309 19:06:30.000423 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0187ac96-bd8c-4260-86be-1d2442b47dfa" containerName="sg-core" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.000430 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="0187ac96-bd8c-4260-86be-1d2442b47dfa" containerName="sg-core" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.000647 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="0187ac96-bd8c-4260-86be-1d2442b47dfa" containerName="proxy-httpd" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.000662 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="0187ac96-bd8c-4260-86be-1d2442b47dfa" containerName="sg-core" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.000681 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="0187ac96-bd8c-4260-86be-1d2442b47dfa" containerName="ceilometer-notification-agent" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.000690 4821 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0187ac96-bd8c-4260-86be-1d2442b47dfa" containerName="ceilometer-central-agent" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.004897 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.007645 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.007839 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.018057 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.020923 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.023006 4821 scope.go:117] "RemoveContainer" containerID="a6cb2e632cc16aa4643e44670b555ba852c0af1798bcddee7b12d07fa6b68c31" Mar 09 19:06:30 crc kubenswrapper[4821]: E0309 19:06:30.077156 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.109929 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-log-httpd\") pod \"ceilometer-0\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") " 
pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.109989 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqs8t\" (UniqueName: \"kubernetes.io/projected/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-kube-api-access-rqs8t\") pod \"ceilometer-0\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.110019 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-run-httpd\") pod \"ceilometer-0\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.110098 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-config-data\") pod \"ceilometer-0\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.110201 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.110281 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 
19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.110513 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.110574 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-scripts\") pod \"ceilometer-0\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.211510 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-log-httpd\") pod \"ceilometer-0\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.212438 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqs8t\" (UniqueName: \"kubernetes.io/projected/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-kube-api-access-rqs8t\") pod \"ceilometer-0\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.212553 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-run-httpd\") pod \"ceilometer-0\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.212628 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-config-data\") pod \"ceilometer-0\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.212707 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.212796 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.212958 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.213067 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-scripts\") pod \"ceilometer-0\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.214438 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-log-httpd\") pod \"ceilometer-0\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") 
" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.214480 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-run-httpd\") pod \"ceilometer-0\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.216871 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.217530 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-scripts\") pod \"ceilometer-0\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.218302 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.218721 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.220314 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-config-data\") pod \"ceilometer-0\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.237031 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqs8t\" (UniqueName: \"kubernetes.io/projected/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-kube-api-access-rqs8t\") pod \"ceilometer-0\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:30 crc kubenswrapper[4821]: I0309 19:06:30.328591 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:31 crc kubenswrapper[4821]: I0309 19:06:30.792471 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:06:31 crc kubenswrapper[4821]: W0309 19:06:30.803555 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51cb0cfb_c1c0_488d_bf4e_a9d1f6993b83.slice/crio-76879f0ed5eb771b5fb4137deef56f376561a0d94abef12eaa238be2304ee7ff WatchSource:0}: Error finding container 76879f0ed5eb771b5fb4137deef56f376561a0d94abef12eaa238be2304ee7ff: Status 404 returned error can't find the container with id 76879f0ed5eb771b5fb4137deef56f376561a0d94abef12eaa238be2304ee7ff Mar 09 19:06:31 crc kubenswrapper[4821]: I0309 19:06:30.909658 4821 generic.go:334] "Generic (PLEG): container finished" podID="3270571a-a484-4e66-8035-f43509b58add" containerID="2e97656a0c5c8ccdf249fb34ee3c3f4cd8501de8fced67c6294f9be90f6444d8" exitCode=0 Mar 09 19:06:31 crc kubenswrapper[4821]: I0309 19:06:30.909719 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" 
event={"ID":"3270571a-a484-4e66-8035-f43509b58add","Type":"ContainerDied","Data":"2e97656a0c5c8ccdf249fb34ee3c3f4cd8501de8fced67c6294f9be90f6444d8"} Mar 09 19:06:31 crc kubenswrapper[4821]: I0309 19:06:30.909759 4821 scope.go:117] "RemoveContainer" containerID="f6f924e73c0d96463d23d74c00c469a04a44dafcfce63f7df228acf99a8d74b6" Mar 09 19:06:31 crc kubenswrapper[4821]: I0309 19:06:30.910344 4821 scope.go:117] "RemoveContainer" containerID="2e97656a0c5c8ccdf249fb34ee3c3f4cd8501de8fced67c6294f9be90f6444d8" Mar 09 19:06:31 crc kubenswrapper[4821]: E0309 19:06:30.910583 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 19:06:31 crc kubenswrapper[4821]: I0309 19:06:30.912886 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83","Type":"ContainerStarted","Data":"76879f0ed5eb771b5fb4137deef56f376561a0d94abef12eaa238be2304ee7ff"} Mar 09 19:06:31 crc kubenswrapper[4821]: I0309 19:06:31.563011 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0187ac96-bd8c-4260-86be-1d2442b47dfa" path="/var/lib/kubelet/pods/0187ac96-bd8c-4260-86be-1d2442b47dfa/volumes" Mar 09 19:06:31 crc kubenswrapper[4821]: I0309 19:06:31.941010 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83","Type":"ContainerStarted","Data":"08533ae3af6d65afee0828d6c5a5e96a72daa0d3a184f8aa14db89f29fb2ddfd"} Mar 09 19:06:32 crc kubenswrapper[4821]: I0309 19:06:32.951169 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/ceilometer-0" event={"ID":"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83","Type":"ContainerStarted","Data":"ef86fd9c6ad5d3f815d8002b92c62c3ed1658ee84fde89afea68a47cab4a711b"} Mar 09 19:06:32 crc kubenswrapper[4821]: I0309 19:06:32.951538 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83","Type":"ContainerStarted","Data":"cb402887b5cb57c7d6270ee5c660ac0992d98f7e9183d616b603d7c22ca83f2d"} Mar 09 19:06:35 crc kubenswrapper[4821]: I0309 19:06:35.983838 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83","Type":"ContainerStarted","Data":"604033f07aa0e19d99c1d6ded82e306cbbb260cc7053fb32da305d7a6df5788e"} Mar 09 19:06:35 crc kubenswrapper[4821]: I0309 19:06:35.984461 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:06:36 crc kubenswrapper[4821]: I0309 19:06:36.016275 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.706280827 podStartE2EDuration="7.016253404s" podCreationTimestamp="2026-03-09 19:06:29 +0000 UTC" firstStartedPulling="2026-03-09 19:06:30.805842651 +0000 UTC m=+2527.967218507" lastFinishedPulling="2026-03-09 19:06:35.115815218 +0000 UTC m=+2532.277191084" observedRunningTime="2026-03-09 19:06:36.007755633 +0000 UTC m=+2533.169131489" watchObservedRunningTime="2026-03-09 19:06:36.016253404 +0000 UTC m=+2533.177629260" Mar 09 19:06:45 crc kubenswrapper[4821]: I0309 19:06:45.552003 4821 scope.go:117] "RemoveContainer" containerID="2e97656a0c5c8ccdf249fb34ee3c3f4cd8501de8fced67c6294f9be90f6444d8" Mar 09 19:06:45 crc kubenswrapper[4821]: E0309 19:06:45.553064 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 19:06:48 crc kubenswrapper[4821]: I0309 19:06:48.676413 4821 scope.go:117] "RemoveContainer" containerID="bc3dc371aea2c912a2dc9d2d3d391ec5cb375f0323a8ff51996da645258ab703" Mar 09 19:06:57 crc kubenswrapper[4821]: I0309 19:06:57.552284 4821 scope.go:117] "RemoveContainer" containerID="2e97656a0c5c8ccdf249fb34ee3c3f4cd8501de8fced67c6294f9be90f6444d8" Mar 09 19:06:57 crc kubenswrapper[4821]: E0309 19:06:57.553241 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 19:07:00 crc kubenswrapper[4821]: I0309 19:07:00.165739 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29551387-6qvhz"] Mar 09 19:07:00 crc kubenswrapper[4821]: I0309 19:07:00.196109 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29551387-6qvhz" Mar 09 19:07:00 crc kubenswrapper[4821]: I0309 19:07:00.198536 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-scripts" Mar 09 19:07:00 crc kubenswrapper[4821]: I0309 19:07:00.202014 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Mar 09 19:07:00 crc kubenswrapper[4821]: I0309 19:07:00.207517 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29551387-6qvhz"] Mar 09 19:07:00 crc kubenswrapper[4821]: I0309 19:07:00.308954 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e80daff-b456-4deb-b242-2e3aa177bd4c-config-data\") pod \"watcher-kuttl-db-purge-29551387-6qvhz\" (UID: \"1e80daff-b456-4deb-b242-2e3aa177bd4c\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29551387-6qvhz" Mar 09 19:07:00 crc kubenswrapper[4821]: I0309 19:07:00.309054 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq26h\" (UniqueName: \"kubernetes.io/projected/1e80daff-b456-4deb-b242-2e3aa177bd4c-kube-api-access-tq26h\") pod \"watcher-kuttl-db-purge-29551387-6qvhz\" (UID: \"1e80daff-b456-4deb-b242-2e3aa177bd4c\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29551387-6qvhz" Mar 09 19:07:00 crc kubenswrapper[4821]: I0309 19:07:00.309372 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e80daff-b456-4deb-b242-2e3aa177bd4c-combined-ca-bundle\") pod \"watcher-kuttl-db-purge-29551387-6qvhz\" (UID: \"1e80daff-b456-4deb-b242-2e3aa177bd4c\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29551387-6qvhz" Mar 09 19:07:00 crc 
kubenswrapper[4821]: I0309 19:07:00.309474 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/1e80daff-b456-4deb-b242-2e3aa177bd4c-scripts-volume\") pod \"watcher-kuttl-db-purge-29551387-6qvhz\" (UID: \"1e80daff-b456-4deb-b242-2e3aa177bd4c\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29551387-6qvhz" Mar 09 19:07:00 crc kubenswrapper[4821]: I0309 19:07:00.350170 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:07:00 crc kubenswrapper[4821]: I0309 19:07:00.412753 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e80daff-b456-4deb-b242-2e3aa177bd4c-config-data\") pod \"watcher-kuttl-db-purge-29551387-6qvhz\" (UID: \"1e80daff-b456-4deb-b242-2e3aa177bd4c\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29551387-6qvhz" Mar 09 19:07:00 crc kubenswrapper[4821]: I0309 19:07:00.413093 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq26h\" (UniqueName: \"kubernetes.io/projected/1e80daff-b456-4deb-b242-2e3aa177bd4c-kube-api-access-tq26h\") pod \"watcher-kuttl-db-purge-29551387-6qvhz\" (UID: \"1e80daff-b456-4deb-b242-2e3aa177bd4c\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29551387-6qvhz" Mar 09 19:07:00 crc kubenswrapper[4821]: I0309 19:07:00.413266 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e80daff-b456-4deb-b242-2e3aa177bd4c-combined-ca-bundle\") pod \"watcher-kuttl-db-purge-29551387-6qvhz\" (UID: \"1e80daff-b456-4deb-b242-2e3aa177bd4c\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29551387-6qvhz" Mar 09 19:07:00 crc kubenswrapper[4821]: I0309 19:07:00.413510 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/1e80daff-b456-4deb-b242-2e3aa177bd4c-scripts-volume\") pod \"watcher-kuttl-db-purge-29551387-6qvhz\" (UID: \"1e80daff-b456-4deb-b242-2e3aa177bd4c\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29551387-6qvhz" Mar 09 19:07:00 crc kubenswrapper[4821]: I0309 19:07:00.435538 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/1e80daff-b456-4deb-b242-2e3aa177bd4c-scripts-volume\") pod \"watcher-kuttl-db-purge-29551387-6qvhz\" (UID: \"1e80daff-b456-4deb-b242-2e3aa177bd4c\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29551387-6qvhz" Mar 09 19:07:00 crc kubenswrapper[4821]: I0309 19:07:00.435678 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e80daff-b456-4deb-b242-2e3aa177bd4c-config-data\") pod \"watcher-kuttl-db-purge-29551387-6qvhz\" (UID: \"1e80daff-b456-4deb-b242-2e3aa177bd4c\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29551387-6qvhz" Mar 09 19:07:00 crc kubenswrapper[4821]: I0309 19:07:00.438353 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq26h\" (UniqueName: \"kubernetes.io/projected/1e80daff-b456-4deb-b242-2e3aa177bd4c-kube-api-access-tq26h\") pod \"watcher-kuttl-db-purge-29551387-6qvhz\" (UID: \"1e80daff-b456-4deb-b242-2e3aa177bd4c\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29551387-6qvhz" Mar 09 19:07:00 crc kubenswrapper[4821]: I0309 19:07:00.443623 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e80daff-b456-4deb-b242-2e3aa177bd4c-combined-ca-bundle\") pod \"watcher-kuttl-db-purge-29551387-6qvhz\" (UID: \"1e80daff-b456-4deb-b242-2e3aa177bd4c\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29551387-6qvhz" Mar 09 19:07:00 crc kubenswrapper[4821]: I0309 19:07:00.527867 4821 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29551387-6qvhz" Mar 09 19:07:00 crc kubenswrapper[4821]: I0309 19:07:00.982620 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29551387-6qvhz"] Mar 09 19:07:01 crc kubenswrapper[4821]: I0309 19:07:01.226929 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29551387-6qvhz" event={"ID":"1e80daff-b456-4deb-b242-2e3aa177bd4c","Type":"ContainerStarted","Data":"d4056539974d5c9e0205415939b70cd305fa2b5d71776ca9cedac0d1650e5b2f"} Mar 09 19:07:01 crc kubenswrapper[4821]: I0309 19:07:01.227263 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29551387-6qvhz" event={"ID":"1e80daff-b456-4deb-b242-2e3aa177bd4c","Type":"ContainerStarted","Data":"871ede49441fb5c2614eb802ffeac736eb91ad2e3c6e8d5a26641f14d06f88d7"} Mar 09 19:07:01 crc kubenswrapper[4821]: I0309 19:07:01.243572 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29551387-6qvhz" podStartSLOduration=1.243556305 podStartE2EDuration="1.243556305s" podCreationTimestamp="2026-03-09 19:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:07:01.240474741 +0000 UTC m=+2558.401850617" watchObservedRunningTime="2026-03-09 19:07:01.243556305 +0000 UTC m=+2558.404932161" Mar 09 19:07:03 crc kubenswrapper[4821]: I0309 19:07:03.752602 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8z7mt"] Mar 09 19:07:03 crc kubenswrapper[4821]: I0309 19:07:03.757036 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8z7mt" Mar 09 19:07:03 crc kubenswrapper[4821]: I0309 19:07:03.781250 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8z7mt"] Mar 09 19:07:03 crc kubenswrapper[4821]: I0309 19:07:03.873770 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzzv2\" (UniqueName: \"kubernetes.io/projected/6d88f413-e5b1-4102-837f-fa3f5ca953f0-kube-api-access-kzzv2\") pod \"community-operators-8z7mt\" (UID: \"6d88f413-e5b1-4102-837f-fa3f5ca953f0\") " pod="openshift-marketplace/community-operators-8z7mt" Mar 09 19:07:03 crc kubenswrapper[4821]: I0309 19:07:03.873839 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d88f413-e5b1-4102-837f-fa3f5ca953f0-utilities\") pod \"community-operators-8z7mt\" (UID: \"6d88f413-e5b1-4102-837f-fa3f5ca953f0\") " pod="openshift-marketplace/community-operators-8z7mt" Mar 09 19:07:03 crc kubenswrapper[4821]: I0309 19:07:03.874233 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d88f413-e5b1-4102-837f-fa3f5ca953f0-catalog-content\") pod \"community-operators-8z7mt\" (UID: \"6d88f413-e5b1-4102-837f-fa3f5ca953f0\") " pod="openshift-marketplace/community-operators-8z7mt" Mar 09 19:07:03 crc kubenswrapper[4821]: I0309 19:07:03.975841 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d88f413-e5b1-4102-837f-fa3f5ca953f0-catalog-content\") pod \"community-operators-8z7mt\" (UID: \"6d88f413-e5b1-4102-837f-fa3f5ca953f0\") " pod="openshift-marketplace/community-operators-8z7mt" Mar 09 19:07:03 crc kubenswrapper[4821]: I0309 19:07:03.975911 4821 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kzzv2\" (UniqueName: \"kubernetes.io/projected/6d88f413-e5b1-4102-837f-fa3f5ca953f0-kube-api-access-kzzv2\") pod \"community-operators-8z7mt\" (UID: \"6d88f413-e5b1-4102-837f-fa3f5ca953f0\") " pod="openshift-marketplace/community-operators-8z7mt" Mar 09 19:07:03 crc kubenswrapper[4821]: I0309 19:07:03.975953 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d88f413-e5b1-4102-837f-fa3f5ca953f0-utilities\") pod \"community-operators-8z7mt\" (UID: \"6d88f413-e5b1-4102-837f-fa3f5ca953f0\") " pod="openshift-marketplace/community-operators-8z7mt" Mar 09 19:07:03 crc kubenswrapper[4821]: I0309 19:07:03.976663 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d88f413-e5b1-4102-837f-fa3f5ca953f0-utilities\") pod \"community-operators-8z7mt\" (UID: \"6d88f413-e5b1-4102-837f-fa3f5ca953f0\") " pod="openshift-marketplace/community-operators-8z7mt" Mar 09 19:07:03 crc kubenswrapper[4821]: I0309 19:07:03.976939 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d88f413-e5b1-4102-837f-fa3f5ca953f0-catalog-content\") pod \"community-operators-8z7mt\" (UID: \"6d88f413-e5b1-4102-837f-fa3f5ca953f0\") " pod="openshift-marketplace/community-operators-8z7mt" Mar 09 19:07:03 crc kubenswrapper[4821]: I0309 19:07:03.998530 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzzv2\" (UniqueName: \"kubernetes.io/projected/6d88f413-e5b1-4102-837f-fa3f5ca953f0-kube-api-access-kzzv2\") pod \"community-operators-8z7mt\" (UID: \"6d88f413-e5b1-4102-837f-fa3f5ca953f0\") " pod="openshift-marketplace/community-operators-8z7mt" Mar 09 19:07:04 crc kubenswrapper[4821]: I0309 19:07:04.085766 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8z7mt" Mar 09 19:07:04 crc kubenswrapper[4821]: I0309 19:07:04.296776 4821 generic.go:334] "Generic (PLEG): container finished" podID="1e80daff-b456-4deb-b242-2e3aa177bd4c" containerID="d4056539974d5c9e0205415939b70cd305fa2b5d71776ca9cedac0d1650e5b2f" exitCode=0 Mar 09 19:07:04 crc kubenswrapper[4821]: I0309 19:07:04.297022 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29551387-6qvhz" event={"ID":"1e80daff-b456-4deb-b242-2e3aa177bd4c","Type":"ContainerDied","Data":"d4056539974d5c9e0205415939b70cd305fa2b5d71776ca9cedac0d1650e5b2f"} Mar 09 19:07:04 crc kubenswrapper[4821]: I0309 19:07:04.662086 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8z7mt"] Mar 09 19:07:05 crc kubenswrapper[4821]: I0309 19:07:05.305946 4821 generic.go:334] "Generic (PLEG): container finished" podID="6d88f413-e5b1-4102-837f-fa3f5ca953f0" containerID="161b40ec25597a447c41a31988157f953f636093ae52e0d5dbfa3c3bc5fbbcae" exitCode=0 Mar 09 19:07:05 crc kubenswrapper[4821]: I0309 19:07:05.306021 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8z7mt" event={"ID":"6d88f413-e5b1-4102-837f-fa3f5ca953f0","Type":"ContainerDied","Data":"161b40ec25597a447c41a31988157f953f636093ae52e0d5dbfa3c3bc5fbbcae"} Mar 09 19:07:05 crc kubenswrapper[4821]: I0309 19:07:05.306078 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8z7mt" event={"ID":"6d88f413-e5b1-4102-837f-fa3f5ca953f0","Type":"ContainerStarted","Data":"6d1f9cef5ab3978d25f6df06b34fa3521e43a865ab605ce487829634fa274f45"} Mar 09 19:07:05 crc kubenswrapper[4821]: I0309 19:07:05.647736 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29551387-6qvhz" Mar 09 19:07:05 crc kubenswrapper[4821]: I0309 19:07:05.706897 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tq26h\" (UniqueName: \"kubernetes.io/projected/1e80daff-b456-4deb-b242-2e3aa177bd4c-kube-api-access-tq26h\") pod \"1e80daff-b456-4deb-b242-2e3aa177bd4c\" (UID: \"1e80daff-b456-4deb-b242-2e3aa177bd4c\") " Mar 09 19:07:05 crc kubenswrapper[4821]: I0309 19:07:05.707000 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e80daff-b456-4deb-b242-2e3aa177bd4c-combined-ca-bundle\") pod \"1e80daff-b456-4deb-b242-2e3aa177bd4c\" (UID: \"1e80daff-b456-4deb-b242-2e3aa177bd4c\") " Mar 09 19:07:05 crc kubenswrapper[4821]: I0309 19:07:05.707040 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e80daff-b456-4deb-b242-2e3aa177bd4c-config-data\") pod \"1e80daff-b456-4deb-b242-2e3aa177bd4c\" (UID: \"1e80daff-b456-4deb-b242-2e3aa177bd4c\") " Mar 09 19:07:05 crc kubenswrapper[4821]: I0309 19:07:05.707075 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/1e80daff-b456-4deb-b242-2e3aa177bd4c-scripts-volume\") pod \"1e80daff-b456-4deb-b242-2e3aa177bd4c\" (UID: \"1e80daff-b456-4deb-b242-2e3aa177bd4c\") " Mar 09 19:07:05 crc kubenswrapper[4821]: I0309 19:07:05.712343 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e80daff-b456-4deb-b242-2e3aa177bd4c-kube-api-access-tq26h" (OuterVolumeSpecName: "kube-api-access-tq26h") pod "1e80daff-b456-4deb-b242-2e3aa177bd4c" (UID: "1e80daff-b456-4deb-b242-2e3aa177bd4c"). InnerVolumeSpecName "kube-api-access-tq26h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:07:05 crc kubenswrapper[4821]: I0309 19:07:05.712793 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e80daff-b456-4deb-b242-2e3aa177bd4c-scripts-volume" (OuterVolumeSpecName: "scripts-volume") pod "1e80daff-b456-4deb-b242-2e3aa177bd4c" (UID: "1e80daff-b456-4deb-b242-2e3aa177bd4c"). InnerVolumeSpecName "scripts-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:07:05 crc kubenswrapper[4821]: I0309 19:07:05.741373 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e80daff-b456-4deb-b242-2e3aa177bd4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e80daff-b456-4deb-b242-2e3aa177bd4c" (UID: "1e80daff-b456-4deb-b242-2e3aa177bd4c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:07:05 crc kubenswrapper[4821]: I0309 19:07:05.770901 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e80daff-b456-4deb-b242-2e3aa177bd4c-config-data" (OuterVolumeSpecName: "config-data") pod "1e80daff-b456-4deb-b242-2e3aa177bd4c" (UID: "1e80daff-b456-4deb-b242-2e3aa177bd4c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:07:05 crc kubenswrapper[4821]: I0309 19:07:05.809778 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tq26h\" (UniqueName: \"kubernetes.io/projected/1e80daff-b456-4deb-b242-2e3aa177bd4c-kube-api-access-tq26h\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:05 crc kubenswrapper[4821]: I0309 19:07:05.809816 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e80daff-b456-4deb-b242-2e3aa177bd4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:05 crc kubenswrapper[4821]: I0309 19:07:05.809828 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e80daff-b456-4deb-b242-2e3aa177bd4c-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:05 crc kubenswrapper[4821]: I0309 19:07:05.809838 4821 reconciler_common.go:293] "Volume detached for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/1e80daff-b456-4deb-b242-2e3aa177bd4c-scripts-volume\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:06 crc kubenswrapper[4821]: I0309 19:07:06.318449 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8z7mt" event={"ID":"6d88f413-e5b1-4102-837f-fa3f5ca953f0","Type":"ContainerStarted","Data":"21d8a295b5778128d70554f1fb1a0900b8e2eba57f83edff9c2d2a945ac6554c"} Mar 09 19:07:06 crc kubenswrapper[4821]: I0309 19:07:06.321633 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29551387-6qvhz" event={"ID":"1e80daff-b456-4deb-b242-2e3aa177bd4c","Type":"ContainerDied","Data":"871ede49441fb5c2614eb802ffeac736eb91ad2e3c6e8d5a26641f14d06f88d7"} Mar 09 19:07:06 crc kubenswrapper[4821]: I0309 19:07:06.321670 4821 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="871ede49441fb5c2614eb802ffeac736eb91ad2e3c6e8d5a26641f14d06f88d7" Mar 09 19:07:06 crc kubenswrapper[4821]: I0309 19:07:06.321720 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29551387-6qvhz" Mar 09 19:07:07 crc kubenswrapper[4821]: I0309 19:07:07.332885 4821 generic.go:334] "Generic (PLEG): container finished" podID="6d88f413-e5b1-4102-837f-fa3f5ca953f0" containerID="21d8a295b5778128d70554f1fb1a0900b8e2eba57f83edff9c2d2a945ac6554c" exitCode=0 Mar 09 19:07:07 crc kubenswrapper[4821]: I0309 19:07:07.332980 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8z7mt" event={"ID":"6d88f413-e5b1-4102-837f-fa3f5ca953f0","Type":"ContainerDied","Data":"21d8a295b5778128d70554f1fb1a0900b8e2eba57f83edff9c2d2a945ac6554c"} Mar 09 19:07:08 crc kubenswrapper[4821]: I0309 19:07:08.344076 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8z7mt" event={"ID":"6d88f413-e5b1-4102-837f-fa3f5ca953f0","Type":"ContainerStarted","Data":"efac5be8c60745ca8aee14935470b899d572ba591f57228a7430f78dc6e32641"} Mar 09 19:07:08 crc kubenswrapper[4821]: I0309 19:07:08.361633 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8z7mt" podStartSLOduration=2.91424238 podStartE2EDuration="5.361614802s" podCreationTimestamp="2026-03-09 19:07:03 +0000 UTC" firstStartedPulling="2026-03-09 19:07:05.307370391 +0000 UTC m=+2562.468746247" lastFinishedPulling="2026-03-09 19:07:07.754742813 +0000 UTC m=+2564.916118669" observedRunningTime="2026-03-09 19:07:08.35930772 +0000 UTC m=+2565.520683596" watchObservedRunningTime="2026-03-09 19:07:08.361614802 +0000 UTC m=+2565.522990668" Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.002879 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-t72sq"] 
Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.014493 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-t72sq"]
Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.030543 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29551387-6qvhz"]
Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.051286 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29551387-6qvhz"]
Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.059575 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-m59cp"]
Mar 09 19:07:09 crc kubenswrapper[4821]: E0309 19:07:09.060030 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e80daff-b456-4deb-b242-2e3aa177bd4c" containerName="watcher-db-manage"
Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.060057 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e80daff-b456-4deb-b242-2e3aa177bd4c" containerName="watcher-db-manage"
Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.060256 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e80daff-b456-4deb-b242-2e3aa177bd4c" containerName="watcher-db-manage"
Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.061114 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-m59cp"
Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.069270 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-m59cp"]
Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.100534 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.100878 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="bff71f3d-8bd3-495b-bf6b-427931799b9d" containerName="watcher-decision-engine" containerID="cri-o://802aac49d2c4f22330c10d60fffafe11d9635ef9d07e0f5e9ba9341e2ee8ca14" gracePeriod=30
Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.134578 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.134834 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="a25ba8ed-fec0-4e0d-9006-4aef28a83e53" containerName="watcher-applier" containerID="cri-o://94ef77b601843f18be5b8e4a6ea6f7d8dd433cd1df13ff0a49039a9f6a9b8a93" gracePeriod=30
Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.178163 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"]
Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.178475 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="8e27cb8c-920a-4141-b783-51bf80dbb332" containerName="watcher-kuttl-api-log" containerID="cri-o://72b1bee4798c5583f5266f146078855331e4ae76cbdee31dfcc328897ea9f41b" gracePeriod=30
Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.178865 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="8e27cb8c-920a-4141-b783-51bf80dbb332" containerName="watcher-api" containerID="cri-o://a123aaf4d2d51e46f945473c34ec79e761010258eaaf4b139fa83d06005ba950" gracePeriod=30
Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.186671 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.186906 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="564ea886-2427-45ab-be4a-adf79b21f4d7" containerName="watcher-kuttl-api-log" containerID="cri-o://f9bf7181772b269f0f1f443e155b2675292f96909bf29853ad2be3205c28aa67" gracePeriod=30
Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.187299 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="564ea886-2427-45ab-be4a-adf79b21f4d7" containerName="watcher-api" containerID="cri-o://aeb7374ad6815648099b5dbd9255ff43c25f1da7e787a94b39e550868cff601a" gracePeriod=30
Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.202836 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km7x2\" (UniqueName: \"kubernetes.io/projected/09f03aaf-e82a-4216-88cc-a79293d41916-kube-api-access-km7x2\") pod \"watchertest-account-delete-m59cp\" (UID: \"09f03aaf-e82a-4216-88cc-a79293d41916\") " pod="watcher-kuttl-default/watchertest-account-delete-m59cp"
Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.202995 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09f03aaf-e82a-4216-88cc-a79293d41916-operator-scripts\") pod \"watchertest-account-delete-m59cp\" (UID: \"09f03aaf-e82a-4216-88cc-a79293d41916\") " pod="watcher-kuttl-default/watchertest-account-delete-m59cp"
Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.304920 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09f03aaf-e82a-4216-88cc-a79293d41916-operator-scripts\") pod \"watchertest-account-delete-m59cp\" (UID: \"09f03aaf-e82a-4216-88cc-a79293d41916\") " pod="watcher-kuttl-default/watchertest-account-delete-m59cp"
Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.304991 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km7x2\" (UniqueName: \"kubernetes.io/projected/09f03aaf-e82a-4216-88cc-a79293d41916-kube-api-access-km7x2\") pod \"watchertest-account-delete-m59cp\" (UID: \"09f03aaf-e82a-4216-88cc-a79293d41916\") " pod="watcher-kuttl-default/watchertest-account-delete-m59cp"
Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.305995 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09f03aaf-e82a-4216-88cc-a79293d41916-operator-scripts\") pod \"watchertest-account-delete-m59cp\" (UID: \"09f03aaf-e82a-4216-88cc-a79293d41916\") " pod="watcher-kuttl-default/watchertest-account-delete-m59cp"
Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.328507 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km7x2\" (UniqueName: \"kubernetes.io/projected/09f03aaf-e82a-4216-88cc-a79293d41916-kube-api-access-km7x2\") pod \"watchertest-account-delete-m59cp\" (UID: \"09f03aaf-e82a-4216-88cc-a79293d41916\") " pod="watcher-kuttl-default/watchertest-account-delete-m59cp"
Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.357817 4821 generic.go:334] "Generic (PLEG): container finished" podID="8e27cb8c-920a-4141-b783-51bf80dbb332" containerID="72b1bee4798c5583f5266f146078855331e4ae76cbdee31dfcc328897ea9f41b" exitCode=143
Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.357887 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"8e27cb8c-920a-4141-b783-51bf80dbb332","Type":"ContainerDied","Data":"72b1bee4798c5583f5266f146078855331e4ae76cbdee31dfcc328897ea9f41b"}
Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.361091 4821 generic.go:334] "Generic (PLEG): container finished" podID="564ea886-2427-45ab-be4a-adf79b21f4d7" containerID="f9bf7181772b269f0f1f443e155b2675292f96909bf29853ad2be3205c28aa67" exitCode=143
Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.362231 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"564ea886-2427-45ab-be4a-adf79b21f4d7","Type":"ContainerDied","Data":"f9bf7181772b269f0f1f443e155b2675292f96909bf29853ad2be3205c28aa67"}
Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.406027 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-m59cp"
Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.576194 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e80daff-b456-4deb-b242-2e3aa177bd4c" path="/var/lib/kubelet/pods/1e80daff-b456-4deb-b242-2e3aa177bd4c/volumes"
Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.576715 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d52339b1-0145-4776-aa3d-c9d11a14ab26" path="/var/lib/kubelet/pods/d52339b1-0145-4776-aa3d-c9d11a14ab26/volumes"
Mar 09 19:07:09 crc kubenswrapper[4821]: I0309 19:07:09.956548 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-m59cp"]
Mar 09 19:07:09 crc kubenswrapper[4821]: W0309 19:07:09.957417 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09f03aaf_e82a_4216_88cc_a79293d41916.slice/crio-ed71ca2e12d17e6b9b1a4d81e1547cd832ff0133c48fde09562fe64692b6f086 WatchSource:0}: Error finding container ed71ca2e12d17e6b9b1a4d81e1547cd832ff0133c48fde09562fe64692b6f086: Status 404 returned error can't find the container with id ed71ca2e12d17e6b9b1a4d81e1547cd832ff0133c48fde09562fe64692b6f086
Mar 09 19:07:10 crc kubenswrapper[4821]: E0309 19:07:10.364195 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="94ef77b601843f18be5b8e4a6ea6f7d8dd433cd1df13ff0a49039a9f6a9b8a93" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Mar 09 19:07:10 crc kubenswrapper[4821]: E0309 19:07:10.373275 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="94ef77b601843f18be5b8e4a6ea6f7d8dd433cd1df13ff0a49039a9f6a9b8a93" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.384087 4821 generic.go:334] "Generic (PLEG): container finished" podID="8e27cb8c-920a-4141-b783-51bf80dbb332" containerID="a123aaf4d2d51e46f945473c34ec79e761010258eaaf4b139fa83d06005ba950" exitCode=0
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.384197 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"8e27cb8c-920a-4141-b783-51bf80dbb332","Type":"ContainerDied","Data":"a123aaf4d2d51e46f945473c34ec79e761010258eaaf4b139fa83d06005ba950"}
Mar 09 19:07:10 crc kubenswrapper[4821]: E0309 19:07:10.406489 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="94ef77b601843f18be5b8e4a6ea6f7d8dd433cd1df13ff0a49039a9f6a9b8a93" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Mar 09 19:07:10 crc kubenswrapper[4821]: E0309 19:07:10.406566 4821 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="a25ba8ed-fec0-4e0d-9006-4aef28a83e53" containerName="watcher-applier"
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.406972 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-m59cp" event={"ID":"09f03aaf-e82a-4216-88cc-a79293d41916","Type":"ContainerStarted","Data":"e58da97db5f5517ac5c5fc5c87d375a3a6ac05301c51a187b1469735bab08300"}
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.407009 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-m59cp" event={"ID":"09f03aaf-e82a-4216-88cc-a79293d41916","Type":"ContainerStarted","Data":"ed71ca2e12d17e6b9b1a4d81e1547cd832ff0133c48fde09562fe64692b6f086"}
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.450908 4821 generic.go:334] "Generic (PLEG): container finished" podID="564ea886-2427-45ab-be4a-adf79b21f4d7" containerID="aeb7374ad6815648099b5dbd9255ff43c25f1da7e787a94b39e550868cff601a" exitCode=0
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.450952 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"564ea886-2427-45ab-be4a-adf79b21f4d7","Type":"ContainerDied","Data":"aeb7374ad6815648099b5dbd9255ff43c25f1da7e787a94b39e550868cff601a"}
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.771371 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1"
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.776158 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.799297 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watchertest-account-delete-m59cp" podStartSLOduration=1.799277623 podStartE2EDuration="1.799277623s" podCreationTimestamp="2026-03-09 19:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:07:10.45050578 +0000 UTC m=+2567.611881656" watchObservedRunningTime="2026-03-09 19:07:10.799277623 +0000 UTC m=+2567.960653479"
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.848517 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8kbc\" (UniqueName: \"kubernetes.io/projected/8e27cb8c-920a-4141-b783-51bf80dbb332-kube-api-access-l8kbc\") pod \"8e27cb8c-920a-4141-b783-51bf80dbb332\" (UID: \"8e27cb8c-920a-4141-b783-51bf80dbb332\") "
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.848573 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/564ea886-2427-45ab-be4a-adf79b21f4d7-custom-prometheus-ca\") pod \"564ea886-2427-45ab-be4a-adf79b21f4d7\" (UID: \"564ea886-2427-45ab-be4a-adf79b21f4d7\") "
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.848601 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564ea886-2427-45ab-be4a-adf79b21f4d7-combined-ca-bundle\") pod \"564ea886-2427-45ab-be4a-adf79b21f4d7\" (UID: \"564ea886-2427-45ab-be4a-adf79b21f4d7\") "
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.848631 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564ea886-2427-45ab-be4a-adf79b21f4d7-config-data\") pod \"564ea886-2427-45ab-be4a-adf79b21f4d7\" (UID: \"564ea886-2427-45ab-be4a-adf79b21f4d7\") "
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.848670 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8e27cb8c-920a-4141-b783-51bf80dbb332-cert-memcached-mtls\") pod \"8e27cb8c-920a-4141-b783-51bf80dbb332\" (UID: \"8e27cb8c-920a-4141-b783-51bf80dbb332\") "
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.848695 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4gnj\" (UniqueName: \"kubernetes.io/projected/564ea886-2427-45ab-be4a-adf79b21f4d7-kube-api-access-b4gnj\") pod \"564ea886-2427-45ab-be4a-adf79b21f4d7\" (UID: \"564ea886-2427-45ab-be4a-adf79b21f4d7\") "
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.848717 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e27cb8c-920a-4141-b783-51bf80dbb332-logs\") pod \"8e27cb8c-920a-4141-b783-51bf80dbb332\" (UID: \"8e27cb8c-920a-4141-b783-51bf80dbb332\") "
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.848781 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8e27cb8c-920a-4141-b783-51bf80dbb332-custom-prometheus-ca\") pod \"8e27cb8c-920a-4141-b783-51bf80dbb332\" (UID: \"8e27cb8c-920a-4141-b783-51bf80dbb332\") "
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.848807 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e27cb8c-920a-4141-b783-51bf80dbb332-combined-ca-bundle\") pod \"8e27cb8c-920a-4141-b783-51bf80dbb332\" (UID: \"8e27cb8c-920a-4141-b783-51bf80dbb332\") "
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.848887 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/564ea886-2427-45ab-be4a-adf79b21f4d7-cert-memcached-mtls\") pod \"564ea886-2427-45ab-be4a-adf79b21f4d7\" (UID: \"564ea886-2427-45ab-be4a-adf79b21f4d7\") "
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.848905 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e27cb8c-920a-4141-b783-51bf80dbb332-config-data\") pod \"8e27cb8c-920a-4141-b783-51bf80dbb332\" (UID: \"8e27cb8c-920a-4141-b783-51bf80dbb332\") "
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.848938 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/564ea886-2427-45ab-be4a-adf79b21f4d7-logs\") pod \"564ea886-2427-45ab-be4a-adf79b21f4d7\" (UID: \"564ea886-2427-45ab-be4a-adf79b21f4d7\") "
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.851193 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/564ea886-2427-45ab-be4a-adf79b21f4d7-logs" (OuterVolumeSpecName: "logs") pod "564ea886-2427-45ab-be4a-adf79b21f4d7" (UID: "564ea886-2427-45ab-be4a-adf79b21f4d7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.857647 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e27cb8c-920a-4141-b783-51bf80dbb332-logs" (OuterVolumeSpecName: "logs") pod "8e27cb8c-920a-4141-b783-51bf80dbb332" (UID: "8e27cb8c-920a-4141-b783-51bf80dbb332"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.872662 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e27cb8c-920a-4141-b783-51bf80dbb332-kube-api-access-l8kbc" (OuterVolumeSpecName: "kube-api-access-l8kbc") pod "8e27cb8c-920a-4141-b783-51bf80dbb332" (UID: "8e27cb8c-920a-4141-b783-51bf80dbb332"). InnerVolumeSpecName "kube-api-access-l8kbc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.885581 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/564ea886-2427-45ab-be4a-adf79b21f4d7-kube-api-access-b4gnj" (OuterVolumeSpecName: "kube-api-access-b4gnj") pod "564ea886-2427-45ab-be4a-adf79b21f4d7" (UID: "564ea886-2427-45ab-be4a-adf79b21f4d7"). InnerVolumeSpecName "kube-api-access-b4gnj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.902577 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/564ea886-2427-45ab-be4a-adf79b21f4d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "564ea886-2427-45ab-be4a-adf79b21f4d7" (UID: "564ea886-2427-45ab-be4a-adf79b21f4d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.907523 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e27cb8c-920a-4141-b783-51bf80dbb332-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e27cb8c-920a-4141-b783-51bf80dbb332" (UID: "8e27cb8c-920a-4141-b783-51bf80dbb332"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.934474 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e27cb8c-920a-4141-b783-51bf80dbb332-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "8e27cb8c-920a-4141-b783-51bf80dbb332" (UID: "8e27cb8c-920a-4141-b783-51bf80dbb332"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.946647 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/564ea886-2427-45ab-be4a-adf79b21f4d7-config-data" (OuterVolumeSpecName: "config-data") pod "564ea886-2427-45ab-be4a-adf79b21f4d7" (UID: "564ea886-2427-45ab-be4a-adf79b21f4d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.950993 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/564ea886-2427-45ab-be4a-adf79b21f4d7-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "564ea886-2427-45ab-be4a-adf79b21f4d7" (UID: "564ea886-2427-45ab-be4a-adf79b21f4d7"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.952033 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/564ea886-2427-45ab-be4a-adf79b21f4d7-custom-prometheus-ca\") pod \"564ea886-2427-45ab-be4a-adf79b21f4d7\" (UID: \"564ea886-2427-45ab-be4a-adf79b21f4d7\") "
Mar 09 19:07:10 crc kubenswrapper[4821]: W0309 19:07:10.952166 4821 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/564ea886-2427-45ab-be4a-adf79b21f4d7/volumes/kubernetes.io~secret/custom-prometheus-ca
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.952634 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/564ea886-2427-45ab-be4a-adf79b21f4d7-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "564ea886-2427-45ab-be4a-adf79b21f4d7" (UID: "564ea886-2427-45ab-be4a-adf79b21f4d7"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.952939 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4gnj\" (UniqueName: \"kubernetes.io/projected/564ea886-2427-45ab-be4a-adf79b21f4d7-kube-api-access-b4gnj\") on node \"crc\" DevicePath \"\""
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.954773 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e27cb8c-920a-4141-b783-51bf80dbb332-logs\") on node \"crc\" DevicePath \"\""
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.954846 4821 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8e27cb8c-920a-4141-b783-51bf80dbb332-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.954897 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e27cb8c-920a-4141-b783-51bf80dbb332-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.954946 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/564ea886-2427-45ab-be4a-adf79b21f4d7-logs\") on node \"crc\" DevicePath \"\""
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.955002 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8kbc\" (UniqueName: \"kubernetes.io/projected/8e27cb8c-920a-4141-b783-51bf80dbb332-kube-api-access-l8kbc\") on node \"crc\" DevicePath \"\""
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.955052 4821 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/564ea886-2427-45ab-be4a-adf79b21f4d7-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.955111 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564ea886-2427-45ab-be4a-adf79b21f4d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.955160 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564ea886-2427-45ab-be4a-adf79b21f4d7-config-data\") on node \"crc\" DevicePath \"\""
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.958446 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e27cb8c-920a-4141-b783-51bf80dbb332-config-data" (OuterVolumeSpecName: "config-data") pod "8e27cb8c-920a-4141-b783-51bf80dbb332" (UID: "8e27cb8c-920a-4141-b783-51bf80dbb332"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:07:10 crc kubenswrapper[4821]: I0309 19:07:10.996614 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/564ea886-2427-45ab-be4a-adf79b21f4d7-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "564ea886-2427-45ab-be4a-adf79b21f4d7" (UID: "564ea886-2427-45ab-be4a-adf79b21f4d7"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.002064 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e27cb8c-920a-4141-b783-51bf80dbb332-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "8e27cb8c-920a-4141-b783-51bf80dbb332" (UID: "8e27cb8c-920a-4141-b783-51bf80dbb332"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.056444 4821 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8e27cb8c-920a-4141-b783-51bf80dbb332-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.056476 4821 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/564ea886-2427-45ab-be4a-adf79b21f4d7-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.056485 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e27cb8c-920a-4141-b783-51bf80dbb332-config-data\") on node \"crc\" DevicePath \"\""
Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.205735 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.360947 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/bff71f3d-8bd3-495b-bf6b-427931799b9d-cert-memcached-mtls\") pod \"bff71f3d-8bd3-495b-bf6b-427931799b9d\" (UID: \"bff71f3d-8bd3-495b-bf6b-427931799b9d\") "
Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.361071 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bff71f3d-8bd3-495b-bf6b-427931799b9d-logs\") pod \"bff71f3d-8bd3-495b-bf6b-427931799b9d\" (UID: \"bff71f3d-8bd3-495b-bf6b-427931799b9d\") "
Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.361128 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdz8f\" (UniqueName: \"kubernetes.io/projected/bff71f3d-8bd3-495b-bf6b-427931799b9d-kube-api-access-cdz8f\") pod \"bff71f3d-8bd3-495b-bf6b-427931799b9d\" (UID: \"bff71f3d-8bd3-495b-bf6b-427931799b9d\") "
Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.361725 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bff71f3d-8bd3-495b-bf6b-427931799b9d-custom-prometheus-ca\") pod \"bff71f3d-8bd3-495b-bf6b-427931799b9d\" (UID: \"bff71f3d-8bd3-495b-bf6b-427931799b9d\") "
Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.361740 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bff71f3d-8bd3-495b-bf6b-427931799b9d-logs" (OuterVolumeSpecName: "logs") pod "bff71f3d-8bd3-495b-bf6b-427931799b9d" (UID: "bff71f3d-8bd3-495b-bf6b-427931799b9d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.361760 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff71f3d-8bd3-495b-bf6b-427931799b9d-combined-ca-bundle\") pod \"bff71f3d-8bd3-495b-bf6b-427931799b9d\" (UID: \"bff71f3d-8bd3-495b-bf6b-427931799b9d\") "
Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.361880 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bff71f3d-8bd3-495b-bf6b-427931799b9d-config-data\") pod \"bff71f3d-8bd3-495b-bf6b-427931799b9d\" (UID: \"bff71f3d-8bd3-495b-bf6b-427931799b9d\") "
Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.362696 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bff71f3d-8bd3-495b-bf6b-427931799b9d-logs\") on node \"crc\" DevicePath \"\""
Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.366149 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bff71f3d-8bd3-495b-bf6b-427931799b9d-kube-api-access-cdz8f" (OuterVolumeSpecName: "kube-api-access-cdz8f") pod "bff71f3d-8bd3-495b-bf6b-427931799b9d" (UID: "bff71f3d-8bd3-495b-bf6b-427931799b9d"). InnerVolumeSpecName "kube-api-access-cdz8f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.386472 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff71f3d-8bd3-495b-bf6b-427931799b9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bff71f3d-8bd3-495b-bf6b-427931799b9d" (UID: "bff71f3d-8bd3-495b-bf6b-427931799b9d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.398976 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff71f3d-8bd3-495b-bf6b-427931799b9d-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "bff71f3d-8bd3-495b-bf6b-427931799b9d" (UID: "bff71f3d-8bd3-495b-bf6b-427931799b9d"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.408140 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff71f3d-8bd3-495b-bf6b-427931799b9d-config-data" (OuterVolumeSpecName: "config-data") pod "bff71f3d-8bd3-495b-bf6b-427931799b9d" (UID: "bff71f3d-8bd3-495b-bf6b-427931799b9d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.425173 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff71f3d-8bd3-495b-bf6b-427931799b9d-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "bff71f3d-8bd3-495b-bf6b-427931799b9d" (UID: "bff71f3d-8bd3-495b-bf6b-427931799b9d"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.461504 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1"
Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.462306 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"8e27cb8c-920a-4141-b783-51bf80dbb332","Type":"ContainerDied","Data":"808ad0ca931e7db0f9c3c625421f22ee61c47ec2afc43b82bdd0f8f6c375bb20"}
Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.462448 4821 scope.go:117] "RemoveContainer" containerID="a123aaf4d2d51e46f945473c34ec79e761010258eaaf4b139fa83d06005ba950"
Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.463653 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdz8f\" (UniqueName: \"kubernetes.io/projected/bff71f3d-8bd3-495b-bf6b-427931799b9d-kube-api-access-cdz8f\") on node \"crc\" DevicePath \"\""
Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.463682 4821 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bff71f3d-8bd3-495b-bf6b-427931799b9d-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.463691 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff71f3d-8bd3-495b-bf6b-427931799b9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.463700 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bff71f3d-8bd3-495b-bf6b-427931799b9d-config-data\") on node \"crc\" DevicePath \"\""
Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.463712 4821 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/bff71f3d-8bd3-495b-bf6b-427931799b9d-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.465169 4821 generic.go:334] "Generic (PLEG): container finished" podID="09f03aaf-e82a-4216-88cc-a79293d41916" containerID="e58da97db5f5517ac5c5fc5c87d375a3a6ac05301c51a187b1469735bab08300" exitCode=0
Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.465238 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-m59cp" event={"ID":"09f03aaf-e82a-4216-88cc-a79293d41916","Type":"ContainerDied","Data":"e58da97db5f5517ac5c5fc5c87d375a3a6ac05301c51a187b1469735bab08300"}
Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.469442 4821 generic.go:334] "Generic (PLEG): container finished" podID="bff71f3d-8bd3-495b-bf6b-427931799b9d" containerID="802aac49d2c4f22330c10d60fffafe11d9635ef9d07e0f5e9ba9341e2ee8ca14" exitCode=0
Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.469515 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"bff71f3d-8bd3-495b-bf6b-427931799b9d","Type":"ContainerDied","Data":"802aac49d2c4f22330c10d60fffafe11d9635ef9d07e0f5e9ba9341e2ee8ca14"}
Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.469544 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
event={"ID":"bff71f3d-8bd3-495b-bf6b-427931799b9d","Type":"ContainerDied","Data":"7e0ddaef6237799ea77e2cbc0894981bf72e9a6750044dbfa9d28750d718faa1"} Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.469602 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.481949 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"564ea886-2427-45ab-be4a-adf79b21f4d7","Type":"ContainerDied","Data":"fa3ba65314402e8e0a6dbd6642bca8477542d5434153900cb8fc61fcc5336f5c"} Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.482041 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.513230 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.514945 4821 scope.go:117] "RemoveContainer" containerID="72b1bee4798c5583f5266f146078855331e4ae76cbdee31dfcc328897ea9f41b" Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.533856 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.534991 4821 scope.go:117] "RemoveContainer" containerID="802aac49d2c4f22330c10d60fffafe11d9635ef9d07e0f5e9ba9341e2ee8ca14" Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.543851 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.560178 4821 scope.go:117] "RemoveContainer" containerID="802aac49d2c4f22330c10d60fffafe11d9635ef9d07e0f5e9ba9341e2ee8ca14" Mar 09 19:07:11 crc kubenswrapper[4821]: E0309 19:07:11.560652 4821 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"802aac49d2c4f22330c10d60fffafe11d9635ef9d07e0f5e9ba9341e2ee8ca14\": container with ID starting with 802aac49d2c4f22330c10d60fffafe11d9635ef9d07e0f5e9ba9341e2ee8ca14 not found: ID does not exist" containerID="802aac49d2c4f22330c10d60fffafe11d9635ef9d07e0f5e9ba9341e2ee8ca14" Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.560699 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"802aac49d2c4f22330c10d60fffafe11d9635ef9d07e0f5e9ba9341e2ee8ca14"} err="failed to get container status \"802aac49d2c4f22330c10d60fffafe11d9635ef9d07e0f5e9ba9341e2ee8ca14\": rpc error: code = NotFound desc = could not find container \"802aac49d2c4f22330c10d60fffafe11d9635ef9d07e0f5e9ba9341e2ee8ca14\": container with ID starting with 802aac49d2c4f22330c10d60fffafe11d9635ef9d07e0f5e9ba9341e2ee8ca14 not found: ID does not exist" Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.560724 4821 scope.go:117] "RemoveContainer" containerID="aeb7374ad6815648099b5dbd9255ff43c25f1da7e787a94b39e550868cff601a" Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.561939 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bff71f3d-8bd3-495b-bf6b-427931799b9d" path="/var/lib/kubelet/pods/bff71f3d-8bd3-495b-bf6b-427931799b9d/volumes" Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.562517 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.563361 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.577837 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:07:11 crc kubenswrapper[4821]: I0309 19:07:11.582610 4821 
scope.go:117] "RemoveContainer" containerID="f9bf7181772b269f0f1f443e155b2675292f96909bf29853ad2be3205c28aa67" Mar 09 19:07:12 crc kubenswrapper[4821]: I0309 19:07:12.157252 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:07:12 crc kubenswrapper[4821]: I0309 19:07:12.157629 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83" containerName="ceilometer-central-agent" containerID="cri-o://08533ae3af6d65afee0828d6c5a5e96a72daa0d3a184f8aa14db89f29fb2ddfd" gracePeriod=30 Mar 09 19:07:12 crc kubenswrapper[4821]: I0309 19:07:12.157795 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83" containerName="proxy-httpd" containerID="cri-o://604033f07aa0e19d99c1d6ded82e306cbbb260cc7053fb32da305d7a6df5788e" gracePeriod=30 Mar 09 19:07:12 crc kubenswrapper[4821]: I0309 19:07:12.157817 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83" containerName="ceilometer-notification-agent" containerID="cri-o://cb402887b5cb57c7d6270ee5c660ac0992d98f7e9183d616b603d7c22ca83f2d" gracePeriod=30 Mar 09 19:07:12 crc kubenswrapper[4821]: I0309 19:07:12.158076 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83" containerName="sg-core" containerID="cri-o://ef86fd9c6ad5d3f815d8002b92c62c3ed1658ee84fde89afea68a47cab4a711b" gracePeriod=30 Mar 09 19:07:12 crc kubenswrapper[4821]: I0309 19:07:12.497300 4821 generic.go:334] "Generic (PLEG): container finished" podID="51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83" containerID="604033f07aa0e19d99c1d6ded82e306cbbb260cc7053fb32da305d7a6df5788e" exitCode=0 Mar 09 19:07:12 crc 
kubenswrapper[4821]: I0309 19:07:12.497368 4821 generic.go:334] "Generic (PLEG): container finished" podID="51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83" containerID="ef86fd9c6ad5d3f815d8002b92c62c3ed1658ee84fde89afea68a47cab4a711b" exitCode=2 Mar 09 19:07:12 crc kubenswrapper[4821]: I0309 19:07:12.497395 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83","Type":"ContainerDied","Data":"604033f07aa0e19d99c1d6ded82e306cbbb260cc7053fb32da305d7a6df5788e"} Mar 09 19:07:12 crc kubenswrapper[4821]: I0309 19:07:12.497429 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83","Type":"ContainerDied","Data":"ef86fd9c6ad5d3f815d8002b92c62c3ed1658ee84fde89afea68a47cab4a711b"} Mar 09 19:07:12 crc kubenswrapper[4821]: I0309 19:07:12.551764 4821 scope.go:117] "RemoveContainer" containerID="2e97656a0c5c8ccdf249fb34ee3c3f4cd8501de8fced67c6294f9be90f6444d8" Mar 09 19:07:12 crc kubenswrapper[4821]: E0309 19:07:12.552407 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 19:07:12 crc kubenswrapper[4821]: I0309 19:07:12.820803 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-m59cp" Mar 09 19:07:12 crc kubenswrapper[4821]: I0309 19:07:12.938754 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:07:12 crc kubenswrapper[4821]: I0309 19:07:12.989432 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09f03aaf-e82a-4216-88cc-a79293d41916-operator-scripts\") pod \"09f03aaf-e82a-4216-88cc-a79293d41916\" (UID: \"09f03aaf-e82a-4216-88cc-a79293d41916\") " Mar 09 19:07:12 crc kubenswrapper[4821]: I0309 19:07:12.989541 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km7x2\" (UniqueName: \"kubernetes.io/projected/09f03aaf-e82a-4216-88cc-a79293d41916-kube-api-access-km7x2\") pod \"09f03aaf-e82a-4216-88cc-a79293d41916\" (UID: \"09f03aaf-e82a-4216-88cc-a79293d41916\") " Mar 09 19:07:12 crc kubenswrapper[4821]: I0309 19:07:12.990215 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09f03aaf-e82a-4216-88cc-a79293d41916-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "09f03aaf-e82a-4216-88cc-a79293d41916" (UID: "09f03aaf-e82a-4216-88cc-a79293d41916"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 19:07:12 crc kubenswrapper[4821]: I0309 19:07:12.995002 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09f03aaf-e82a-4216-88cc-a79293d41916-kube-api-access-km7x2" (OuterVolumeSpecName: "kube-api-access-km7x2") pod "09f03aaf-e82a-4216-88cc-a79293d41916" (UID: "09f03aaf-e82a-4216-88cc-a79293d41916"). InnerVolumeSpecName "kube-api-access-km7x2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.091362 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-log-httpd\") pod \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") " Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.091507 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-combined-ca-bundle\") pod \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") " Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.091573 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-sg-core-conf-yaml\") pod \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") " Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.091625 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqs8t\" (UniqueName: \"kubernetes.io/projected/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-kube-api-access-rqs8t\") pod \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") " Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.091690 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-config-data\") pod \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") " Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.091737 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-ceilometer-tls-certs\") pod \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") " Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.091806 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-run-httpd\") pod \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") " Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.092385 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83" (UID: "51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.092471 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83" (UID: "51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.092926 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-scripts\") pod \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") " Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.093989 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-km7x2\" (UniqueName: \"kubernetes.io/projected/09f03aaf-e82a-4216-88cc-a79293d41916-kube-api-access-km7x2\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.094026 4821 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.094043 4821 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.094060 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09f03aaf-e82a-4216-88cc-a79293d41916-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.096576 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-kube-api-access-rqs8t" (OuterVolumeSpecName: "kube-api-access-rqs8t") pod "51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83" (UID: "51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83"). InnerVolumeSpecName "kube-api-access-rqs8t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.101522 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-scripts" (OuterVolumeSpecName: "scripts") pod "51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83" (UID: "51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.112902 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83" (UID: "51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.134616 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83" (UID: "51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.160817 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83" (UID: "51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.194546 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-config-data" (OuterVolumeSpecName: "config-data") pod "51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83" (UID: "51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.194858 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-config-data\") pod \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\" (UID: \"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83\") " Mar 09 19:07:13 crc kubenswrapper[4821]: W0309 19:07:13.195091 4821 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83/volumes/kubernetes.io~secret/config-data Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.195120 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-config-data" (OuterVolumeSpecName: "config-data") pod "51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83" (UID: "51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.195759 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.195835 4821 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.195859 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqs8t\" (UniqueName: \"kubernetes.io/projected/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-kube-api-access-rqs8t\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.195885 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.195912 4821 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.195937 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.518588 4821 generic.go:334] "Generic (PLEG): container finished" podID="51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83" containerID="cb402887b5cb57c7d6270ee5c660ac0992d98f7e9183d616b603d7c22ca83f2d" exitCode=0 Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.518821 4821 
generic.go:334] "Generic (PLEG): container finished" podID="51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83" containerID="08533ae3af6d65afee0828d6c5a5e96a72daa0d3a184f8aa14db89f29fb2ddfd" exitCode=0 Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.518863 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83","Type":"ContainerDied","Data":"cb402887b5cb57c7d6270ee5c660ac0992d98f7e9183d616b603d7c22ca83f2d"} Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.518888 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83","Type":"ContainerDied","Data":"08533ae3af6d65afee0828d6c5a5e96a72daa0d3a184f8aa14db89f29fb2ddfd"} Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.518898 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83","Type":"ContainerDied","Data":"76879f0ed5eb771b5fb4137deef56f376561a0d94abef12eaa238be2304ee7ff"} Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.518914 4821 scope.go:117] "RemoveContainer" containerID="604033f07aa0e19d99c1d6ded82e306cbbb260cc7053fb32da305d7a6df5788e" Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.519018 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.530951 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-m59cp" event={"ID":"09f03aaf-e82a-4216-88cc-a79293d41916","Type":"ContainerDied","Data":"ed71ca2e12d17e6b9b1a4d81e1547cd832ff0133c48fde09562fe64692b6f086"} Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.531008 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed71ca2e12d17e6b9b1a4d81e1547cd832ff0133c48fde09562fe64692b6f086" Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.530968 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-m59cp" Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.558971 4821 scope.go:117] "RemoveContainer" containerID="ef86fd9c6ad5d3f815d8002b92c62c3ed1658ee84fde89afea68a47cab4a711b" Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.575271 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="564ea886-2427-45ab-be4a-adf79b21f4d7" path="/var/lib/kubelet/pods/564ea886-2427-45ab-be4a-adf79b21f4d7/volumes" Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.576160 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e27cb8c-920a-4141-b783-51bf80dbb332" path="/var/lib/kubelet/pods/8e27cb8c-920a-4141-b783-51bf80dbb332/volumes" Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.577004 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.584363 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.591803 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:07:13 crc 
kubenswrapper[4821]: E0309 19:07:13.592360 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09f03aaf-e82a-4216-88cc-a79293d41916" containerName="mariadb-account-delete" Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.592440 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f03aaf-e82a-4216-88cc-a79293d41916" containerName="mariadb-account-delete" Mar 09 19:07:13 crc kubenswrapper[4821]: E0309 19:07:13.592521 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e27cb8c-920a-4141-b783-51bf80dbb332" containerName="watcher-kuttl-api-log" Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.592569 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e27cb8c-920a-4141-b783-51bf80dbb332" containerName="watcher-kuttl-api-log" Mar 09 19:07:13 crc kubenswrapper[4821]: E0309 19:07:13.592619 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e27cb8c-920a-4141-b783-51bf80dbb332" containerName="watcher-api" Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.592679 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e27cb8c-920a-4141-b783-51bf80dbb332" containerName="watcher-api" Mar 09 19:07:13 crc kubenswrapper[4821]: E0309 19:07:13.592746 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="564ea886-2427-45ab-be4a-adf79b21f4d7" containerName="watcher-api" Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.592809 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="564ea886-2427-45ab-be4a-adf79b21f4d7" containerName="watcher-api" Mar 09 19:07:13 crc kubenswrapper[4821]: E0309 19:07:13.592875 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83" containerName="proxy-httpd" Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.592977 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83" containerName="proxy-httpd" Mar 09 19:07:13 crc 
kubenswrapper[4821]: E0309 19:07:13.593155 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bff71f3d-8bd3-495b-bf6b-427931799b9d" containerName="watcher-decision-engine"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.593209 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="bff71f3d-8bd3-495b-bf6b-427931799b9d" containerName="watcher-decision-engine"
Mar 09 19:07:13 crc kubenswrapper[4821]: E0309 19:07:13.593261 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="564ea886-2427-45ab-be4a-adf79b21f4d7" containerName="watcher-kuttl-api-log"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.593344 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="564ea886-2427-45ab-be4a-adf79b21f4d7" containerName="watcher-kuttl-api-log"
Mar 09 19:07:13 crc kubenswrapper[4821]: E0309 19:07:13.593453 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83" containerName="ceilometer-notification-agent"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.593530 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83" containerName="ceilometer-notification-agent"
Mar 09 19:07:13 crc kubenswrapper[4821]: E0309 19:07:13.593586 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83" containerName="sg-core"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.593631 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83" containerName="sg-core"
Mar 09 19:07:13 crc kubenswrapper[4821]: E0309 19:07:13.593681 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83" containerName="ceilometer-central-agent"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.593725 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83" containerName="ceilometer-central-agent"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.593977 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83" containerName="ceilometer-notification-agent"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.594064 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83" containerName="sg-core"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.594136 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="564ea886-2427-45ab-be4a-adf79b21f4d7" containerName="watcher-api"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.594201 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e27cb8c-920a-4141-b783-51bf80dbb332" containerName="watcher-api"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.594273 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="09f03aaf-e82a-4216-88cc-a79293d41916" containerName="mariadb-account-delete"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.594747 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e27cb8c-920a-4141-b783-51bf80dbb332" containerName="watcher-kuttl-api-log"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.594850 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83" containerName="proxy-httpd"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.594909 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="bff71f3d-8bd3-495b-bf6b-427931799b9d" containerName="watcher-decision-engine"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.594968 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83" containerName="ceilometer-central-agent"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.595025 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="564ea886-2427-45ab-be4a-adf79b21f4d7" containerName="watcher-kuttl-api-log"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.593129 4821 scope.go:117] "RemoveContainer" containerID="cb402887b5cb57c7d6270ee5c660ac0992d98f7e9183d616b603d7c22ca83f2d"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.597882 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.607653 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.607725 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.609407 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.613745 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.634466 4821 scope.go:117] "RemoveContainer" containerID="08533ae3af6d65afee0828d6c5a5e96a72daa0d3a184f8aa14db89f29fb2ddfd"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.656121 4821 scope.go:117] "RemoveContainer" containerID="604033f07aa0e19d99c1d6ded82e306cbbb260cc7053fb32da305d7a6df5788e"
Mar 09 19:07:13 crc kubenswrapper[4821]: E0309 19:07:13.656473 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"604033f07aa0e19d99c1d6ded82e306cbbb260cc7053fb32da305d7a6df5788e\": container with ID starting with 604033f07aa0e19d99c1d6ded82e306cbbb260cc7053fb32da305d7a6df5788e not found: ID does not exist" containerID="604033f07aa0e19d99c1d6ded82e306cbbb260cc7053fb32da305d7a6df5788e"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.656509 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"604033f07aa0e19d99c1d6ded82e306cbbb260cc7053fb32da305d7a6df5788e"} err="failed to get container status \"604033f07aa0e19d99c1d6ded82e306cbbb260cc7053fb32da305d7a6df5788e\": rpc error: code = NotFound desc = could not find container \"604033f07aa0e19d99c1d6ded82e306cbbb260cc7053fb32da305d7a6df5788e\": container with ID starting with 604033f07aa0e19d99c1d6ded82e306cbbb260cc7053fb32da305d7a6df5788e not found: ID does not exist"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.656531 4821 scope.go:117] "RemoveContainer" containerID="ef86fd9c6ad5d3f815d8002b92c62c3ed1658ee84fde89afea68a47cab4a711b"
Mar 09 19:07:13 crc kubenswrapper[4821]: E0309 19:07:13.656737 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef86fd9c6ad5d3f815d8002b92c62c3ed1658ee84fde89afea68a47cab4a711b\": container with ID starting with ef86fd9c6ad5d3f815d8002b92c62c3ed1658ee84fde89afea68a47cab4a711b not found: ID does not exist" containerID="ef86fd9c6ad5d3f815d8002b92c62c3ed1658ee84fde89afea68a47cab4a711b"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.656760 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef86fd9c6ad5d3f815d8002b92c62c3ed1658ee84fde89afea68a47cab4a711b"} err="failed to get container status \"ef86fd9c6ad5d3f815d8002b92c62c3ed1658ee84fde89afea68a47cab4a711b\": rpc error: code = NotFound desc = could not find container \"ef86fd9c6ad5d3f815d8002b92c62c3ed1658ee84fde89afea68a47cab4a711b\": container with ID starting with ef86fd9c6ad5d3f815d8002b92c62c3ed1658ee84fde89afea68a47cab4a711b not found: ID does not exist"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.656777 4821 scope.go:117] "RemoveContainer" containerID="cb402887b5cb57c7d6270ee5c660ac0992d98f7e9183d616b603d7c22ca83f2d"
Mar 09 19:07:13 crc kubenswrapper[4821]: E0309 19:07:13.656953 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb402887b5cb57c7d6270ee5c660ac0992d98f7e9183d616b603d7c22ca83f2d\": container with ID starting with cb402887b5cb57c7d6270ee5c660ac0992d98f7e9183d616b603d7c22ca83f2d not found: ID does not exist" containerID="cb402887b5cb57c7d6270ee5c660ac0992d98f7e9183d616b603d7c22ca83f2d"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.656977 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb402887b5cb57c7d6270ee5c660ac0992d98f7e9183d616b603d7c22ca83f2d"} err="failed to get container status \"cb402887b5cb57c7d6270ee5c660ac0992d98f7e9183d616b603d7c22ca83f2d\": rpc error: code = NotFound desc = could not find container \"cb402887b5cb57c7d6270ee5c660ac0992d98f7e9183d616b603d7c22ca83f2d\": container with ID starting with cb402887b5cb57c7d6270ee5c660ac0992d98f7e9183d616b603d7c22ca83f2d not found: ID does not exist"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.656990 4821 scope.go:117] "RemoveContainer" containerID="08533ae3af6d65afee0828d6c5a5e96a72daa0d3a184f8aa14db89f29fb2ddfd"
Mar 09 19:07:13 crc kubenswrapper[4821]: E0309 19:07:13.657308 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08533ae3af6d65afee0828d6c5a5e96a72daa0d3a184f8aa14db89f29fb2ddfd\": container with ID starting with 08533ae3af6d65afee0828d6c5a5e96a72daa0d3a184f8aa14db89f29fb2ddfd not found: ID does not exist" containerID="08533ae3af6d65afee0828d6c5a5e96a72daa0d3a184f8aa14db89f29fb2ddfd"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.657377 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08533ae3af6d65afee0828d6c5a5e96a72daa0d3a184f8aa14db89f29fb2ddfd"} err="failed to get container status \"08533ae3af6d65afee0828d6c5a5e96a72daa0d3a184f8aa14db89f29fb2ddfd\": rpc error: code = NotFound desc = could not find container \"08533ae3af6d65afee0828d6c5a5e96a72daa0d3a184f8aa14db89f29fb2ddfd\": container with ID starting with 08533ae3af6d65afee0828d6c5a5e96a72daa0d3a184f8aa14db89f29fb2ddfd not found: ID does not exist"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.657391 4821 scope.go:117] "RemoveContainer" containerID="604033f07aa0e19d99c1d6ded82e306cbbb260cc7053fb32da305d7a6df5788e"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.657637 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"604033f07aa0e19d99c1d6ded82e306cbbb260cc7053fb32da305d7a6df5788e"} err="failed to get container status \"604033f07aa0e19d99c1d6ded82e306cbbb260cc7053fb32da305d7a6df5788e\": rpc error: code = NotFound desc = could not find container \"604033f07aa0e19d99c1d6ded82e306cbbb260cc7053fb32da305d7a6df5788e\": container with ID starting with 604033f07aa0e19d99c1d6ded82e306cbbb260cc7053fb32da305d7a6df5788e not found: ID does not exist"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.657656 4821 scope.go:117] "RemoveContainer" containerID="ef86fd9c6ad5d3f815d8002b92c62c3ed1658ee84fde89afea68a47cab4a711b"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.658674 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef86fd9c6ad5d3f815d8002b92c62c3ed1658ee84fde89afea68a47cab4a711b"} err="failed to get container status \"ef86fd9c6ad5d3f815d8002b92c62c3ed1658ee84fde89afea68a47cab4a711b\": rpc error: code = NotFound desc = could not find container \"ef86fd9c6ad5d3f815d8002b92c62c3ed1658ee84fde89afea68a47cab4a711b\": container with ID starting with ef86fd9c6ad5d3f815d8002b92c62c3ed1658ee84fde89afea68a47cab4a711b not found: ID does not exist"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.658718 4821 scope.go:117] "RemoveContainer" containerID="cb402887b5cb57c7d6270ee5c660ac0992d98f7e9183d616b603d7c22ca83f2d"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.659334 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb402887b5cb57c7d6270ee5c660ac0992d98f7e9183d616b603d7c22ca83f2d"} err="failed to get container status \"cb402887b5cb57c7d6270ee5c660ac0992d98f7e9183d616b603d7c22ca83f2d\": rpc error: code = NotFound desc = could not find container \"cb402887b5cb57c7d6270ee5c660ac0992d98f7e9183d616b603d7c22ca83f2d\": container with ID starting with cb402887b5cb57c7d6270ee5c660ac0992d98f7e9183d616b603d7c22ca83f2d not found: ID does not exist"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.659361 4821 scope.go:117] "RemoveContainer" containerID="08533ae3af6d65afee0828d6c5a5e96a72daa0d3a184f8aa14db89f29fb2ddfd"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.659623 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08533ae3af6d65afee0828d6c5a5e96a72daa0d3a184f8aa14db89f29fb2ddfd"} err="failed to get container status \"08533ae3af6d65afee0828d6c5a5e96a72daa0d3a184f8aa14db89f29fb2ddfd\": rpc error: code = NotFound desc = could not find container \"08533ae3af6d65afee0828d6c5a5e96a72daa0d3a184f8aa14db89f29fb2ddfd\": container with ID starting with 08533ae3af6d65afee0828d6c5a5e96a72daa0d3a184f8aa14db89f29fb2ddfd not found: ID does not exist"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.712127 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62075a69-e200-43a4-89db-a4842953538d-scripts\") pod \"ceilometer-0\" (UID: \"62075a69-e200-43a4-89db-a4842953538d\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.712163 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tcxt\" (UniqueName: \"kubernetes.io/projected/62075a69-e200-43a4-89db-a4842953538d-kube-api-access-7tcxt\") pod \"ceilometer-0\" (UID: \"62075a69-e200-43a4-89db-a4842953538d\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.712215 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/62075a69-e200-43a4-89db-a4842953538d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"62075a69-e200-43a4-89db-a4842953538d\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.712232 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62075a69-e200-43a4-89db-a4842953538d-run-httpd\") pod \"ceilometer-0\" (UID: \"62075a69-e200-43a4-89db-a4842953538d\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.712286 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62075a69-e200-43a4-89db-a4842953538d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62075a69-e200-43a4-89db-a4842953538d\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.712305 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62075a69-e200-43a4-89db-a4842953538d-log-httpd\") pod \"ceilometer-0\" (UID: \"62075a69-e200-43a4-89db-a4842953538d\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.712347 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62075a69-e200-43a4-89db-a4842953538d-config-data\") pod \"ceilometer-0\" (UID: \"62075a69-e200-43a4-89db-a4842953538d\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.712363 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62075a69-e200-43a4-89db-a4842953538d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"62075a69-e200-43a4-89db-a4842953538d\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.813260 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62075a69-e200-43a4-89db-a4842953538d-scripts\") pod \"ceilometer-0\" (UID: \"62075a69-e200-43a4-89db-a4842953538d\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.813308 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tcxt\" (UniqueName: \"kubernetes.io/projected/62075a69-e200-43a4-89db-a4842953538d-kube-api-access-7tcxt\") pod \"ceilometer-0\" (UID: \"62075a69-e200-43a4-89db-a4842953538d\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.813361 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/62075a69-e200-43a4-89db-a4842953538d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"62075a69-e200-43a4-89db-a4842953538d\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.813378 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62075a69-e200-43a4-89db-a4842953538d-run-httpd\") pod \"ceilometer-0\" (UID: \"62075a69-e200-43a4-89db-a4842953538d\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.813447 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62075a69-e200-43a4-89db-a4842953538d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62075a69-e200-43a4-89db-a4842953538d\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.813477 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62075a69-e200-43a4-89db-a4842953538d-log-httpd\") pod \"ceilometer-0\" (UID: \"62075a69-e200-43a4-89db-a4842953538d\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.813510 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62075a69-e200-43a4-89db-a4842953538d-config-data\") pod \"ceilometer-0\" (UID: \"62075a69-e200-43a4-89db-a4842953538d\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.813532 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62075a69-e200-43a4-89db-a4842953538d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"62075a69-e200-43a4-89db-a4842953538d\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.814168 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62075a69-e200-43a4-89db-a4842953538d-run-httpd\") pod \"ceilometer-0\" (UID: \"62075a69-e200-43a4-89db-a4842953538d\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.814250 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62075a69-e200-43a4-89db-a4842953538d-log-httpd\") pod \"ceilometer-0\" (UID: \"62075a69-e200-43a4-89db-a4842953538d\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.821864 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62075a69-e200-43a4-89db-a4842953538d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"62075a69-e200-43a4-89db-a4842953538d\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.821883 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62075a69-e200-43a4-89db-a4842953538d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"62075a69-e200-43a4-89db-a4842953538d\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.827809 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62075a69-e200-43a4-89db-a4842953538d-config-data\") pod \"ceilometer-0\" (UID: \"62075a69-e200-43a4-89db-a4842953538d\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.819047 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62075a69-e200-43a4-89db-a4842953538d-scripts\") pod \"ceilometer-0\" (UID: \"62075a69-e200-43a4-89db-a4842953538d\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.831928 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/62075a69-e200-43a4-89db-a4842953538d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"62075a69-e200-43a4-89db-a4842953538d\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.836614 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tcxt\" (UniqueName: \"kubernetes.io/projected/62075a69-e200-43a4-89db-a4842953538d-kube-api-access-7tcxt\") pod \"ceilometer-0\" (UID: \"62075a69-e200-43a4-89db-a4842953538d\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:07:13 crc kubenswrapper[4821]: I0309 19:07:13.916002 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.088407 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8z7mt"
Mar 09 19:07:14 crc kubenswrapper[4821]: E0309 19:07:14.088880 4821 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda25ba8ed_fec0_4e0d_9006_4aef28a83e53.slice/crio-conmon-94ef77b601843f18be5b8e4a6ea6f7d8dd433cd1df13ff0a49039a9f6a9b8a93.scope\": RecentStats: unable to find data in memory cache]"
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.089716 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8z7mt"
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.122387 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-btk74"]
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.164585 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8z7mt"
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.170387 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-btk74"]
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.197387 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-m59cp"]
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.216417 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-m59cp"]
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.249353 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-jdcqq"]
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.264630 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-jdcqq"]
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.435924 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.542946 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a25ba8ed-fec0-4e0d-9006-4aef28a83e53-cert-memcached-mtls\") pod \"a25ba8ed-fec0-4e0d-9006-4aef28a83e53\" (UID: \"a25ba8ed-fec0-4e0d-9006-4aef28a83e53\") "
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.543010 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a25ba8ed-fec0-4e0d-9006-4aef28a83e53-combined-ca-bundle\") pod \"a25ba8ed-fec0-4e0d-9006-4aef28a83e53\" (UID: \"a25ba8ed-fec0-4e0d-9006-4aef28a83e53\") "
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.543096 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8pkm\" (UniqueName: \"kubernetes.io/projected/a25ba8ed-fec0-4e0d-9006-4aef28a83e53-kube-api-access-s8pkm\") pod \"a25ba8ed-fec0-4e0d-9006-4aef28a83e53\" (UID: \"a25ba8ed-fec0-4e0d-9006-4aef28a83e53\") "
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.543120 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a25ba8ed-fec0-4e0d-9006-4aef28a83e53-logs\") pod \"a25ba8ed-fec0-4e0d-9006-4aef28a83e53\" (UID: \"a25ba8ed-fec0-4e0d-9006-4aef28a83e53\") "
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.543141 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a25ba8ed-fec0-4e0d-9006-4aef28a83e53-config-data\") pod \"a25ba8ed-fec0-4e0d-9006-4aef28a83e53\" (UID: \"a25ba8ed-fec0-4e0d-9006-4aef28a83e53\") "
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.544649 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a25ba8ed-fec0-4e0d-9006-4aef28a83e53-logs" (OuterVolumeSpecName: "logs") pod "a25ba8ed-fec0-4e0d-9006-4aef28a83e53" (UID: "a25ba8ed-fec0-4e0d-9006-4aef28a83e53"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.548047 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a25ba8ed-fec0-4e0d-9006-4aef28a83e53-kube-api-access-s8pkm" (OuterVolumeSpecName: "kube-api-access-s8pkm") pod "a25ba8ed-fec0-4e0d-9006-4aef28a83e53" (UID: "a25ba8ed-fec0-4e0d-9006-4aef28a83e53"). InnerVolumeSpecName "kube-api-access-s8pkm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.577912 4821 generic.go:334] "Generic (PLEG): container finished" podID="a25ba8ed-fec0-4e0d-9006-4aef28a83e53" containerID="94ef77b601843f18be5b8e4a6ea6f7d8dd433cd1df13ff0a49039a9f6a9b8a93" exitCode=0
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.577969 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"a25ba8ed-fec0-4e0d-9006-4aef28a83e53","Type":"ContainerDied","Data":"94ef77b601843f18be5b8e4a6ea6f7d8dd433cd1df13ff0a49039a9f6a9b8a93"}
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.577994 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"a25ba8ed-fec0-4e0d-9006-4aef28a83e53","Type":"ContainerDied","Data":"ce3a641c527055dd737c4c248d7cf70d1171be4082da3e47f04e6ca753f48b02"}
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.578010 4821 scope.go:117] "RemoveContainer" containerID="94ef77b601843f18be5b8e4a6ea6f7d8dd433cd1df13ff0a49039a9f6a9b8a93"
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.578101 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.607274 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.618282 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25ba8ed-fec0-4e0d-9006-4aef28a83e53-config-data" (OuterVolumeSpecName: "config-data") pod "a25ba8ed-fec0-4e0d-9006-4aef28a83e53" (UID: "a25ba8ed-fec0-4e0d-9006-4aef28a83e53"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.629453 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25ba8ed-fec0-4e0d-9006-4aef28a83e53-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "a25ba8ed-fec0-4e0d-9006-4aef28a83e53" (UID: "a25ba8ed-fec0-4e0d-9006-4aef28a83e53"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.629481 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25ba8ed-fec0-4e0d-9006-4aef28a83e53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a25ba8ed-fec0-4e0d-9006-4aef28a83e53" (UID: "a25ba8ed-fec0-4e0d-9006-4aef28a83e53"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.631995 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8z7mt"
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.645551 4821 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a25ba8ed-fec0-4e0d-9006-4aef28a83e53-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.645581 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a25ba8ed-fec0-4e0d-9006-4aef28a83e53-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.645634 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8pkm\" (UniqueName: \"kubernetes.io/projected/a25ba8ed-fec0-4e0d-9006-4aef28a83e53-kube-api-access-s8pkm\") on node \"crc\" DevicePath \"\""
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.645646 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a25ba8ed-fec0-4e0d-9006-4aef28a83e53-logs\") on node \"crc\" DevicePath \"\""
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.645655 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a25ba8ed-fec0-4e0d-9006-4aef28a83e53-config-data\") on node \"crc\" DevicePath \"\""
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.695342 4821 scope.go:117] "RemoveContainer" containerID="94ef77b601843f18be5b8e4a6ea6f7d8dd433cd1df13ff0a49039a9f6a9b8a93"
Mar 09 19:07:14 crc kubenswrapper[4821]: E0309 19:07:14.695677 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94ef77b601843f18be5b8e4a6ea6f7d8dd433cd1df13ff0a49039a9f6a9b8a93\": container with ID starting with 94ef77b601843f18be5b8e4a6ea6f7d8dd433cd1df13ff0a49039a9f6a9b8a93 not found: ID does not exist" containerID="94ef77b601843f18be5b8e4a6ea6f7d8dd433cd1df13ff0a49039a9f6a9b8a93"
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.695719 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94ef77b601843f18be5b8e4a6ea6f7d8dd433cd1df13ff0a49039a9f6a9b8a93"} err="failed to get container status \"94ef77b601843f18be5b8e4a6ea6f7d8dd433cd1df13ff0a49039a9f6a9b8a93\": rpc error: code = NotFound desc = could not find container \"94ef77b601843f18be5b8e4a6ea6f7d8dd433cd1df13ff0a49039a9f6a9b8a93\": container with ID starting with 94ef77b601843f18be5b8e4a6ea6f7d8dd433cd1df13ff0a49039a9f6a9b8a93 not found: ID does not exist"
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.914402 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Mar 09 19:07:14 crc kubenswrapper[4821]: I0309 19:07:14.923057 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Mar 09 19:07:15 crc kubenswrapper[4821]: I0309 19:07:15.431553 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="564ea886-2427-45ab-be4a-adf79b21f4d7" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.233:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 09 19:07:15 crc kubenswrapper[4821]: I0309 19:07:15.432010 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="564ea886-2427-45ab-be4a-adf79b21f4d7" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.233:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 09 19:07:15 crc kubenswrapper[4821]: I0309 19:07:15.551802 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="8e27cb8c-920a-4141-b783-51bf80dbb332" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.235:9322/\": dial tcp 10.217.0.235:9322: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Mar 09 19:07:15 crc kubenswrapper[4821]: I0309 19:07:15.551841 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="8e27cb8c-920a-4141-b783-51bf80dbb332" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.235:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 09 19:07:15 crc kubenswrapper[4821]: I0309 19:07:15.562007 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09f03aaf-e82a-4216-88cc-a79293d41916" path="/var/lib/kubelet/pods/09f03aaf-e82a-4216-88cc-a79293d41916/volumes"
Mar 09 19:07:15 crc kubenswrapper[4821]: I0309 19:07:15.562725 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e107db2-5948-4a60-9745-59aae128e9b6" path="/var/lib/kubelet/pods/2e107db2-5948-4a60-9745-59aae128e9b6/volumes"
Mar 09 19:07:15 crc kubenswrapper[4821]: I0309 19:07:15.563232 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83" path="/var/lib/kubelet/pods/51cb0cfb-c1c0-488d-bf4e-a9d1f6993b83/volumes"
Mar 09 19:07:15 crc kubenswrapper[4821]: I0309 19:07:15.564440 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a25ba8ed-fec0-4e0d-9006-4aef28a83e53" path="/var/lib/kubelet/pods/a25ba8ed-fec0-4e0d-9006-4aef28a83e53/volumes"
Mar 09 19:07:15 crc kubenswrapper[4821]: I0309 19:07:15.564915 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4de929b-9a1a-4d74-a3c0-06bfea05f227" path="/var/lib/kubelet/pods/b4de929b-9a1a-4d74-a3c0-06bfea05f227/volumes"
Mar 09 19:07:15 crc kubenswrapper[4821]: I0309 19:07:15.590958 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"62075a69-e200-43a4-89db-a4842953538d","Type":"ContainerStarted","Data":"ad240f5ed213d46e31676dda2c0a1445189c5fa7321dbe115428d7e672568000"}
Mar 09 19:07:15 crc kubenswrapper[4821]: I0309 19:07:15.591006 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"62075a69-e200-43a4-89db-a4842953538d","Type":"ContainerStarted","Data":"b838dc4bbd16a456310f11449cd5cce2af9e91b57efb755ba3eed31fa77656bf"}
Mar 09 19:07:16 crc kubenswrapper[4821]: I0309 19:07:16.602586 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"62075a69-e200-43a4-89db-a4842953538d","Type":"ContainerStarted","Data":"cd72c32ad2c5e7478fe077f9a0b00d9312bead2acc4233b92a02606ed69d6d69"}
Mar 09 19:07:16 crc kubenswrapper[4821]: I0309 19:07:16.603297 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"62075a69-e200-43a4-89db-a4842953538d","Type":"ContainerStarted","Data":"25b3d1ebb730fc197b3632fdf1567f9beb55e7d616b4eb869cc1e30454245356"}
Mar 09 19:07:16 crc kubenswrapper[4821]: I0309 19:07:16.754761 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-nsd9f"]
Mar 09 19:07:16 crc kubenswrapper[4821]: E0309 19:07:16.755280 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25ba8ed-fec0-4e0d-9006-4aef28a83e53" containerName="watcher-applier"
Mar 09 19:07:16 crc kubenswrapper[4821]: I0309 19:07:16.755367 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25ba8ed-fec0-4e0d-9006-4aef28a83e53" containerName="watcher-applier"
Mar 09 19:07:16 crc kubenswrapper[4821]: I0309 19:07:16.755579 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="a25ba8ed-fec0-4e0d-9006-4aef28a83e53" containerName="watcher-applier"
Mar 09 19:07:16 crc kubenswrapper[4821]: I0309 19:07:16.756121 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-nsd9f"
Mar 09 19:07:16 crc kubenswrapper[4821]: I0309 19:07:16.769210 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-nsd9f"]
Mar 09 19:07:16 crc kubenswrapper[4821]: I0309 19:07:16.786899 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-3f6e-account-create-update-fbhkz"]
Mar 09 19:07:16 crc kubenswrapper[4821]: I0309 19:07:16.787949 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-3f6e-account-create-update-fbhkz"
Mar 09 19:07:16 crc kubenswrapper[4821]: I0309 19:07:16.792677 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret"
Mar 09 19:07:16 crc kubenswrapper[4821]: I0309 19:07:16.814216 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-3f6e-account-create-update-fbhkz"]
Mar 09 19:07:16 crc kubenswrapper[4821]: I0309 19:07:16.877338 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn9pz\" (UniqueName: \"kubernetes.io/projected/5c126569-ec50-4fa4-b063-1ddad5932f62-kube-api-access-nn9pz\") pod \"watcher-3f6e-account-create-update-fbhkz\" (UID: \"5c126569-ec50-4fa4-b063-1ddad5932f62\") " pod="watcher-kuttl-default/watcher-3f6e-account-create-update-fbhkz"
Mar 09 19:07:16 crc kubenswrapper[4821]: I0309 19:07:16.877597 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c126569-ec50-4fa4-b063-1ddad5932f62-operator-scripts\") pod \"watcher-3f6e-account-create-update-fbhkz\" (UID: \"5c126569-ec50-4fa4-b063-1ddad5932f62\") " pod="watcher-kuttl-default/watcher-3f6e-account-create-update-fbhkz"
Mar 09 19:07:16 crc kubenswrapper[4821]: I0309 19:07:16.877706 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2pr4\" (UniqueName: \"kubernetes.io/projected/90e1deb8-31e8-436b-a590-e4befb1e61da-kube-api-access-q2pr4\") pod \"watcher-db-create-nsd9f\" (UID: \"90e1deb8-31e8-436b-a590-e4befb1e61da\") " pod="watcher-kuttl-default/watcher-db-create-nsd9f"
Mar 09 19:07:16 crc kubenswrapper[4821]: I0309 19:07:16.877833 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName:
\"kubernetes.io/configmap/90e1deb8-31e8-436b-a590-e4befb1e61da-operator-scripts\") pod \"watcher-db-create-nsd9f\" (UID: \"90e1deb8-31e8-436b-a590-e4befb1e61da\") " pod="watcher-kuttl-default/watcher-db-create-nsd9f" Mar 09 19:07:16 crc kubenswrapper[4821]: I0309 19:07:16.978992 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90e1deb8-31e8-436b-a590-e4befb1e61da-operator-scripts\") pod \"watcher-db-create-nsd9f\" (UID: \"90e1deb8-31e8-436b-a590-e4befb1e61da\") " pod="watcher-kuttl-default/watcher-db-create-nsd9f" Mar 09 19:07:16 crc kubenswrapper[4821]: I0309 19:07:16.979239 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn9pz\" (UniqueName: \"kubernetes.io/projected/5c126569-ec50-4fa4-b063-1ddad5932f62-kube-api-access-nn9pz\") pod \"watcher-3f6e-account-create-update-fbhkz\" (UID: \"5c126569-ec50-4fa4-b063-1ddad5932f62\") " pod="watcher-kuttl-default/watcher-3f6e-account-create-update-fbhkz" Mar 09 19:07:16 crc kubenswrapper[4821]: I0309 19:07:16.979432 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c126569-ec50-4fa4-b063-1ddad5932f62-operator-scripts\") pod \"watcher-3f6e-account-create-update-fbhkz\" (UID: \"5c126569-ec50-4fa4-b063-1ddad5932f62\") " pod="watcher-kuttl-default/watcher-3f6e-account-create-update-fbhkz" Mar 09 19:07:16 crc kubenswrapper[4821]: I0309 19:07:16.979542 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2pr4\" (UniqueName: \"kubernetes.io/projected/90e1deb8-31e8-436b-a590-e4befb1e61da-kube-api-access-q2pr4\") pod \"watcher-db-create-nsd9f\" (UID: \"90e1deb8-31e8-436b-a590-e4befb1e61da\") " pod="watcher-kuttl-default/watcher-db-create-nsd9f" Mar 09 19:07:16 crc kubenswrapper[4821]: I0309 19:07:16.979774 4821 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90e1deb8-31e8-436b-a590-e4befb1e61da-operator-scripts\") pod \"watcher-db-create-nsd9f\" (UID: \"90e1deb8-31e8-436b-a590-e4befb1e61da\") " pod="watcher-kuttl-default/watcher-db-create-nsd9f" Mar 09 19:07:16 crc kubenswrapper[4821]: I0309 19:07:16.980348 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c126569-ec50-4fa4-b063-1ddad5932f62-operator-scripts\") pod \"watcher-3f6e-account-create-update-fbhkz\" (UID: \"5c126569-ec50-4fa4-b063-1ddad5932f62\") " pod="watcher-kuttl-default/watcher-3f6e-account-create-update-fbhkz" Mar 09 19:07:17 crc kubenswrapper[4821]: I0309 19:07:17.003069 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn9pz\" (UniqueName: \"kubernetes.io/projected/5c126569-ec50-4fa4-b063-1ddad5932f62-kube-api-access-nn9pz\") pod \"watcher-3f6e-account-create-update-fbhkz\" (UID: \"5c126569-ec50-4fa4-b063-1ddad5932f62\") " pod="watcher-kuttl-default/watcher-3f6e-account-create-update-fbhkz" Mar 09 19:07:17 crc kubenswrapper[4821]: I0309 19:07:17.023270 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2pr4\" (UniqueName: \"kubernetes.io/projected/90e1deb8-31e8-436b-a590-e4befb1e61da-kube-api-access-q2pr4\") pod \"watcher-db-create-nsd9f\" (UID: \"90e1deb8-31e8-436b-a590-e4befb1e61da\") " pod="watcher-kuttl-default/watcher-db-create-nsd9f" Mar 09 19:07:17 crc kubenswrapper[4821]: I0309 19:07:17.080246 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-nsd9f" Mar 09 19:07:17 crc kubenswrapper[4821]: I0309 19:07:17.115714 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-3f6e-account-create-update-fbhkz" Mar 09 19:07:17 crc kubenswrapper[4821]: I0309 19:07:17.474580 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-3f6e-account-create-update-fbhkz"] Mar 09 19:07:17 crc kubenswrapper[4821]: W0309 19:07:17.480063 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c126569_ec50_4fa4_b063_1ddad5932f62.slice/crio-61902017ce74ed53922a0295b85dad2af771dec836509eb6e3e45ada9e8e4360 WatchSource:0}: Error finding container 61902017ce74ed53922a0295b85dad2af771dec836509eb6e3e45ada9e8e4360: Status 404 returned error can't find the container with id 61902017ce74ed53922a0295b85dad2af771dec836509eb6e3e45ada9e8e4360 Mar 09 19:07:17 crc kubenswrapper[4821]: I0309 19:07:17.661106 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-nsd9f"] Mar 09 19:07:17 crc kubenswrapper[4821]: I0309 19:07:17.664533 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-3f6e-account-create-update-fbhkz" event={"ID":"5c126569-ec50-4fa4-b063-1ddad5932f62","Type":"ContainerStarted","Data":"61902017ce74ed53922a0295b85dad2af771dec836509eb6e3e45ada9e8e4360"} Mar 09 19:07:18 crc kubenswrapper[4821]: I0309 19:07:18.677174 4821 generic.go:334] "Generic (PLEG): container finished" podID="5c126569-ec50-4fa4-b063-1ddad5932f62" containerID="456f0ed015285e04c1ff22745fe7aaaa47e3c304dd4925e2d12dfb1eead04364" exitCode=0 Mar 09 19:07:18 crc kubenswrapper[4821]: I0309 19:07:18.677253 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-3f6e-account-create-update-fbhkz" event={"ID":"5c126569-ec50-4fa4-b063-1ddad5932f62","Type":"ContainerDied","Data":"456f0ed015285e04c1ff22745fe7aaaa47e3c304dd4925e2d12dfb1eead04364"} Mar 09 19:07:18 crc kubenswrapper[4821]: I0309 19:07:18.681399 4821 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"62075a69-e200-43a4-89db-a4842953538d","Type":"ContainerStarted","Data":"bf80002be6943f4f05f1052c387cf53477c7f8df6c13eef9ddcbde24a39cebcb"} Mar 09 19:07:18 crc kubenswrapper[4821]: I0309 19:07:18.681675 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:07:18 crc kubenswrapper[4821]: I0309 19:07:18.683250 4821 generic.go:334] "Generic (PLEG): container finished" podID="90e1deb8-31e8-436b-a590-e4befb1e61da" containerID="9deefe21c8b2daf93d4a916dc2ba636c9627a32db28741fe4aae640707043b60" exitCode=0 Mar 09 19:07:18 crc kubenswrapper[4821]: I0309 19:07:18.683315 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-nsd9f" event={"ID":"90e1deb8-31e8-436b-a590-e4befb1e61da","Type":"ContainerDied","Data":"9deefe21c8b2daf93d4a916dc2ba636c9627a32db28741fe4aae640707043b60"} Mar 09 19:07:18 crc kubenswrapper[4821]: I0309 19:07:18.683399 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-nsd9f" event={"ID":"90e1deb8-31e8-436b-a590-e4befb1e61da","Type":"ContainerStarted","Data":"adafdc9e06d7c481150186cb7f614c5616a1fff6207f1a1d1577141bd7a94b66"} Mar 09 19:07:18 crc kubenswrapper[4821]: I0309 19:07:18.747988 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.153121899 podStartE2EDuration="5.747957664s" podCreationTimestamp="2026-03-09 19:07:13 +0000 UTC" firstStartedPulling="2026-03-09 19:07:14.629610229 +0000 UTC m=+2571.790986085" lastFinishedPulling="2026-03-09 19:07:18.224445974 +0000 UTC m=+2575.385821850" observedRunningTime="2026-03-09 19:07:18.738540078 +0000 UTC m=+2575.899915944" watchObservedRunningTime="2026-03-09 19:07:18.747957664 +0000 UTC m=+2575.909333530" Mar 09 19:07:20 crc kubenswrapper[4821]: I0309 19:07:20.069078 
4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-3f6e-account-create-update-fbhkz" Mar 09 19:07:20 crc kubenswrapper[4821]: I0309 19:07:20.093173 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-nsd9f" Mar 09 19:07:20 crc kubenswrapper[4821]: I0309 19:07:20.136417 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn9pz\" (UniqueName: \"kubernetes.io/projected/5c126569-ec50-4fa4-b063-1ddad5932f62-kube-api-access-nn9pz\") pod \"5c126569-ec50-4fa4-b063-1ddad5932f62\" (UID: \"5c126569-ec50-4fa4-b063-1ddad5932f62\") " Mar 09 19:07:20 crc kubenswrapper[4821]: I0309 19:07:20.136526 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2pr4\" (UniqueName: \"kubernetes.io/projected/90e1deb8-31e8-436b-a590-e4befb1e61da-kube-api-access-q2pr4\") pod \"90e1deb8-31e8-436b-a590-e4befb1e61da\" (UID: \"90e1deb8-31e8-436b-a590-e4befb1e61da\") " Mar 09 19:07:20 crc kubenswrapper[4821]: I0309 19:07:20.136610 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90e1deb8-31e8-436b-a590-e4befb1e61da-operator-scripts\") pod \"90e1deb8-31e8-436b-a590-e4befb1e61da\" (UID: \"90e1deb8-31e8-436b-a590-e4befb1e61da\") " Mar 09 19:07:20 crc kubenswrapper[4821]: I0309 19:07:20.136661 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c126569-ec50-4fa4-b063-1ddad5932f62-operator-scripts\") pod \"5c126569-ec50-4fa4-b063-1ddad5932f62\" (UID: \"5c126569-ec50-4fa4-b063-1ddad5932f62\") " Mar 09 19:07:20 crc kubenswrapper[4821]: I0309 19:07:20.137342 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/90e1deb8-31e8-436b-a590-e4befb1e61da-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "90e1deb8-31e8-436b-a590-e4befb1e61da" (UID: "90e1deb8-31e8-436b-a590-e4befb1e61da"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 19:07:20 crc kubenswrapper[4821]: I0309 19:07:20.137391 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c126569-ec50-4fa4-b063-1ddad5932f62-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5c126569-ec50-4fa4-b063-1ddad5932f62" (UID: "5c126569-ec50-4fa4-b063-1ddad5932f62"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 19:07:20 crc kubenswrapper[4821]: I0309 19:07:20.142265 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90e1deb8-31e8-436b-a590-e4befb1e61da-kube-api-access-q2pr4" (OuterVolumeSpecName: "kube-api-access-q2pr4") pod "90e1deb8-31e8-436b-a590-e4befb1e61da" (UID: "90e1deb8-31e8-436b-a590-e4befb1e61da"). InnerVolumeSpecName "kube-api-access-q2pr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:07:20 crc kubenswrapper[4821]: I0309 19:07:20.142393 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c126569-ec50-4fa4-b063-1ddad5932f62-kube-api-access-nn9pz" (OuterVolumeSpecName: "kube-api-access-nn9pz") pod "5c126569-ec50-4fa4-b063-1ddad5932f62" (UID: "5c126569-ec50-4fa4-b063-1ddad5932f62"). InnerVolumeSpecName "kube-api-access-nn9pz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:07:20 crc kubenswrapper[4821]: I0309 19:07:20.238719 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90e1deb8-31e8-436b-a590-e4befb1e61da-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:20 crc kubenswrapper[4821]: I0309 19:07:20.238784 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c126569-ec50-4fa4-b063-1ddad5932f62-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:20 crc kubenswrapper[4821]: I0309 19:07:20.238798 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn9pz\" (UniqueName: \"kubernetes.io/projected/5c126569-ec50-4fa4-b063-1ddad5932f62-kube-api-access-nn9pz\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:20 crc kubenswrapper[4821]: I0309 19:07:20.238814 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2pr4\" (UniqueName: \"kubernetes.io/projected/90e1deb8-31e8-436b-a590-e4befb1e61da-kube-api-access-q2pr4\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:20 crc kubenswrapper[4821]: I0309 19:07:20.705964 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-3f6e-account-create-update-fbhkz" event={"ID":"5c126569-ec50-4fa4-b063-1ddad5932f62","Type":"ContainerDied","Data":"61902017ce74ed53922a0295b85dad2af771dec836509eb6e3e45ada9e8e4360"} Mar 09 19:07:20 crc kubenswrapper[4821]: I0309 19:07:20.706301 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61902017ce74ed53922a0295b85dad2af771dec836509eb6e3e45ada9e8e4360" Mar 09 19:07:20 crc kubenswrapper[4821]: I0309 19:07:20.706418 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-3f6e-account-create-update-fbhkz" Mar 09 19:07:20 crc kubenswrapper[4821]: I0309 19:07:20.711450 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-nsd9f" event={"ID":"90e1deb8-31e8-436b-a590-e4befb1e61da","Type":"ContainerDied","Data":"adafdc9e06d7c481150186cb7f614c5616a1fff6207f1a1d1577141bd7a94b66"} Mar 09 19:07:20 crc kubenswrapper[4821]: I0309 19:07:20.711644 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adafdc9e06d7c481150186cb7f614c5616a1fff6207f1a1d1577141bd7a94b66" Mar 09 19:07:20 crc kubenswrapper[4821]: I0309 19:07:20.711523 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-nsd9f" Mar 09 19:07:21 crc kubenswrapper[4821]: I0309 19:07:21.935316 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8z7mt"] Mar 09 19:07:21 crc kubenswrapper[4821]: I0309 19:07:21.935589 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8z7mt" podUID="6d88f413-e5b1-4102-837f-fa3f5ca953f0" containerName="registry-server" containerID="cri-o://efac5be8c60745ca8aee14935470b899d572ba591f57228a7430f78dc6e32641" gracePeriod=2 Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.024587 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-6hlvv"] Mar 09 19:07:22 crc kubenswrapper[4821]: E0309 19:07:22.024954 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90e1deb8-31e8-436b-a590-e4befb1e61da" containerName="mariadb-database-create" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.024975 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="90e1deb8-31e8-436b-a590-e4befb1e61da" containerName="mariadb-database-create" Mar 09 19:07:22 crc kubenswrapper[4821]: 
E0309 19:07:22.024991 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c126569-ec50-4fa4-b063-1ddad5932f62" containerName="mariadb-account-create-update" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.024998 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c126569-ec50-4fa4-b063-1ddad5932f62" containerName="mariadb-account-create-update" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.025370 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="90e1deb8-31e8-436b-a590-e4befb1e61da" containerName="mariadb-database-create" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.025388 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c126569-ec50-4fa4-b063-1ddad5932f62" containerName="mariadb-account-create-update" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.025912 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-6hlvv" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.028283 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.034102 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-dtbkv" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.035989 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-6hlvv"] Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.070301 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlt7s\" (UniqueName: \"kubernetes.io/projected/ac1bca1d-c800-411b-aa6f-f71c343914ea-kube-api-access-xlt7s\") pod \"watcher-kuttl-db-sync-6hlvv\" (UID: \"ac1bca1d-c800-411b-aa6f-f71c343914ea\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-6hlvv" Mar 09 
19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.070371 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac1bca1d-c800-411b-aa6f-f71c343914ea-config-data\") pod \"watcher-kuttl-db-sync-6hlvv\" (UID: \"ac1bca1d-c800-411b-aa6f-f71c343914ea\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-6hlvv" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.070393 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac1bca1d-c800-411b-aa6f-f71c343914ea-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-6hlvv\" (UID: \"ac1bca1d-c800-411b-aa6f-f71c343914ea\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-6hlvv" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.070537 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ac1bca1d-c800-411b-aa6f-f71c343914ea-db-sync-config-data\") pod \"watcher-kuttl-db-sync-6hlvv\" (UID: \"ac1bca1d-c800-411b-aa6f-f71c343914ea\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-6hlvv" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.172419 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ac1bca1d-c800-411b-aa6f-f71c343914ea-db-sync-config-data\") pod \"watcher-kuttl-db-sync-6hlvv\" (UID: \"ac1bca1d-c800-411b-aa6f-f71c343914ea\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-6hlvv" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.172477 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlt7s\" (UniqueName: \"kubernetes.io/projected/ac1bca1d-c800-411b-aa6f-f71c343914ea-kube-api-access-xlt7s\") pod \"watcher-kuttl-db-sync-6hlvv\" (UID: 
\"ac1bca1d-c800-411b-aa6f-f71c343914ea\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-6hlvv" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.172503 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac1bca1d-c800-411b-aa6f-f71c343914ea-config-data\") pod \"watcher-kuttl-db-sync-6hlvv\" (UID: \"ac1bca1d-c800-411b-aa6f-f71c343914ea\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-6hlvv" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.172519 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac1bca1d-c800-411b-aa6f-f71c343914ea-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-6hlvv\" (UID: \"ac1bca1d-c800-411b-aa6f-f71c343914ea\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-6hlvv" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.184260 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ac1bca1d-c800-411b-aa6f-f71c343914ea-db-sync-config-data\") pod \"watcher-kuttl-db-sync-6hlvv\" (UID: \"ac1bca1d-c800-411b-aa6f-f71c343914ea\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-6hlvv" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.188542 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac1bca1d-c800-411b-aa6f-f71c343914ea-config-data\") pod \"watcher-kuttl-db-sync-6hlvv\" (UID: \"ac1bca1d-c800-411b-aa6f-f71c343914ea\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-6hlvv" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.193002 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac1bca1d-c800-411b-aa6f-f71c343914ea-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-6hlvv\" (UID: 
\"ac1bca1d-c800-411b-aa6f-f71c343914ea\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-6hlvv" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.203733 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlt7s\" (UniqueName: \"kubernetes.io/projected/ac1bca1d-c800-411b-aa6f-f71c343914ea-kube-api-access-xlt7s\") pod \"watcher-kuttl-db-sync-6hlvv\" (UID: \"ac1bca1d-c800-411b-aa6f-f71c343914ea\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-6hlvv" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.350517 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8z7mt" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.375526 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzzv2\" (UniqueName: \"kubernetes.io/projected/6d88f413-e5b1-4102-837f-fa3f5ca953f0-kube-api-access-kzzv2\") pod \"6d88f413-e5b1-4102-837f-fa3f5ca953f0\" (UID: \"6d88f413-e5b1-4102-837f-fa3f5ca953f0\") " Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.375638 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d88f413-e5b1-4102-837f-fa3f5ca953f0-utilities\") pod \"6d88f413-e5b1-4102-837f-fa3f5ca953f0\" (UID: \"6d88f413-e5b1-4102-837f-fa3f5ca953f0\") " Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.375711 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d88f413-e5b1-4102-837f-fa3f5ca953f0-catalog-content\") pod \"6d88f413-e5b1-4102-837f-fa3f5ca953f0\" (UID: \"6d88f413-e5b1-4102-837f-fa3f5ca953f0\") " Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.378161 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d88f413-e5b1-4102-837f-fa3f5ca953f0-utilities" 
(OuterVolumeSpecName: "utilities") pod "6d88f413-e5b1-4102-837f-fa3f5ca953f0" (UID: "6d88f413-e5b1-4102-837f-fa3f5ca953f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.382462 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d88f413-e5b1-4102-837f-fa3f5ca953f0-kube-api-access-kzzv2" (OuterVolumeSpecName: "kube-api-access-kzzv2") pod "6d88f413-e5b1-4102-837f-fa3f5ca953f0" (UID: "6d88f413-e5b1-4102-837f-fa3f5ca953f0"). InnerVolumeSpecName "kube-api-access-kzzv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.392640 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-6hlvv" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.444221 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d88f413-e5b1-4102-837f-fa3f5ca953f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d88f413-e5b1-4102-837f-fa3f5ca953f0" (UID: "6d88f413-e5b1-4102-837f-fa3f5ca953f0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.480260 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d88f413-e5b1-4102-837f-fa3f5ca953f0-utilities\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.480300 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d88f413-e5b1-4102-837f-fa3f5ca953f0-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.480328 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzzv2\" (UniqueName: \"kubernetes.io/projected/6d88f413-e5b1-4102-837f-fa3f5ca953f0-kube-api-access-kzzv2\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.735270 4821 generic.go:334] "Generic (PLEG): container finished" podID="6d88f413-e5b1-4102-837f-fa3f5ca953f0" containerID="efac5be8c60745ca8aee14935470b899d572ba591f57228a7430f78dc6e32641" exitCode=0 Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.735595 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8z7mt" event={"ID":"6d88f413-e5b1-4102-837f-fa3f5ca953f0","Type":"ContainerDied","Data":"efac5be8c60745ca8aee14935470b899d572ba591f57228a7430f78dc6e32641"} Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.735622 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8z7mt" event={"ID":"6d88f413-e5b1-4102-837f-fa3f5ca953f0","Type":"ContainerDied","Data":"6d1f9cef5ab3978d25f6df06b34fa3521e43a865ab605ce487829634fa274f45"} Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.735649 4821 scope.go:117] "RemoveContainer" containerID="efac5be8c60745ca8aee14935470b899d572ba591f57228a7430f78dc6e32641" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 
19:07:22.735804 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8z7mt" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.790577 4821 scope.go:117] "RemoveContainer" containerID="21d8a295b5778128d70554f1fb1a0900b8e2eba57f83edff9c2d2a945ac6554c" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.798264 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8z7mt"] Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.807081 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8z7mt"] Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.833047 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-6hlvv"] Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.834554 4821 scope.go:117] "RemoveContainer" containerID="161b40ec25597a447c41a31988157f953f636093ae52e0d5dbfa3c3bc5fbbcae" Mar 09 19:07:22 crc kubenswrapper[4821]: W0309 19:07:22.843758 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac1bca1d_c800_411b_aa6f_f71c343914ea.slice/crio-eeacaa01ada76ecaa90de67c6176fae28e4e266a1866a54131b2f95295016095 WatchSource:0}: Error finding container eeacaa01ada76ecaa90de67c6176fae28e4e266a1866a54131b2f95295016095: Status 404 returned error can't find the container with id eeacaa01ada76ecaa90de67c6176fae28e4e266a1866a54131b2f95295016095 Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.863844 4821 scope.go:117] "RemoveContainer" containerID="efac5be8c60745ca8aee14935470b899d572ba591f57228a7430f78dc6e32641" Mar 09 19:07:22 crc kubenswrapper[4821]: E0309 19:07:22.867246 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"efac5be8c60745ca8aee14935470b899d572ba591f57228a7430f78dc6e32641\": container with ID starting with efac5be8c60745ca8aee14935470b899d572ba591f57228a7430f78dc6e32641 not found: ID does not exist" containerID="efac5be8c60745ca8aee14935470b899d572ba591f57228a7430f78dc6e32641" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.867282 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efac5be8c60745ca8aee14935470b899d572ba591f57228a7430f78dc6e32641"} err="failed to get container status \"efac5be8c60745ca8aee14935470b899d572ba591f57228a7430f78dc6e32641\": rpc error: code = NotFound desc = could not find container \"efac5be8c60745ca8aee14935470b899d572ba591f57228a7430f78dc6e32641\": container with ID starting with efac5be8c60745ca8aee14935470b899d572ba591f57228a7430f78dc6e32641 not found: ID does not exist" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.867311 4821 scope.go:117] "RemoveContainer" containerID="21d8a295b5778128d70554f1fb1a0900b8e2eba57f83edff9c2d2a945ac6554c" Mar 09 19:07:22 crc kubenswrapper[4821]: E0309 19:07:22.868839 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21d8a295b5778128d70554f1fb1a0900b8e2eba57f83edff9c2d2a945ac6554c\": container with ID starting with 21d8a295b5778128d70554f1fb1a0900b8e2eba57f83edff9c2d2a945ac6554c not found: ID does not exist" containerID="21d8a295b5778128d70554f1fb1a0900b8e2eba57f83edff9c2d2a945ac6554c" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.868883 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21d8a295b5778128d70554f1fb1a0900b8e2eba57f83edff9c2d2a945ac6554c"} err="failed to get container status \"21d8a295b5778128d70554f1fb1a0900b8e2eba57f83edff9c2d2a945ac6554c\": rpc error: code = NotFound desc = could not find container \"21d8a295b5778128d70554f1fb1a0900b8e2eba57f83edff9c2d2a945ac6554c\": container with ID 
starting with 21d8a295b5778128d70554f1fb1a0900b8e2eba57f83edff9c2d2a945ac6554c not found: ID does not exist" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.868908 4821 scope.go:117] "RemoveContainer" containerID="161b40ec25597a447c41a31988157f953f636093ae52e0d5dbfa3c3bc5fbbcae" Mar 09 19:07:22 crc kubenswrapper[4821]: E0309 19:07:22.870295 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"161b40ec25597a447c41a31988157f953f636093ae52e0d5dbfa3c3bc5fbbcae\": container with ID starting with 161b40ec25597a447c41a31988157f953f636093ae52e0d5dbfa3c3bc5fbbcae not found: ID does not exist" containerID="161b40ec25597a447c41a31988157f953f636093ae52e0d5dbfa3c3bc5fbbcae" Mar 09 19:07:22 crc kubenswrapper[4821]: I0309 19:07:22.870377 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"161b40ec25597a447c41a31988157f953f636093ae52e0d5dbfa3c3bc5fbbcae"} err="failed to get container status \"161b40ec25597a447c41a31988157f953f636093ae52e0d5dbfa3c3bc5fbbcae\": rpc error: code = NotFound desc = could not find container \"161b40ec25597a447c41a31988157f953f636093ae52e0d5dbfa3c3bc5fbbcae\": container with ID starting with 161b40ec25597a447c41a31988157f953f636093ae52e0d5dbfa3c3bc5fbbcae not found: ID does not exist" Mar 09 19:07:23 crc kubenswrapper[4821]: I0309 19:07:23.561005 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d88f413-e5b1-4102-837f-fa3f5ca953f0" path="/var/lib/kubelet/pods/6d88f413-e5b1-4102-837f-fa3f5ca953f0/volumes" Mar 09 19:07:23 crc kubenswrapper[4821]: I0309 19:07:23.750280 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-6hlvv" event={"ID":"ac1bca1d-c800-411b-aa6f-f71c343914ea","Type":"ContainerStarted","Data":"36b96214bc8c4009f78f2d497eb2dc2c85e0e456b77a86e7700d8e1f03871af1"} Mar 09 19:07:23 crc kubenswrapper[4821]: I0309 19:07:23.750347 4821 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-6hlvv" event={"ID":"ac1bca1d-c800-411b-aa6f-f71c343914ea","Type":"ContainerStarted","Data":"eeacaa01ada76ecaa90de67c6176fae28e4e266a1866a54131b2f95295016095"} Mar 09 19:07:23 crc kubenswrapper[4821]: I0309 19:07:23.766894 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-6hlvv" podStartSLOduration=1.766875865 podStartE2EDuration="1.766875865s" podCreationTimestamp="2026-03-09 19:07:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:07:23.765777366 +0000 UTC m=+2580.927153222" watchObservedRunningTime="2026-03-09 19:07:23.766875865 +0000 UTC m=+2580.928251721" Mar 09 19:07:25 crc kubenswrapper[4821]: I0309 19:07:25.769875 4821 generic.go:334] "Generic (PLEG): container finished" podID="ac1bca1d-c800-411b-aa6f-f71c343914ea" containerID="36b96214bc8c4009f78f2d497eb2dc2c85e0e456b77a86e7700d8e1f03871af1" exitCode=0 Mar 09 19:07:25 crc kubenswrapper[4821]: I0309 19:07:25.770068 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-6hlvv" event={"ID":"ac1bca1d-c800-411b-aa6f-f71c343914ea","Type":"ContainerDied","Data":"36b96214bc8c4009f78f2d497eb2dc2c85e0e456b77a86e7700d8e1f03871af1"} Mar 09 19:07:27 crc kubenswrapper[4821]: I0309 19:07:27.131275 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-6hlvv" Mar 09 19:07:27 crc kubenswrapper[4821]: I0309 19:07:27.284187 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ac1bca1d-c800-411b-aa6f-f71c343914ea-db-sync-config-data\") pod \"ac1bca1d-c800-411b-aa6f-f71c343914ea\" (UID: \"ac1bca1d-c800-411b-aa6f-f71c343914ea\") " Mar 09 19:07:27 crc kubenswrapper[4821]: I0309 19:07:27.284250 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlt7s\" (UniqueName: \"kubernetes.io/projected/ac1bca1d-c800-411b-aa6f-f71c343914ea-kube-api-access-xlt7s\") pod \"ac1bca1d-c800-411b-aa6f-f71c343914ea\" (UID: \"ac1bca1d-c800-411b-aa6f-f71c343914ea\") " Mar 09 19:07:27 crc kubenswrapper[4821]: I0309 19:07:27.284440 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac1bca1d-c800-411b-aa6f-f71c343914ea-config-data\") pod \"ac1bca1d-c800-411b-aa6f-f71c343914ea\" (UID: \"ac1bca1d-c800-411b-aa6f-f71c343914ea\") " Mar 09 19:07:27 crc kubenswrapper[4821]: I0309 19:07:27.285418 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac1bca1d-c800-411b-aa6f-f71c343914ea-combined-ca-bundle\") pod \"ac1bca1d-c800-411b-aa6f-f71c343914ea\" (UID: \"ac1bca1d-c800-411b-aa6f-f71c343914ea\") " Mar 09 19:07:27 crc kubenswrapper[4821]: I0309 19:07:27.296124 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac1bca1d-c800-411b-aa6f-f71c343914ea-kube-api-access-xlt7s" (OuterVolumeSpecName: "kube-api-access-xlt7s") pod "ac1bca1d-c800-411b-aa6f-f71c343914ea" (UID: "ac1bca1d-c800-411b-aa6f-f71c343914ea"). InnerVolumeSpecName "kube-api-access-xlt7s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:07:27 crc kubenswrapper[4821]: I0309 19:07:27.296411 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac1bca1d-c800-411b-aa6f-f71c343914ea-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ac1bca1d-c800-411b-aa6f-f71c343914ea" (UID: "ac1bca1d-c800-411b-aa6f-f71c343914ea"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:07:27 crc kubenswrapper[4821]: I0309 19:07:27.335904 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac1bca1d-c800-411b-aa6f-f71c343914ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac1bca1d-c800-411b-aa6f-f71c343914ea" (UID: "ac1bca1d-c800-411b-aa6f-f71c343914ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:07:27 crc kubenswrapper[4821]: I0309 19:07:27.342994 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac1bca1d-c800-411b-aa6f-f71c343914ea-config-data" (OuterVolumeSpecName: "config-data") pod "ac1bca1d-c800-411b-aa6f-f71c343914ea" (UID: "ac1bca1d-c800-411b-aa6f-f71c343914ea"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:07:27 crc kubenswrapper[4821]: I0309 19:07:27.387196 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac1bca1d-c800-411b-aa6f-f71c343914ea-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:27 crc kubenswrapper[4821]: I0309 19:07:27.387237 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac1bca1d-c800-411b-aa6f-f71c343914ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:27 crc kubenswrapper[4821]: I0309 19:07:27.387251 4821 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ac1bca1d-c800-411b-aa6f-f71c343914ea-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:27 crc kubenswrapper[4821]: I0309 19:07:27.387263 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlt7s\" (UniqueName: \"kubernetes.io/projected/ac1bca1d-c800-411b-aa6f-f71c343914ea-kube-api-access-xlt7s\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:27 crc kubenswrapper[4821]: I0309 19:07:27.552198 4821 scope.go:117] "RemoveContainer" containerID="2e97656a0c5c8ccdf249fb34ee3c3f4cd8501de8fced67c6294f9be90f6444d8" Mar 09 19:07:27 crc kubenswrapper[4821]: E0309 19:07:27.552702 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 19:07:27 crc kubenswrapper[4821]: I0309 19:07:27.793713 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-6hlvv" 
event={"ID":"ac1bca1d-c800-411b-aa6f-f71c343914ea","Type":"ContainerDied","Data":"eeacaa01ada76ecaa90de67c6176fae28e4e266a1866a54131b2f95295016095"} Mar 09 19:07:27 crc kubenswrapper[4821]: I0309 19:07:27.793772 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eeacaa01ada76ecaa90de67c6176fae28e4e266a1866a54131b2f95295016095" Mar 09 19:07:27 crc kubenswrapper[4821]: I0309 19:07:27.793776 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-6hlvv" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.155330 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:07:28 crc kubenswrapper[4821]: E0309 19:07:28.155706 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d88f413-e5b1-4102-837f-fa3f5ca953f0" containerName="extract-content" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.155721 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d88f413-e5b1-4102-837f-fa3f5ca953f0" containerName="extract-content" Mar 09 19:07:28 crc kubenswrapper[4821]: E0309 19:07:28.155746 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d88f413-e5b1-4102-837f-fa3f5ca953f0" containerName="registry-server" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.155753 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d88f413-e5b1-4102-837f-fa3f5ca953f0" containerName="registry-server" Mar 09 19:07:28 crc kubenswrapper[4821]: E0309 19:07:28.155766 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac1bca1d-c800-411b-aa6f-f71c343914ea" containerName="watcher-kuttl-db-sync" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.155775 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac1bca1d-c800-411b-aa6f-f71c343914ea" containerName="watcher-kuttl-db-sync" Mar 09 19:07:28 crc kubenswrapper[4821]: E0309 
19:07:28.155799 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d88f413-e5b1-4102-837f-fa3f5ca953f0" containerName="extract-utilities" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.155808 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d88f413-e5b1-4102-837f-fa3f5ca953f0" containerName="extract-utilities" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.155983 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d88f413-e5b1-4102-837f-fa3f5ca953f0" containerName="registry-server" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.156003 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac1bca1d-c800-411b-aa6f-f71c343914ea" containerName="watcher-kuttl-db-sync" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.156671 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.160091 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.160400 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-dtbkv" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.173208 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.200588 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a553e4f7-fcde-41f9-9a67-c319c2848109-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"a553e4f7-fcde-41f9-9a67-c319c2848109\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 
19:07:28.200635 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a553e4f7-fcde-41f9-9a67-c319c2848109-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"a553e4f7-fcde-41f9-9a67-c319c2848109\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.200699 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a553e4f7-fcde-41f9-9a67-c319c2848109-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"a553e4f7-fcde-41f9-9a67-c319c2848109\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.200753 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a553e4f7-fcde-41f9-9a67-c319c2848109-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"a553e4f7-fcde-41f9-9a67-c319c2848109\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.200779 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a553e4f7-fcde-41f9-9a67-c319c2848109-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"a553e4f7-fcde-41f9-9a67-c319c2848109\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.200806 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq6vz\" (UniqueName: \"kubernetes.io/projected/a553e4f7-fcde-41f9-9a67-c319c2848109-kube-api-access-wq6vz\") pod \"watcher-kuttl-decision-engine-0\" (UID: 
\"a553e4f7-fcde-41f9-9a67-c319c2848109\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.234238 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.235478 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.238475 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.303197 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a553e4f7-fcde-41f9-9a67-c319c2848109-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"a553e4f7-fcde-41f9-9a67-c319c2848109\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.303472 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b64e7401-cb9d-41b0-bdd2-59b43c383583-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"b64e7401-cb9d-41b0-bdd2-59b43c383583\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.303560 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftfl9\" (UniqueName: \"kubernetes.io/projected/b64e7401-cb9d-41b0-bdd2-59b43c383583-kube-api-access-ftfl9\") pod \"watcher-kuttl-api-0\" (UID: \"b64e7401-cb9d-41b0-bdd2-59b43c383583\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.303669 4821 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a553e4f7-fcde-41f9-9a67-c319c2848109-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"a553e4f7-fcde-41f9-9a67-c319c2848109\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.303735 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a553e4f7-fcde-41f9-9a67-c319c2848109-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"a553e4f7-fcde-41f9-9a67-c319c2848109\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.303808 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq6vz\" (UniqueName: \"kubernetes.io/projected/a553e4f7-fcde-41f9-9a67-c319c2848109-kube-api-access-wq6vz\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"a553e4f7-fcde-41f9-9a67-c319c2848109\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.303891 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b64e7401-cb9d-41b0-bdd2-59b43c383583-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"b64e7401-cb9d-41b0-bdd2-59b43c383583\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.303964 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b64e7401-cb9d-41b0-bdd2-59b43c383583-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"b64e7401-cb9d-41b0-bdd2-59b43c383583\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 
19:07:28.304039 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b64e7401-cb9d-41b0-bdd2-59b43c383583-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"b64e7401-cb9d-41b0-bdd2-59b43c383583\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.304105 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b64e7401-cb9d-41b0-bdd2-59b43c383583-logs\") pod \"watcher-kuttl-api-0\" (UID: \"b64e7401-cb9d-41b0-bdd2-59b43c383583\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.304181 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a553e4f7-fcde-41f9-9a67-c319c2848109-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"a553e4f7-fcde-41f9-9a67-c319c2848109\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.304249 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a553e4f7-fcde-41f9-9a67-c319c2848109-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"a553e4f7-fcde-41f9-9a67-c319c2848109\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.305161 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.306300 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a553e4f7-fcde-41f9-9a67-c319c2848109-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: 
\"a553e4f7-fcde-41f9-9a67-c319c2848109\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.314965 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a553e4f7-fcde-41f9-9a67-c319c2848109-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"a553e4f7-fcde-41f9-9a67-c319c2848109\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.317404 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.324124 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a553e4f7-fcde-41f9-9a67-c319c2848109-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"a553e4f7-fcde-41f9-9a67-c319c2848109\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.324974 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a553e4f7-fcde-41f9-9a67-c319c2848109-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"a553e4f7-fcde-41f9-9a67-c319c2848109\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.340843 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a553e4f7-fcde-41f9-9a67-c319c2848109-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"a553e4f7-fcde-41f9-9a67-c319c2848109\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.341814 4821 reflector.go:368] Caches 
populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.358705 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq6vz\" (UniqueName: \"kubernetes.io/projected/a553e4f7-fcde-41f9-9a67-c319c2848109-kube-api-access-wq6vz\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"a553e4f7-fcde-41f9-9a67-c319c2848109\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.358747 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.377465 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.410168 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmkrx\" (UniqueName: \"kubernetes.io/projected/1bf3043c-0996-4743-9d7c-059b18df0896-kube-api-access-vmkrx\") pod \"watcher-kuttl-applier-0\" (UID: \"1bf3043c-0996-4743-9d7c-059b18df0896\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.410228 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b64e7401-cb9d-41b0-bdd2-59b43c383583-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"b64e7401-cb9d-41b0-bdd2-59b43c383583\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.410265 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftfl9\" (UniqueName: \"kubernetes.io/projected/b64e7401-cb9d-41b0-bdd2-59b43c383583-kube-api-access-ftfl9\") pod \"watcher-kuttl-api-0\" (UID: 
\"b64e7401-cb9d-41b0-bdd2-59b43c383583\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.410307 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf3043c-0996-4743-9d7c-059b18df0896-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"1bf3043c-0996-4743-9d7c-059b18df0896\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.410343 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b64e7401-cb9d-41b0-bdd2-59b43c383583-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"b64e7401-cb9d-41b0-bdd2-59b43c383583\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.410365 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b64e7401-cb9d-41b0-bdd2-59b43c383583-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"b64e7401-cb9d-41b0-bdd2-59b43c383583\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.410381 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf3043c-0996-4743-9d7c-059b18df0896-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"1bf3043c-0996-4743-9d7c-059b18df0896\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.410412 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b64e7401-cb9d-41b0-bdd2-59b43c383583-config-data\") pod \"watcher-kuttl-api-0\" (UID: 
\"b64e7401-cb9d-41b0-bdd2-59b43c383583\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.410430 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bf3043c-0996-4743-9d7c-059b18df0896-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"1bf3043c-0996-4743-9d7c-059b18df0896\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.410447 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b64e7401-cb9d-41b0-bdd2-59b43c383583-logs\") pod \"watcher-kuttl-api-0\" (UID: \"b64e7401-cb9d-41b0-bdd2-59b43c383583\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.410484 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1bf3043c-0996-4743-9d7c-059b18df0896-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"1bf3043c-0996-4743-9d7c-059b18df0896\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.412111 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b64e7401-cb9d-41b0-bdd2-59b43c383583-logs\") pod \"watcher-kuttl-api-0\" (UID: \"b64e7401-cb9d-41b0-bdd2-59b43c383583\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.413480 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b64e7401-cb9d-41b0-bdd2-59b43c383583-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"b64e7401-cb9d-41b0-bdd2-59b43c383583\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" 
Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.416944 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b64e7401-cb9d-41b0-bdd2-59b43c383583-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"b64e7401-cb9d-41b0-bdd2-59b43c383583\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.418084 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b64e7401-cb9d-41b0-bdd2-59b43c383583-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"b64e7401-cb9d-41b0-bdd2-59b43c383583\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.430596 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftfl9\" (UniqueName: \"kubernetes.io/projected/b64e7401-cb9d-41b0-bdd2-59b43c383583-kube-api-access-ftfl9\") pod \"watcher-kuttl-api-0\" (UID: \"b64e7401-cb9d-41b0-bdd2-59b43c383583\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.435018 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b64e7401-cb9d-41b0-bdd2-59b43c383583-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"b64e7401-cb9d-41b0-bdd2-59b43c383583\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.473000 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.512308 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf3043c-0996-4743-9d7c-059b18df0896-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"1bf3043c-0996-4743-9d7c-059b18df0896\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.512404 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bf3043c-0996-4743-9d7c-059b18df0896-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"1bf3043c-0996-4743-9d7c-059b18df0896\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.512491 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1bf3043c-0996-4743-9d7c-059b18df0896-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"1bf3043c-0996-4743-9d7c-059b18df0896\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.512553 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmkrx\" (UniqueName: \"kubernetes.io/projected/1bf3043c-0996-4743-9d7c-059b18df0896-kube-api-access-vmkrx\") pod \"watcher-kuttl-applier-0\" (UID: \"1bf3043c-0996-4743-9d7c-059b18df0896\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.512658 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf3043c-0996-4743-9d7c-059b18df0896-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"1bf3043c-0996-4743-9d7c-059b18df0896\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.512843 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bf3043c-0996-4743-9d7c-059b18df0896-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"1bf3043c-0996-4743-9d7c-059b18df0896\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.516108 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf3043c-0996-4743-9d7c-059b18df0896-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"1bf3043c-0996-4743-9d7c-059b18df0896\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.516164 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf3043c-0996-4743-9d7c-059b18df0896-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"1bf3043c-0996-4743-9d7c-059b18df0896\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.516554 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1bf3043c-0996-4743-9d7c-059b18df0896-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"1bf3043c-0996-4743-9d7c-059b18df0896\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.527998 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmkrx\" (UniqueName: \"kubernetes.io/projected/1bf3043c-0996-4743-9d7c-059b18df0896-kube-api-access-vmkrx\") pod \"watcher-kuttl-applier-0\" (UID: \"1bf3043c-0996-4743-9d7c-059b18df0896\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.548636 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.788223 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.848180 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 19:07:28 crc kubenswrapper[4821]: W0309 19:07:28.852389 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb64e7401_cb9d_41b0_bdd2_59b43c383583.slice/crio-59e1ec0a4461b34c61fcc205139129ae79ae905bfeb7ceaa738b0aab37bba175 WatchSource:0}: Error finding container 59e1ec0a4461b34c61fcc205139129ae79ae905bfeb7ceaa738b0aab37bba175: Status 404 returned error can't find the container with id 59e1ec0a4461b34c61fcc205139129ae79ae905bfeb7ceaa738b0aab37bba175
Mar 09 19:07:28 crc kubenswrapper[4821]: I0309 19:07:28.950883 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Mar 09 19:07:28 crc kubenswrapper[4821]: W0309 19:07:28.985640 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda553e4f7_fcde_41f9_9a67_c319c2848109.slice/crio-d0a067434bba825d633043e89603d0a2c5337307050e96e37f34e5bd62c3f9ab WatchSource:0}: Error finding container d0a067434bba825d633043e89603d0a2c5337307050e96e37f34e5bd62c3f9ab: Status 404 returned error can't find the container with id d0a067434bba825d633043e89603d0a2c5337307050e96e37f34e5bd62c3f9ab
Mar 09 19:07:29 crc kubenswrapper[4821]: W0309 19:07:29.269984 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bf3043c_0996_4743_9d7c_059b18df0896.slice/crio-931953a041b49e2cff6c0324dbca22d4ec18a8980d6ee2b3ff16a9a589c1b205 WatchSource:0}: Error finding container 931953a041b49e2cff6c0324dbca22d4ec18a8980d6ee2b3ff16a9a589c1b205: Status 404 returned error can't find the container with id 931953a041b49e2cff6c0324dbca22d4ec18a8980d6ee2b3ff16a9a589c1b205
Mar 09 19:07:29 crc kubenswrapper[4821]: I0309 19:07:29.272254 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Mar 09 19:07:29 crc kubenswrapper[4821]: I0309 19:07:29.812981 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"a553e4f7-fcde-41f9-9a67-c319c2848109","Type":"ContainerStarted","Data":"6687a5c7eb2e7e8da42346f9bf45706841c55da2e805e957015ecc00ffade998"}
Mar 09 19:07:29 crc kubenswrapper[4821]: I0309 19:07:29.813363 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"a553e4f7-fcde-41f9-9a67-c319c2848109","Type":"ContainerStarted","Data":"d0a067434bba825d633043e89603d0a2c5337307050e96e37f34e5bd62c3f9ab"}
Mar 09 19:07:29 crc kubenswrapper[4821]: I0309 19:07:29.815105 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"1bf3043c-0996-4743-9d7c-059b18df0896","Type":"ContainerStarted","Data":"6d768dc1227dbe38216059ab13417e797a313f263c97db1d18fb6a2e79455f23"}
Mar 09 19:07:29 crc kubenswrapper[4821]: I0309 19:07:29.815141 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"1bf3043c-0996-4743-9d7c-059b18df0896","Type":"ContainerStarted","Data":"931953a041b49e2cff6c0324dbca22d4ec18a8980d6ee2b3ff16a9a589c1b205"}
Mar 09 19:07:29 crc kubenswrapper[4821]: I0309 19:07:29.817702 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"b64e7401-cb9d-41b0-bdd2-59b43c383583","Type":"ContainerStarted","Data":"196a17baca99aecdfc51292276e233d789379c5e57f8fa343f555ad91a51164e"}
Mar 09 19:07:29 crc kubenswrapper[4821]: I0309 19:07:29.817739 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"b64e7401-cb9d-41b0-bdd2-59b43c383583","Type":"ContainerStarted","Data":"5e12805ad9e8d22c553a868ab73584badea31e5e9ed98bc2df0dcd6d9b962297"}
Mar 09 19:07:29 crc kubenswrapper[4821]: I0309 19:07:29.817752 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"b64e7401-cb9d-41b0-bdd2-59b43c383583","Type":"ContainerStarted","Data":"59e1ec0a4461b34c61fcc205139129ae79ae905bfeb7ceaa738b0aab37bba175"}
Mar 09 19:07:29 crc kubenswrapper[4821]: I0309 19:07:29.817946 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:07:29 crc kubenswrapper[4821]: I0309 19:07:29.835993 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=1.8359687820000001 podStartE2EDuration="1.835968782s" podCreationTimestamp="2026-03-09 19:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:07:29.832269581 +0000 UTC m=+2586.993645477" watchObservedRunningTime="2026-03-09 19:07:29.835968782 +0000 UTC m=+2586.997344658"
Mar 09 19:07:29 crc kubenswrapper[4821]: I0309 19:07:29.883271 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=1.883246614 podStartE2EDuration="1.883246614s" podCreationTimestamp="2026-03-09 19:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:07:29.878996778 +0000 UTC m=+2587.040372634" watchObservedRunningTime="2026-03-09 19:07:29.883246614 +0000 UTC m=+2587.044622490"
Mar 09 19:07:29 crc kubenswrapper[4821]: I0309 19:07:29.886542 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=1.886529873 podStartE2EDuration="1.886529873s" podCreationTimestamp="2026-03-09 19:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:07:29.859667344 +0000 UTC m=+2587.021043200" watchObservedRunningTime="2026-03-09 19:07:29.886529873 +0000 UTC m=+2587.047905739"
Mar 09 19:07:30 crc kubenswrapper[4821]: I0309 19:07:30.507971 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log"
Mar 09 19:07:31 crc kubenswrapper[4821]: I0309 19:07:31.673475 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log"
Mar 09 19:07:32 crc kubenswrapper[4821]: I0309 19:07:32.108331 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:07:32 crc kubenswrapper[4821]: I0309 19:07:32.884538 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log"
Mar 09 19:07:33 crc kubenswrapper[4821]: I0309 19:07:33.549139 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:07:33 crc kubenswrapper[4821]: I0309 19:07:33.789245 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:07:34 crc kubenswrapper[4821]: I0309 19:07:34.117630 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log"
Mar 09 19:07:35 crc kubenswrapper[4821]: I0309 19:07:35.341859 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log"
Mar 09 19:07:36 crc kubenswrapper[4821]: I0309 19:07:36.578535 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log"
Mar 09 19:07:37 crc kubenswrapper[4821]: I0309 19:07:37.817984 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log"
Mar 09 19:07:38 crc kubenswrapper[4821]: I0309 19:07:38.473948 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:07:38 crc kubenswrapper[4821]: I0309 19:07:38.514047 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:07:38 crc kubenswrapper[4821]: I0309 19:07:38.549777 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:07:38 crc kubenswrapper[4821]: I0309 19:07:38.611772 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:07:38 crc kubenswrapper[4821]: I0309 19:07:38.789057 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:07:38 crc kubenswrapper[4821]: I0309 19:07:38.813804 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:07:38 crc kubenswrapper[4821]: I0309 19:07:38.913036 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:07:38 crc kubenswrapper[4821]: I0309 19:07:38.921630 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:07:38 crc kubenswrapper[4821]: I0309 19:07:38.934695 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:07:38 crc kubenswrapper[4821]: I0309 19:07:38.943330 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:07:39 crc kubenswrapper[4821]: I0309 19:07:39.052358 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log"
Mar 09 19:07:39 crc kubenswrapper[4821]: I0309 19:07:39.555145 4821 scope.go:117] "RemoveContainer" containerID="2e97656a0c5c8ccdf249fb34ee3c3f4cd8501de8fced67c6294f9be90f6444d8"
Mar 09 19:07:39 crc kubenswrapper[4821]: E0309 19:07:39.555469 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add"
Mar 09 19:07:40 crc kubenswrapper[4821]: I0309 19:07:40.243775 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log"
Mar 09 19:07:40 crc kubenswrapper[4821]: I0309 19:07:40.510037 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log"
Mar 09 19:07:40 crc kubenswrapper[4821]: I0309 19:07:40.863215 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-db-create-vl4gk"]
Mar 09 19:07:40 crc kubenswrapper[4821]: I0309 19:07:40.864507 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-create-vl4gk"
Mar 09 19:07:40 crc kubenswrapper[4821]: I0309 19:07:40.870809 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-db-create-vl4gk"]
Mar 09 19:07:40 crc kubenswrapper[4821]: I0309 19:07:40.954897 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-1259-account-create-update-wb9tr"]
Mar 09 19:07:40 crc kubenswrapper[4821]: I0309 19:07:40.956072 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-1259-account-create-update-wb9tr"
Mar 09 19:07:40 crc kubenswrapper[4821]: I0309 19:07:40.958193 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-db-secret"
Mar 09 19:07:40 crc kubenswrapper[4821]: I0309 19:07:40.969833 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-1259-account-create-update-wb9tr"]
Mar 09 19:07:41 crc kubenswrapper[4821]: I0309 19:07:41.012447 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77bdbf36-96d6-447f-bec3-fa2cf37efc1f-operator-scripts\") pod \"cinder-db-create-vl4gk\" (UID: \"77bdbf36-96d6-447f-bec3-fa2cf37efc1f\") " pod="watcher-kuttl-default/cinder-db-create-vl4gk"
Mar 09 19:07:41 crc kubenswrapper[4821]: I0309 19:07:41.012834 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s67gl\" (UniqueName: \"kubernetes.io/projected/77bdbf36-96d6-447f-bec3-fa2cf37efc1f-kube-api-access-s67gl\") pod \"cinder-db-create-vl4gk\" (UID: \"77bdbf36-96d6-447f-bec3-fa2cf37efc1f\") " pod="watcher-kuttl-default/cinder-db-create-vl4gk"
Mar 09 19:07:41 crc kubenswrapper[4821]: I0309 19:07:41.115072 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a3a05e0-575d-45b2-9f8d-9ee5136aee47-operator-scripts\") pod \"cinder-1259-account-create-update-wb9tr\" (UID: \"9a3a05e0-575d-45b2-9f8d-9ee5136aee47\") " pod="watcher-kuttl-default/cinder-1259-account-create-update-wb9tr"
Mar 09 19:07:41 crc kubenswrapper[4821]: I0309 19:07:41.115928 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77bdbf36-96d6-447f-bec3-fa2cf37efc1f-operator-scripts\") pod \"cinder-db-create-vl4gk\" (UID: \"77bdbf36-96d6-447f-bec3-fa2cf37efc1f\") " pod="watcher-kuttl-default/cinder-db-create-vl4gk"
Mar 09 19:07:41 crc kubenswrapper[4821]: I0309 19:07:41.116045 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88znk\" (UniqueName: \"kubernetes.io/projected/9a3a05e0-575d-45b2-9f8d-9ee5136aee47-kube-api-access-88znk\") pod \"cinder-1259-account-create-update-wb9tr\" (UID: \"9a3a05e0-575d-45b2-9f8d-9ee5136aee47\") " pod="watcher-kuttl-default/cinder-1259-account-create-update-wb9tr"
Mar 09 19:07:41 crc kubenswrapper[4821]: I0309 19:07:41.116208 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s67gl\" (UniqueName: \"kubernetes.io/projected/77bdbf36-96d6-447f-bec3-fa2cf37efc1f-kube-api-access-s67gl\") pod \"cinder-db-create-vl4gk\" (UID: \"77bdbf36-96d6-447f-bec3-fa2cf37efc1f\") " pod="watcher-kuttl-default/cinder-db-create-vl4gk"
Mar 09 19:07:41 crc kubenswrapper[4821]: I0309 19:07:41.116812 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77bdbf36-96d6-447f-bec3-fa2cf37efc1f-operator-scripts\") pod \"cinder-db-create-vl4gk\" (UID: \"77bdbf36-96d6-447f-bec3-fa2cf37efc1f\") " pod="watcher-kuttl-default/cinder-db-create-vl4gk"
Mar 09 19:07:41 crc kubenswrapper[4821]: I0309 19:07:41.139034 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s67gl\" (UniqueName: \"kubernetes.io/projected/77bdbf36-96d6-447f-bec3-fa2cf37efc1f-kube-api-access-s67gl\") pod \"cinder-db-create-vl4gk\" (UID: \"77bdbf36-96d6-447f-bec3-fa2cf37efc1f\") " pod="watcher-kuttl-default/cinder-db-create-vl4gk"
Mar 09 19:07:41 crc kubenswrapper[4821]: I0309 19:07:41.191927 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-create-vl4gk"
Mar 09 19:07:41 crc kubenswrapper[4821]: I0309 19:07:41.218167 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88znk\" (UniqueName: \"kubernetes.io/projected/9a3a05e0-575d-45b2-9f8d-9ee5136aee47-kube-api-access-88znk\") pod \"cinder-1259-account-create-update-wb9tr\" (UID: \"9a3a05e0-575d-45b2-9f8d-9ee5136aee47\") " pod="watcher-kuttl-default/cinder-1259-account-create-update-wb9tr"
Mar 09 19:07:41 crc kubenswrapper[4821]: I0309 19:07:41.218291 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a3a05e0-575d-45b2-9f8d-9ee5136aee47-operator-scripts\") pod \"cinder-1259-account-create-update-wb9tr\" (UID: \"9a3a05e0-575d-45b2-9f8d-9ee5136aee47\") " pod="watcher-kuttl-default/cinder-1259-account-create-update-wb9tr"
Mar 09 19:07:41 crc kubenswrapper[4821]: I0309 19:07:41.219014 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a3a05e0-575d-45b2-9f8d-9ee5136aee47-operator-scripts\") pod \"cinder-1259-account-create-update-wb9tr\" (UID: \"9a3a05e0-575d-45b2-9f8d-9ee5136aee47\") " pod="watcher-kuttl-default/cinder-1259-account-create-update-wb9tr"
Mar 09 19:07:41 crc kubenswrapper[4821]: I0309 19:07:41.237357 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88znk\" (UniqueName: \"kubernetes.io/projected/9a3a05e0-575d-45b2-9f8d-9ee5136aee47-kube-api-access-88znk\") pod \"cinder-1259-account-create-update-wb9tr\" (UID: \"9a3a05e0-575d-45b2-9f8d-9ee5136aee47\") " pod="watcher-kuttl-default/cinder-1259-account-create-update-wb9tr"
Mar 09 19:07:41 crc kubenswrapper[4821]: I0309 19:07:41.250767 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:07:41 crc kubenswrapper[4821]: I0309 19:07:41.251046 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="62075a69-e200-43a4-89db-a4842953538d" containerName="ceilometer-central-agent" containerID="cri-o://ad240f5ed213d46e31676dda2c0a1445189c5fa7321dbe115428d7e672568000" gracePeriod=30
Mar 09 19:07:41 crc kubenswrapper[4821]: I0309 19:07:41.251183 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="62075a69-e200-43a4-89db-a4842953538d" containerName="proxy-httpd" containerID="cri-o://bf80002be6943f4f05f1052c387cf53477c7f8df6c13eef9ddcbde24a39cebcb" gracePeriod=30
Mar 09 19:07:41 crc kubenswrapper[4821]: I0309 19:07:41.251218 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="62075a69-e200-43a4-89db-a4842953538d" containerName="sg-core" containerID="cri-o://cd72c32ad2c5e7478fe077f9a0b00d9312bead2acc4233b92a02606ed69d6d69" gracePeriod=30
Mar 09 19:07:41 crc kubenswrapper[4821]: I0309 19:07:41.251248 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="62075a69-e200-43a4-89db-a4842953538d" containerName="ceilometer-notification-agent" containerID="cri-o://25b3d1ebb730fc197b3632fdf1567f9beb55e7d616b4eb869cc1e30454245356" gracePeriod=30
Mar 09 19:07:41 crc kubenswrapper[4821]: I0309 19:07:41.269772 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-1259-account-create-update-wb9tr"
Mar 09 19:07:41 crc kubenswrapper[4821]: I0309 19:07:41.270729 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="62075a69-e200-43a4-89db-a4842953538d" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.240:3000/\": EOF"
Mar 09 19:07:41 crc kubenswrapper[4821]: I0309 19:07:41.704762 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log"
Mar 09 19:07:41 crc kubenswrapper[4821]: I0309 19:07:41.944386 4821 generic.go:334] "Generic (PLEG): container finished" podID="62075a69-e200-43a4-89db-a4842953538d" containerID="bf80002be6943f4f05f1052c387cf53477c7f8df6c13eef9ddcbde24a39cebcb" exitCode=0
Mar 09 19:07:41 crc kubenswrapper[4821]: I0309 19:07:41.944411 4821 generic.go:334] "Generic (PLEG): container finished" podID="62075a69-e200-43a4-89db-a4842953538d" containerID="cd72c32ad2c5e7478fe077f9a0b00d9312bead2acc4233b92a02606ed69d6d69" exitCode=2
Mar 09 19:07:41 crc kubenswrapper[4821]: I0309 19:07:41.944431 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"62075a69-e200-43a4-89db-a4842953538d","Type":"ContainerDied","Data":"bf80002be6943f4f05f1052c387cf53477c7f8df6c13eef9ddcbde24a39cebcb"}
Mar 09 19:07:41 crc kubenswrapper[4821]: I0309 19:07:41.944455 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"62075a69-e200-43a4-89db-a4842953538d","Type":"ContainerDied","Data":"cd72c32ad2c5e7478fe077f9a0b00d9312bead2acc4233b92a02606ed69d6d69"}
Mar 09 19:07:41 crc kubenswrapper[4821]: W0309 19:07:41.950562 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77bdbf36_96d6_447f_bec3_fa2cf37efc1f.slice/crio-a6dfc85d989e6299cd4b390ac7b88c2f3ed9023e1ccdb0862c7b51d7bafd8345 WatchSource:0}: Error finding container a6dfc85d989e6299cd4b390ac7b88c2f3ed9023e1ccdb0862c7b51d7bafd8345: Status 404 returned error can't find the container with id a6dfc85d989e6299cd4b390ac7b88c2f3ed9023e1ccdb0862c7b51d7bafd8345
Mar 09 19:07:41 crc kubenswrapper[4821]: I0309 19:07:41.951286 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-db-create-vl4gk"]
Mar 09 19:07:42 crc kubenswrapper[4821]: I0309 19:07:42.053843 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-1259-account-create-update-wb9tr"]
Mar 09 19:07:42 crc kubenswrapper[4821]: W0309 19:07:42.060626 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a3a05e0_575d_45b2_9f8d_9ee5136aee47.slice/crio-0e4ad98a651682294cb94a596be5694c779a06685274e29ec650cbb4c5a3d56e WatchSource:0}: Error finding container 0e4ad98a651682294cb94a596be5694c779a06685274e29ec650cbb4c5a3d56e: Status 404 returned error can't find the container with id 0e4ad98a651682294cb94a596be5694c779a06685274e29ec650cbb4c5a3d56e
Mar 09 19:07:42 crc kubenswrapper[4821]: I0309 19:07:42.936623 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log"
Mar 09 19:07:42 crc kubenswrapper[4821]: I0309 19:07:42.955265 4821 generic.go:334] "Generic (PLEG): container finished" podID="9a3a05e0-575d-45b2-9f8d-9ee5136aee47" containerID="0b30bfbff353206030a133a07eba1c4c2a0e3f7a1e3d1760e708e77247e9906b" exitCode=0
Mar 09 19:07:42 crc kubenswrapper[4821]: I0309 19:07:42.955344 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-1259-account-create-update-wb9tr" event={"ID":"9a3a05e0-575d-45b2-9f8d-9ee5136aee47","Type":"ContainerDied","Data":"0b30bfbff353206030a133a07eba1c4c2a0e3f7a1e3d1760e708e77247e9906b"}
Mar 09 19:07:42 crc kubenswrapper[4821]: I0309 19:07:42.955370 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-1259-account-create-update-wb9tr" event={"ID":"9a3a05e0-575d-45b2-9f8d-9ee5136aee47","Type":"ContainerStarted","Data":"0e4ad98a651682294cb94a596be5694c779a06685274e29ec650cbb4c5a3d56e"}
Mar 09 19:07:42 crc kubenswrapper[4821]: I0309 19:07:42.958847 4821 generic.go:334] "Generic (PLEG): container finished" podID="77bdbf36-96d6-447f-bec3-fa2cf37efc1f" containerID="ce0b75c36ae2656c4b6035393b9d51fa35638e5e097c73e14d93b3f3dc81581d" exitCode=0
Mar 09 19:07:42 crc kubenswrapper[4821]: I0309 19:07:42.958951 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-create-vl4gk" event={"ID":"77bdbf36-96d6-447f-bec3-fa2cf37efc1f","Type":"ContainerDied","Data":"ce0b75c36ae2656c4b6035393b9d51fa35638e5e097c73e14d93b3f3dc81581d"}
Mar 09 19:07:42 crc kubenswrapper[4821]: I0309 19:07:42.958979 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-create-vl4gk" event={"ID":"77bdbf36-96d6-447f-bec3-fa2cf37efc1f","Type":"ContainerStarted","Data":"a6dfc85d989e6299cd4b390ac7b88c2f3ed9023e1ccdb0862c7b51d7bafd8345"}
Mar 09 19:07:42 crc kubenswrapper[4821]: I0309 19:07:42.961254 4821 generic.go:334] "Generic (PLEG): container finished" podID="62075a69-e200-43a4-89db-a4842953538d" containerID="25b3d1ebb730fc197b3632fdf1567f9beb55e7d616b4eb869cc1e30454245356" exitCode=0
Mar 09 19:07:42 crc kubenswrapper[4821]: I0309 19:07:42.961281 4821 generic.go:334] "Generic (PLEG): container finished" podID="62075a69-e200-43a4-89db-a4842953538d" containerID="ad240f5ed213d46e31676dda2c0a1445189c5fa7321dbe115428d7e672568000" exitCode=0
Mar 09 19:07:42 crc kubenswrapper[4821]: I0309 19:07:42.961301 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"62075a69-e200-43a4-89db-a4842953538d","Type":"ContainerDied","Data":"25b3d1ebb730fc197b3632fdf1567f9beb55e7d616b4eb869cc1e30454245356"}
Mar 09 19:07:42 crc kubenswrapper[4821]: I0309 19:07:42.961365 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"62075a69-e200-43a4-89db-a4842953538d","Type":"ContainerDied","Data":"ad240f5ed213d46e31676dda2c0a1445189c5fa7321dbe115428d7e672568000"}
Mar 09 19:07:43 crc kubenswrapper[4821]: I0309 19:07:43.025788 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:07:43 crc kubenswrapper[4821]: I0309 19:07:43.161936 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62075a69-e200-43a4-89db-a4842953538d-sg-core-conf-yaml\") pod \"62075a69-e200-43a4-89db-a4842953538d\" (UID: \"62075a69-e200-43a4-89db-a4842953538d\") "
Mar 09 19:07:43 crc kubenswrapper[4821]: I0309 19:07:43.162019 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tcxt\" (UniqueName: \"kubernetes.io/projected/62075a69-e200-43a4-89db-a4842953538d-kube-api-access-7tcxt\") pod \"62075a69-e200-43a4-89db-a4842953538d\" (UID: \"62075a69-e200-43a4-89db-a4842953538d\") "
Mar 09 19:07:43 crc kubenswrapper[4821]: I0309 19:07:43.162061 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62075a69-e200-43a4-89db-a4842953538d-config-data\") pod \"62075a69-e200-43a4-89db-a4842953538d\" (UID: \"62075a69-e200-43a4-89db-a4842953538d\") "
Mar 09 19:07:43 crc kubenswrapper[4821]: I0309 19:07:43.162124 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62075a69-e200-43a4-89db-a4842953538d-run-httpd\") pod \"62075a69-e200-43a4-89db-a4842953538d\" (UID: \"62075a69-e200-43a4-89db-a4842953538d\") "
Mar 09 19:07:43 crc kubenswrapper[4821]: I0309 19:07:43.162164 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62075a69-e200-43a4-89db-a4842953538d-scripts\") pod \"62075a69-e200-43a4-89db-a4842953538d\" (UID: \"62075a69-e200-43a4-89db-a4842953538d\") "
Mar 09 19:07:43 crc kubenswrapper[4821]: I0309 19:07:43.162238 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/62075a69-e200-43a4-89db-a4842953538d-ceilometer-tls-certs\") pod \"62075a69-e200-43a4-89db-a4842953538d\" (UID: \"62075a69-e200-43a4-89db-a4842953538d\") "
Mar 09 19:07:43 crc kubenswrapper[4821]: I0309 19:07:43.162278 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62075a69-e200-43a4-89db-a4842953538d-combined-ca-bundle\") pod \"62075a69-e200-43a4-89db-a4842953538d\" (UID: \"62075a69-e200-43a4-89db-a4842953538d\") "
Mar 09 19:07:43 crc kubenswrapper[4821]: I0309 19:07:43.162386 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62075a69-e200-43a4-89db-a4842953538d-log-httpd\") pod \"62075a69-e200-43a4-89db-a4842953538d\" (UID: \"62075a69-e200-43a4-89db-a4842953538d\") "
Mar 09 19:07:43 crc kubenswrapper[4821]: I0309 19:07:43.162793 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62075a69-e200-43a4-89db-a4842953538d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "62075a69-e200-43a4-89db-a4842953538d" (UID: "62075a69-e200-43a4-89db-a4842953538d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:07:43 crc kubenswrapper[4821]: I0309 19:07:43.163138 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62075a69-e200-43a4-89db-a4842953538d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "62075a69-e200-43a4-89db-a4842953538d" (UID: "62075a69-e200-43a4-89db-a4842953538d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:07:43 crc kubenswrapper[4821]: I0309 19:07:43.174637 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62075a69-e200-43a4-89db-a4842953538d-scripts" (OuterVolumeSpecName: "scripts") pod "62075a69-e200-43a4-89db-a4842953538d" (UID: "62075a69-e200-43a4-89db-a4842953538d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:07:43 crc kubenswrapper[4821]: I0309 19:07:43.185845 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62075a69-e200-43a4-89db-a4842953538d-kube-api-access-7tcxt" (OuterVolumeSpecName: "kube-api-access-7tcxt") pod "62075a69-e200-43a4-89db-a4842953538d" (UID: "62075a69-e200-43a4-89db-a4842953538d"). InnerVolumeSpecName "kube-api-access-7tcxt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:07:43 crc kubenswrapper[4821]: I0309 19:07:43.190175 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62075a69-e200-43a4-89db-a4842953538d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "62075a69-e200-43a4-89db-a4842953538d" (UID: "62075a69-e200-43a4-89db-a4842953538d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:07:43 crc kubenswrapper[4821]: I0309 19:07:43.220958 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62075a69-e200-43a4-89db-a4842953538d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "62075a69-e200-43a4-89db-a4842953538d" (UID: "62075a69-e200-43a4-89db-a4842953538d"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:07:43 crc kubenswrapper[4821]: I0309 19:07:43.234598 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62075a69-e200-43a4-89db-a4842953538d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62075a69-e200-43a4-89db-a4842953538d" (UID: "62075a69-e200-43a4-89db-a4842953538d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:07:43 crc kubenswrapper[4821]: I0309 19:07:43.255090 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62075a69-e200-43a4-89db-a4842953538d-config-data" (OuterVolumeSpecName: "config-data") pod "62075a69-e200-43a4-89db-a4842953538d" (UID: "62075a69-e200-43a4-89db-a4842953538d"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:07:43 crc kubenswrapper[4821]: I0309 19:07:43.264032 4821 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62075a69-e200-43a4-89db-a4842953538d-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:43 crc kubenswrapper[4821]: I0309 19:07:43.264061 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62075a69-e200-43a4-89db-a4842953538d-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:43 crc kubenswrapper[4821]: I0309 19:07:43.264071 4821 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/62075a69-e200-43a4-89db-a4842953538d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:43 crc kubenswrapper[4821]: I0309 19:07:43.264080 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62075a69-e200-43a4-89db-a4842953538d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:43 crc kubenswrapper[4821]: I0309 19:07:43.264088 4821 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/62075a69-e200-43a4-89db-a4842953538d-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:43 crc kubenswrapper[4821]: I0309 19:07:43.264096 4821 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/62075a69-e200-43a4-89db-a4842953538d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:43 crc kubenswrapper[4821]: I0309 19:07:43.264105 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tcxt\" (UniqueName: \"kubernetes.io/projected/62075a69-e200-43a4-89db-a4842953538d-kube-api-access-7tcxt\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:43 crc kubenswrapper[4821]: I0309 19:07:43.264113 4821 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62075a69-e200-43a4-89db-a4842953538d-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:43 crc kubenswrapper[4821]: I0309 19:07:43.971618 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"62075a69-e200-43a4-89db-a4842953538d","Type":"ContainerDied","Data":"b838dc4bbd16a456310f11449cd5cce2af9e91b57efb755ba3eed31fa77656bf"} Mar 09 19:07:43 crc kubenswrapper[4821]: I0309 19:07:43.971665 4821 scope.go:117] "RemoveContainer" containerID="bf80002be6943f4f05f1052c387cf53477c7f8df6c13eef9ddcbde24a39cebcb" Mar 09 19:07:43 crc kubenswrapper[4821]: I0309 19:07:43.971708 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:07:43 crc kubenswrapper[4821]: I0309 19:07:43.998546 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.010632 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.030387 4821 scope.go:117] "RemoveContainer" containerID="cd72c32ad2c5e7478fe077f9a0b00d9312bead2acc4233b92a02606ed69d6d69" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.032435 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:07:44 crc kubenswrapper[4821]: E0309 19:07:44.032778 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62075a69-e200-43a4-89db-a4842953538d" containerName="ceilometer-central-agent" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.032794 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="62075a69-e200-43a4-89db-a4842953538d" containerName="ceilometer-central-agent" Mar 09 19:07:44 crc kubenswrapper[4821]: E0309 19:07:44.032808 
4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62075a69-e200-43a4-89db-a4842953538d" containerName="sg-core" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.032814 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="62075a69-e200-43a4-89db-a4842953538d" containerName="sg-core" Mar 09 19:07:44 crc kubenswrapper[4821]: E0309 19:07:44.032822 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62075a69-e200-43a4-89db-a4842953538d" containerName="ceilometer-notification-agent" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.032829 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="62075a69-e200-43a4-89db-a4842953538d" containerName="ceilometer-notification-agent" Mar 09 19:07:44 crc kubenswrapper[4821]: E0309 19:07:44.032838 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62075a69-e200-43a4-89db-a4842953538d" containerName="proxy-httpd" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.032843 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="62075a69-e200-43a4-89db-a4842953538d" containerName="proxy-httpd" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.033025 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="62075a69-e200-43a4-89db-a4842953538d" containerName="proxy-httpd" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.033039 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="62075a69-e200-43a4-89db-a4842953538d" containerName="ceilometer-central-agent" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.033051 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="62075a69-e200-43a4-89db-a4842953538d" containerName="ceilometer-notification-agent" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.033064 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="62075a69-e200-43a4-89db-a4842953538d" containerName="sg-core" Mar 09 19:07:44 crc kubenswrapper[4821]: 
I0309 19:07:44.034443 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.044932 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.091824 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.092069 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.093504 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.146224 4821 scope.go:117] "RemoveContainer" containerID="25b3d1ebb730fc197b3632fdf1567f9beb55e7d616b4eb869cc1e30454245356" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.177416 4821 scope.go:117] "RemoveContainer" containerID="ad240f5ed213d46e31676dda2c0a1445189c5fa7321dbe115428d7e672568000" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.179586 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.194574 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b15f7a69-20aa-4d92-9873-a6263b7b59b3-run-httpd\") pod \"ceilometer-0\" (UID: \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.194898 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/b15f7a69-20aa-4d92-9873-a6263b7b59b3-config-data\") pod \"ceilometer-0\" (UID: \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.195144 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b15f7a69-20aa-4d92-9873-a6263b7b59b3-scripts\") pod \"ceilometer-0\" (UID: \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.195345 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b15f7a69-20aa-4d92-9873-a6263b7b59b3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.195392 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxp86\" (UniqueName: \"kubernetes.io/projected/b15f7a69-20aa-4d92-9873-a6263b7b59b3-kube-api-access-pxp86\") pod \"ceilometer-0\" (UID: \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.195416 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b15f7a69-20aa-4d92-9873-a6263b7b59b3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.195447 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b15f7a69-20aa-4d92-9873-a6263b7b59b3-log-httpd\") pod \"ceilometer-0\" (UID: \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.195471 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b15f7a69-20aa-4d92-9873-a6263b7b59b3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.296683 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b15f7a69-20aa-4d92-9873-a6263b7b59b3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.297037 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxp86\" (UniqueName: \"kubernetes.io/projected/b15f7a69-20aa-4d92-9873-a6263b7b59b3-kube-api-access-pxp86\") pod \"ceilometer-0\" (UID: \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.297059 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b15f7a69-20aa-4d92-9873-a6263b7b59b3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.297081 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b15f7a69-20aa-4d92-9873-a6263b7b59b3-log-httpd\") pod \"ceilometer-0\" (UID: 
\"b15f7a69-20aa-4d92-9873-a6263b7b59b3\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.297096 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b15f7a69-20aa-4d92-9873-a6263b7b59b3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.297129 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b15f7a69-20aa-4d92-9873-a6263b7b59b3-run-httpd\") pod \"ceilometer-0\" (UID: \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.297175 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b15f7a69-20aa-4d92-9873-a6263b7b59b3-config-data\") pod \"ceilometer-0\" (UID: \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.297208 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b15f7a69-20aa-4d92-9873-a6263b7b59b3-scripts\") pod \"ceilometer-0\" (UID: \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.298577 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b15f7a69-20aa-4d92-9873-a6263b7b59b3-run-httpd\") pod \"ceilometer-0\" (UID: \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.299007 4821 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b15f7a69-20aa-4d92-9873-a6263b7b59b3-log-httpd\") pod \"ceilometer-0\" (UID: \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.308147 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b15f7a69-20aa-4d92-9873-a6263b7b59b3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.309741 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b15f7a69-20aa-4d92-9873-a6263b7b59b3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.311393 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b15f7a69-20aa-4d92-9873-a6263b7b59b3-scripts\") pod \"ceilometer-0\" (UID: \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.314132 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b15f7a69-20aa-4d92-9873-a6263b7b59b3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.315210 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b15f7a69-20aa-4d92-9873-a6263b7b59b3-config-data\") pod \"ceilometer-0\" (UID: \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\") " 
pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.326389 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxp86\" (UniqueName: \"kubernetes.io/projected/b15f7a69-20aa-4d92-9873-a6263b7b59b3-kube-api-access-pxp86\") pod \"ceilometer-0\" (UID: \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.428258 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.492988 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-1259-account-create-update-wb9tr" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.501497 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-create-vl4gk" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.505588 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a3a05e0-575d-45b2-9f8d-9ee5136aee47-operator-scripts\") pod \"9a3a05e0-575d-45b2-9f8d-9ee5136aee47\" (UID: \"9a3a05e0-575d-45b2-9f8d-9ee5136aee47\") " Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.505653 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s67gl\" (UniqueName: \"kubernetes.io/projected/77bdbf36-96d6-447f-bec3-fa2cf37efc1f-kube-api-access-s67gl\") pod \"77bdbf36-96d6-447f-bec3-fa2cf37efc1f\" (UID: \"77bdbf36-96d6-447f-bec3-fa2cf37efc1f\") " Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.505866 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77bdbf36-96d6-447f-bec3-fa2cf37efc1f-operator-scripts\") pod 
\"77bdbf36-96d6-447f-bec3-fa2cf37efc1f\" (UID: \"77bdbf36-96d6-447f-bec3-fa2cf37efc1f\") " Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.505986 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88znk\" (UniqueName: \"kubernetes.io/projected/9a3a05e0-575d-45b2-9f8d-9ee5136aee47-kube-api-access-88znk\") pod \"9a3a05e0-575d-45b2-9f8d-9ee5136aee47\" (UID: \"9a3a05e0-575d-45b2-9f8d-9ee5136aee47\") " Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.506406 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77bdbf36-96d6-447f-bec3-fa2cf37efc1f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "77bdbf36-96d6-447f-bec3-fa2cf37efc1f" (UID: "77bdbf36-96d6-447f-bec3-fa2cf37efc1f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.506405 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a3a05e0-575d-45b2-9f8d-9ee5136aee47-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9a3a05e0-575d-45b2-9f8d-9ee5136aee47" (UID: "9a3a05e0-575d-45b2-9f8d-9ee5136aee47"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.506830 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a3a05e0-575d-45b2-9f8d-9ee5136aee47-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.506852 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77bdbf36-96d6-447f-bec3-fa2cf37efc1f-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.513592 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77bdbf36-96d6-447f-bec3-fa2cf37efc1f-kube-api-access-s67gl" (OuterVolumeSpecName: "kube-api-access-s67gl") pod "77bdbf36-96d6-447f-bec3-fa2cf37efc1f" (UID: "77bdbf36-96d6-447f-bec3-fa2cf37efc1f"). InnerVolumeSpecName "kube-api-access-s67gl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.517998 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a3a05e0-575d-45b2-9f8d-9ee5136aee47-kube-api-access-88znk" (OuterVolumeSpecName: "kube-api-access-88znk") pod "9a3a05e0-575d-45b2-9f8d-9ee5136aee47" (UID: "9a3a05e0-575d-45b2-9f8d-9ee5136aee47"). InnerVolumeSpecName "kube-api-access-88znk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.608173 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s67gl\" (UniqueName: \"kubernetes.io/projected/77bdbf36-96d6-447f-bec3-fa2cf37efc1f-kube-api-access-s67gl\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.608724 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88znk\" (UniqueName: \"kubernetes.io/projected/9a3a05e0-575d-45b2-9f8d-9ee5136aee47-kube-api-access-88znk\") on node \"crc\" DevicePath \"\"" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.908275 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.981544 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-create-vl4gk" event={"ID":"77bdbf36-96d6-447f-bec3-fa2cf37efc1f","Type":"ContainerDied","Data":"a6dfc85d989e6299cd4b390ac7b88c2f3ed9023e1ccdb0862c7b51d7bafd8345"} Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.981581 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6dfc85d989e6299cd4b390ac7b88c2f3ed9023e1ccdb0862c7b51d7bafd8345" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.982843 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-db-create-vl4gk" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.983727 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b15f7a69-20aa-4d92-9873-a6263b7b59b3","Type":"ContainerStarted","Data":"d9f0fe2c7219621f6984fa4f5b11b1052348c8c25a2781ac48538ae326241786"} Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.984943 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-1259-account-create-update-wb9tr" event={"ID":"9a3a05e0-575d-45b2-9f8d-9ee5136aee47","Type":"ContainerDied","Data":"0e4ad98a651682294cb94a596be5694c779a06685274e29ec650cbb4c5a3d56e"} Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.984965 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e4ad98a651682294cb94a596be5694c779a06685274e29ec650cbb4c5a3d56e" Mar 09 19:07:44 crc kubenswrapper[4821]: I0309 19:07:44.984993 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-1259-account-create-update-wb9tr" Mar 09 19:07:45 crc kubenswrapper[4821]: I0309 19:07:45.440007 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log" Mar 09 19:07:45 crc kubenswrapper[4821]: I0309 19:07:45.561676 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62075a69-e200-43a4-89db-a4842953538d" path="/var/lib/kubelet/pods/62075a69-e200-43a4-89db-a4842953538d/volumes" Mar 09 19:07:45 crc kubenswrapper[4821]: I0309 19:07:45.996630 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b15f7a69-20aa-4d92-9873-a6263b7b59b3","Type":"ContainerStarted","Data":"a9c2426d8fdc46e6c8cf64d158a333e2060fca2bab1f4ee82454bfa0de8778c2"} Mar 09 19:07:46 crc kubenswrapper[4821]: I0309 19:07:46.194383 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-db-sync-79dk2"] Mar 09 19:07:46 crc kubenswrapper[4821]: E0309 19:07:46.194732 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a3a05e0-575d-45b2-9f8d-9ee5136aee47" containerName="mariadb-account-create-update" Mar 09 19:07:46 crc kubenswrapper[4821]: I0309 19:07:46.194749 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a3a05e0-575d-45b2-9f8d-9ee5136aee47" containerName="mariadb-account-create-update" Mar 09 19:07:46 crc kubenswrapper[4821]: E0309 19:07:46.194779 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77bdbf36-96d6-447f-bec3-fa2cf37efc1f" containerName="mariadb-database-create" Mar 09 19:07:46 crc kubenswrapper[4821]: I0309 19:07:46.194786 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="77bdbf36-96d6-447f-bec3-fa2cf37efc1f" containerName="mariadb-database-create" Mar 09 19:07:46 crc kubenswrapper[4821]: I0309 19:07:46.194919 4821 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="77bdbf36-96d6-447f-bec3-fa2cf37efc1f" containerName="mariadb-database-create"
Mar 09 19:07:46 crc kubenswrapper[4821]: I0309 19:07:46.194935 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a3a05e0-575d-45b2-9f8d-9ee5136aee47" containerName="mariadb-account-create-update"
Mar 09 19:07:46 crc kubenswrapper[4821]: I0309 19:07:46.195595 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-sync-79dk2"
Mar 09 19:07:46 crc kubenswrapper[4821]: I0309 19:07:46.200897 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-scripts"
Mar 09 19:07:46 crc kubenswrapper[4821]: I0309 19:07:46.201468 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-config-data"
Mar 09 19:07:46 crc kubenswrapper[4821]: I0309 19:07:46.206751 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-cinder-dockercfg-p8962"
Mar 09 19:07:46 crc kubenswrapper[4821]: I0309 19:07:46.223370 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-db-sync-79dk2"]
Mar 09 19:07:46 crc kubenswrapper[4821]: I0309 19:07:46.245283 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74096ec2-d80e-40cd-b06f-f71e4f8836b5-etc-machine-id\") pod \"cinder-db-sync-79dk2\" (UID: \"74096ec2-d80e-40cd-b06f-f71e4f8836b5\") " pod="watcher-kuttl-default/cinder-db-sync-79dk2"
Mar 09 19:07:46 crc kubenswrapper[4821]: I0309 19:07:46.245370 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74096ec2-d80e-40cd-b06f-f71e4f8836b5-scripts\") pod \"cinder-db-sync-79dk2\" (UID: \"74096ec2-d80e-40cd-b06f-f71e4f8836b5\") " pod="watcher-kuttl-default/cinder-db-sync-79dk2"
Mar 09 19:07:46 crc kubenswrapper[4821]: I0309 19:07:46.245419 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74096ec2-d80e-40cd-b06f-f71e4f8836b5-combined-ca-bundle\") pod \"cinder-db-sync-79dk2\" (UID: \"74096ec2-d80e-40cd-b06f-f71e4f8836b5\") " pod="watcher-kuttl-default/cinder-db-sync-79dk2"
Mar 09 19:07:46 crc kubenswrapper[4821]: I0309 19:07:46.245440 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgnpc\" (UniqueName: \"kubernetes.io/projected/74096ec2-d80e-40cd-b06f-f71e4f8836b5-kube-api-access-jgnpc\") pod \"cinder-db-sync-79dk2\" (UID: \"74096ec2-d80e-40cd-b06f-f71e4f8836b5\") " pod="watcher-kuttl-default/cinder-db-sync-79dk2"
Mar 09 19:07:46 crc kubenswrapper[4821]: I0309 19:07:46.245469 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74096ec2-d80e-40cd-b06f-f71e4f8836b5-config-data\") pod \"cinder-db-sync-79dk2\" (UID: \"74096ec2-d80e-40cd-b06f-f71e4f8836b5\") " pod="watcher-kuttl-default/cinder-db-sync-79dk2"
Mar 09 19:07:46 crc kubenswrapper[4821]: I0309 19:07:46.245492 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74096ec2-d80e-40cd-b06f-f71e4f8836b5-db-sync-config-data\") pod \"cinder-db-sync-79dk2\" (UID: \"74096ec2-d80e-40cd-b06f-f71e4f8836b5\") " pod="watcher-kuttl-default/cinder-db-sync-79dk2"
Mar 09 19:07:46 crc kubenswrapper[4821]: I0309 19:07:46.346959 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74096ec2-d80e-40cd-b06f-f71e4f8836b5-db-sync-config-data\") pod \"cinder-db-sync-79dk2\" (UID: \"74096ec2-d80e-40cd-b06f-f71e4f8836b5\") " pod="watcher-kuttl-default/cinder-db-sync-79dk2"
Mar 09 19:07:46 crc kubenswrapper[4821]: I0309 19:07:46.347061 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74096ec2-d80e-40cd-b06f-f71e4f8836b5-etc-machine-id\") pod \"cinder-db-sync-79dk2\" (UID: \"74096ec2-d80e-40cd-b06f-f71e4f8836b5\") " pod="watcher-kuttl-default/cinder-db-sync-79dk2"
Mar 09 19:07:46 crc kubenswrapper[4821]: I0309 19:07:46.347126 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74096ec2-d80e-40cd-b06f-f71e4f8836b5-scripts\") pod \"cinder-db-sync-79dk2\" (UID: \"74096ec2-d80e-40cd-b06f-f71e4f8836b5\") " pod="watcher-kuttl-default/cinder-db-sync-79dk2"
Mar 09 19:07:46 crc kubenswrapper[4821]: I0309 19:07:46.347173 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74096ec2-d80e-40cd-b06f-f71e4f8836b5-etc-machine-id\") pod \"cinder-db-sync-79dk2\" (UID: \"74096ec2-d80e-40cd-b06f-f71e4f8836b5\") " pod="watcher-kuttl-default/cinder-db-sync-79dk2"
Mar 09 19:07:46 crc kubenswrapper[4821]: I0309 19:07:46.347189 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74096ec2-d80e-40cd-b06f-f71e4f8836b5-combined-ca-bundle\") pod \"cinder-db-sync-79dk2\" (UID: \"74096ec2-d80e-40cd-b06f-f71e4f8836b5\") " pod="watcher-kuttl-default/cinder-db-sync-79dk2"
Mar 09 19:07:46 crc kubenswrapper[4821]: I0309 19:07:46.347233 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgnpc\" (UniqueName: \"kubernetes.io/projected/74096ec2-d80e-40cd-b06f-f71e4f8836b5-kube-api-access-jgnpc\") pod \"cinder-db-sync-79dk2\" (UID: \"74096ec2-d80e-40cd-b06f-f71e4f8836b5\") " pod="watcher-kuttl-default/cinder-db-sync-79dk2"
Mar 09 19:07:46 crc kubenswrapper[4821]: I0309 19:07:46.347275 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74096ec2-d80e-40cd-b06f-f71e4f8836b5-config-data\") pod \"cinder-db-sync-79dk2\" (UID: \"74096ec2-d80e-40cd-b06f-f71e4f8836b5\") " pod="watcher-kuttl-default/cinder-db-sync-79dk2"
Mar 09 19:07:46 crc kubenswrapper[4821]: I0309 19:07:46.352857 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74096ec2-d80e-40cd-b06f-f71e4f8836b5-db-sync-config-data\") pod \"cinder-db-sync-79dk2\" (UID: \"74096ec2-d80e-40cd-b06f-f71e4f8836b5\") " pod="watcher-kuttl-default/cinder-db-sync-79dk2"
Mar 09 19:07:46 crc kubenswrapper[4821]: I0309 19:07:46.353581 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74096ec2-d80e-40cd-b06f-f71e4f8836b5-config-data\") pod \"cinder-db-sync-79dk2\" (UID: \"74096ec2-d80e-40cd-b06f-f71e4f8836b5\") " pod="watcher-kuttl-default/cinder-db-sync-79dk2"
Mar 09 19:07:46 crc kubenswrapper[4821]: I0309 19:07:46.353882 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74096ec2-d80e-40cd-b06f-f71e4f8836b5-scripts\") pod \"cinder-db-sync-79dk2\" (UID: \"74096ec2-d80e-40cd-b06f-f71e4f8836b5\") " pod="watcher-kuttl-default/cinder-db-sync-79dk2"
Mar 09 19:07:46 crc kubenswrapper[4821]: I0309 19:07:46.354067 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74096ec2-d80e-40cd-b06f-f71e4f8836b5-combined-ca-bundle\") pod \"cinder-db-sync-79dk2\" (UID: \"74096ec2-d80e-40cd-b06f-f71e4f8836b5\") " pod="watcher-kuttl-default/cinder-db-sync-79dk2"
Mar 09 19:07:46 crc kubenswrapper[4821]: I0309 19:07:46.362137 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgnpc\" (UniqueName: \"kubernetes.io/projected/74096ec2-d80e-40cd-b06f-f71e4f8836b5-kube-api-access-jgnpc\") pod \"cinder-db-sync-79dk2\" (UID: \"74096ec2-d80e-40cd-b06f-f71e4f8836b5\") " pod="watcher-kuttl-default/cinder-db-sync-79dk2"
Mar 09 19:07:46 crc kubenswrapper[4821]: I0309 19:07:46.575049 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-sync-79dk2"
Mar 09 19:07:46 crc kubenswrapper[4821]: I0309 19:07:46.631644 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log"
Mar 09 19:07:47 crc kubenswrapper[4821]: I0309 19:07:47.007845 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b15f7a69-20aa-4d92-9873-a6263b7b59b3","Type":"ContainerStarted","Data":"beb943dfcc398e2bc390ca5e9937fc05c8801cba0fb78703eb77f1245603cda6"}
Mar 09 19:07:47 crc kubenswrapper[4821]: I0309 19:07:47.008221 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b15f7a69-20aa-4d92-9873-a6263b7b59b3","Type":"ContainerStarted","Data":"a3f818405d4a480aafba7fa21ef4ec4d247131a52f61ec9d6f490ce22d2224c2"}
Mar 09 19:07:47 crc kubenswrapper[4821]: I0309 19:07:47.091936 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-db-sync-79dk2"]
Mar 09 19:07:47 crc kubenswrapper[4821]: W0309 19:07:47.100542 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74096ec2_d80e_40cd_b06f_f71e4f8836b5.slice/crio-639d5ccf61c170228fd2c5fa48544f21c34c44232dd96c99ed1fd5f741bf3145 WatchSource:0}: Error finding container 639d5ccf61c170228fd2c5fa48544f21c34c44232dd96c99ed1fd5f741bf3145: Status 404 returned error can't find the container with id 639d5ccf61c170228fd2c5fa48544f21c34c44232dd96c99ed1fd5f741bf3145
Mar 09 19:07:47 crc kubenswrapper[4821]: I0309 19:07:47.805700 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log"
Mar 09 19:07:48 crc kubenswrapper[4821]: I0309 19:07:48.022237 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-sync-79dk2" event={"ID":"74096ec2-d80e-40cd-b06f-f71e4f8836b5","Type":"ContainerStarted","Data":"639d5ccf61c170228fd2c5fa48544f21c34c44232dd96c99ed1fd5f741bf3145"}
Mar 09 19:07:48 crc kubenswrapper[4821]: I0309 19:07:48.848576 4821 scope.go:117] "RemoveContainer" containerID="b82153cb286c7704a8d11c3ac47938c7b821756d4e36949d20e9b1bc9862c504"
Mar 09 19:07:48 crc kubenswrapper[4821]: I0309 19:07:48.892760 4821 scope.go:117] "RemoveContainer" containerID="2835e603a77c00580d0373cb1e5d2a441cc4a52673a5d27fd6869cbd9bf7be70"
Mar 09 19:07:48 crc kubenswrapper[4821]: I0309 19:07:48.914541 4821 scope.go:117] "RemoveContainer" containerID="ac6eb78880d48e7b7279e7cf48b50a405c8dd3d505267ccda45204365c9f3d51"
Mar 09 19:07:48 crc kubenswrapper[4821]: I0309 19:07:48.950175 4821 scope.go:117] "RemoveContainer" containerID="2c007f1b37e1a4b2a647f73c675579904e0c29c7b18540ff7834e371e3714b55"
Mar 09 19:07:49 crc kubenswrapper[4821]: I0309 19:07:49.002209 4821 scope.go:117] "RemoveContainer" containerID="702cf48c91f5529fc8ace20857452dfb5eacdd69ceeee8afb1819df3c14c953c"
Mar 09 19:07:49 crc kubenswrapper[4821]: I0309 19:07:49.050427 4821 scope.go:117] "RemoveContainer" containerID="e796073e64a847b9b0ec4f1adcd4dbbb441425210cc00dcda5da9b95520dab01"
Mar 09 19:07:49 crc kubenswrapper[4821]: I0309 19:07:49.057552 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log"
Mar 09 19:07:49 crc kubenswrapper[4821]: I0309 19:07:49.073803 4821 scope.go:117] "RemoveContainer" containerID="4044bb81d16ef5a360424fede998c379bf65781aa58d0ff260bde715169f3ee5"
Mar 09 19:07:49 crc kubenswrapper[4821]: I0309 19:07:49.101380 4821 scope.go:117] "RemoveContainer" containerID="3e70b0c86876adc028a5880426f20f07a247a044079125b2036b3cc0dc880e10"
Mar 09 19:07:50 crc kubenswrapper[4821]: I0309 19:07:50.064151 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b15f7a69-20aa-4d92-9873-a6263b7b59b3","Type":"ContainerStarted","Data":"32cf2660cf0c56275d5b9359075572c35015f6e738a79df2b724161062328cb1"}
Mar 09 19:07:50 crc kubenswrapper[4821]: I0309 19:07:50.089458 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.307191213 podStartE2EDuration="7.08944126s" podCreationTimestamp="2026-03-09 19:07:43 +0000 UTC" firstStartedPulling="2026-03-09 19:07:44.917279465 +0000 UTC m=+2602.078655321" lastFinishedPulling="2026-03-09 19:07:49.699529512 +0000 UTC m=+2606.860905368" observedRunningTime="2026-03-09 19:07:50.086025688 +0000 UTC m=+2607.247401544" watchObservedRunningTime="2026-03-09 19:07:50.08944126 +0000 UTC m=+2607.250817116"
Mar 09 19:07:50 crc kubenswrapper[4821]: I0309 19:07:50.297263 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log"
Mar 09 19:07:50 crc kubenswrapper[4821]: I0309 19:07:50.552720 4821 scope.go:117] "RemoveContainer" containerID="2e97656a0c5c8ccdf249fb34ee3c3f4cd8501de8fced67c6294f9be90f6444d8"
Mar 09 19:07:50 crc kubenswrapper[4821]: E0309 19:07:50.552967 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add"
Mar 09 19:07:51 crc kubenswrapper[4821]: I0309 19:07:51.085524 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:07:51 crc kubenswrapper[4821]: I0309 19:07:51.476564 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log"
Mar 09 19:07:52 crc kubenswrapper[4821]: I0309 19:07:52.701709 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log"
Mar 09 19:07:53 crc kubenswrapper[4821]: I0309 19:07:53.953658 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log"
Mar 09 19:07:55 crc kubenswrapper[4821]: I0309 19:07:55.176007 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log"
Mar 09 19:07:56 crc kubenswrapper[4821]: I0309 19:07:56.380923 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log"
Mar 09 19:07:57 crc kubenswrapper[4821]: I0309 19:07:57.594378 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log"
Mar 09 19:07:58 crc kubenswrapper[4821]: I0309 19:07:58.768959 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log"
Mar 09 19:07:59 crc kubenswrapper[4821]: I0309 19:07:59.958958 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log"
Mar 09 19:08:00 crc kubenswrapper[4821]: I0309 19:08:00.155272 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29551388-xfwcw"]
Mar 09 19:08:00 crc kubenswrapper[4821]: I0309 19:08:00.156695 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551388-xfwcw"
Mar 09 19:08:00 crc kubenswrapper[4821]: I0309 19:08:00.159710 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 09 19:08:00 crc kubenswrapper[4821]: I0309 19:08:00.160368 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-mxq7c"
Mar 09 19:08:00 crc kubenswrapper[4821]: I0309 19:08:00.160560 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 09 19:08:00 crc kubenswrapper[4821]: I0309 19:08:00.182851 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551388-xfwcw"]
Mar 09 19:08:00 crc kubenswrapper[4821]: I0309 19:08:00.302004 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtwr5\" (UniqueName: \"kubernetes.io/projected/9adefedb-bb07-4049-98c1-0e2eb6165f92-kube-api-access-rtwr5\") pod \"auto-csr-approver-29551388-xfwcw\" (UID: \"9adefedb-bb07-4049-98c1-0e2eb6165f92\") " pod="openshift-infra/auto-csr-approver-29551388-xfwcw"
Mar 09 19:08:00 crc kubenswrapper[4821]: I0309 19:08:00.403935 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtwr5\" (UniqueName: \"kubernetes.io/projected/9adefedb-bb07-4049-98c1-0e2eb6165f92-kube-api-access-rtwr5\") pod \"auto-csr-approver-29551388-xfwcw\" (UID: \"9adefedb-bb07-4049-98c1-0e2eb6165f92\") " pod="openshift-infra/auto-csr-approver-29551388-xfwcw"
Mar 09 19:08:00 crc kubenswrapper[4821]: I0309 19:08:00.431833 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtwr5\" (UniqueName: \"kubernetes.io/projected/9adefedb-bb07-4049-98c1-0e2eb6165f92-kube-api-access-rtwr5\") pod \"auto-csr-approver-29551388-xfwcw\" (UID: \"9adefedb-bb07-4049-98c1-0e2eb6165f92\") " pod="openshift-infra/auto-csr-approver-29551388-xfwcw"
Mar 09 19:08:00 crc kubenswrapper[4821]: I0309 19:08:00.493214 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551388-xfwcw"
Mar 09 19:08:01 crc kubenswrapper[4821]: I0309 19:08:01.160310 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log"
Mar 09 19:08:02 crc kubenswrapper[4821]: I0309 19:08:02.387918 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log"
Mar 09 19:08:02 crc kubenswrapper[4821]: I0309 19:08:02.552840 4821 scope.go:117] "RemoveContainer" containerID="2e97656a0c5c8ccdf249fb34ee3c3f4cd8501de8fced67c6294f9be90f6444d8"
Mar 09 19:08:02 crc kubenswrapper[4821]: E0309 19:08:02.553339 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add"
Mar 09 19:08:03 crc kubenswrapper[4821]: I0309 19:08:03.007077 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551388-xfwcw"]
Mar 09 19:08:03 crc kubenswrapper[4821]: I0309 19:08:03.210750 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551388-xfwcw" event={"ID":"9adefedb-bb07-4049-98c1-0e2eb6165f92","Type":"ContainerStarted","Data":"bb2a226f8f4c57f7d28992228117999a380bf9112eeed295aa1c25e20d98b342"}
Mar 09 19:08:03 crc kubenswrapper[4821]: I0309 19:08:03.555468 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log"
Mar 09 19:08:04 crc kubenswrapper[4821]: I0309 19:08:04.239161 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-sync-79dk2" event={"ID":"74096ec2-d80e-40cd-b06f-f71e4f8836b5","Type":"ContainerStarted","Data":"ec44a816894a8b59a9c31982e0022953d74f50ae9c8a1fff04559a3fe0e4a4e0"}
Mar 09 19:08:04 crc kubenswrapper[4821]: I0309 19:08:04.257971 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-db-sync-79dk2" podStartSLOduration=2.875466207 podStartE2EDuration="18.257955003s" podCreationTimestamp="2026-03-09 19:07:46 +0000 UTC" firstStartedPulling="2026-03-09 19:07:47.103093699 +0000 UTC m=+2604.264469555" lastFinishedPulling="2026-03-09 19:08:02.485582475 +0000 UTC m=+2619.646958351" observedRunningTime="2026-03-09 19:08:04.2574788 +0000 UTC m=+2621.418854656" watchObservedRunningTime="2026-03-09 19:08:04.257955003 +0000 UTC m=+2621.419330859"
Mar 09 19:08:04 crc kubenswrapper[4821]: I0309 19:08:04.747553 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log"
Mar 09 19:08:05 crc kubenswrapper[4821]: I0309 19:08:05.249931 4821 generic.go:334] "Generic (PLEG): container finished" podID="9adefedb-bb07-4049-98c1-0e2eb6165f92" containerID="2b89e734430035b445c70ff135c9d41caff1317d2d9fb07bc3217b2e0d65a793" exitCode=0
Mar 09 19:08:05 crc kubenswrapper[4821]: I0309 19:08:05.250021 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551388-xfwcw" event={"ID":"9adefedb-bb07-4049-98c1-0e2eb6165f92","Type":"ContainerDied","Data":"2b89e734430035b445c70ff135c9d41caff1317d2d9fb07bc3217b2e0d65a793"}
Mar 09 19:08:05 crc kubenswrapper[4821]: I0309 19:08:05.960565 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log"
Mar 09 19:08:06 crc kubenswrapper[4821]: I0309 19:08:06.579217 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551388-xfwcw"
Mar 09 19:08:06 crc kubenswrapper[4821]: I0309 19:08:06.663126 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtwr5\" (UniqueName: \"kubernetes.io/projected/9adefedb-bb07-4049-98c1-0e2eb6165f92-kube-api-access-rtwr5\") pod \"9adefedb-bb07-4049-98c1-0e2eb6165f92\" (UID: \"9adefedb-bb07-4049-98c1-0e2eb6165f92\") "
Mar 09 19:08:06 crc kubenswrapper[4821]: I0309 19:08:06.669247 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9adefedb-bb07-4049-98c1-0e2eb6165f92-kube-api-access-rtwr5" (OuterVolumeSpecName: "kube-api-access-rtwr5") pod "9adefedb-bb07-4049-98c1-0e2eb6165f92" (UID: "9adefedb-bb07-4049-98c1-0e2eb6165f92"). InnerVolumeSpecName "kube-api-access-rtwr5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:08:06 crc kubenswrapper[4821]: I0309 19:08:06.765759 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtwr5\" (UniqueName: \"kubernetes.io/projected/9adefedb-bb07-4049-98c1-0e2eb6165f92-kube-api-access-rtwr5\") on node \"crc\" DevicePath \"\""
Mar 09 19:08:07 crc kubenswrapper[4821]: I0309 19:08:07.125236 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log"
Mar 09 19:08:07 crc kubenswrapper[4821]: I0309 19:08:07.266667 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551388-xfwcw" event={"ID":"9adefedb-bb07-4049-98c1-0e2eb6165f92","Type":"ContainerDied","Data":"bb2a226f8f4c57f7d28992228117999a380bf9112eeed295aa1c25e20d98b342"}
Mar 09 19:08:07 crc kubenswrapper[4821]: I0309 19:08:07.266705 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb2a226f8f4c57f7d28992228117999a380bf9112eeed295aa1c25e20d98b342"
Mar 09 19:08:07 crc kubenswrapper[4821]: I0309 19:08:07.266761 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551388-xfwcw"
Mar 09 19:08:07 crc kubenswrapper[4821]: I0309 19:08:07.653615 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29551382-wcqzq"]
Mar 09 19:08:07 crc kubenswrapper[4821]: I0309 19:08:07.661050 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29551382-wcqzq"]
Mar 09 19:08:08 crc kubenswrapper[4821]: I0309 19:08:08.279166 4821 generic.go:334] "Generic (PLEG): container finished" podID="74096ec2-d80e-40cd-b06f-f71e4f8836b5" containerID="ec44a816894a8b59a9c31982e0022953d74f50ae9c8a1fff04559a3fe0e4a4e0" exitCode=0
Mar 09 19:08:08 crc kubenswrapper[4821]: I0309 19:08:08.279277 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-sync-79dk2" event={"ID":"74096ec2-d80e-40cd-b06f-f71e4f8836b5","Type":"ContainerDied","Data":"ec44a816894a8b59a9c31982e0022953d74f50ae9c8a1fff04559a3fe0e4a4e0"}
Mar 09 19:08:08 crc kubenswrapper[4821]: I0309 19:08:08.363076 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log"
Mar 09 19:08:09 crc kubenswrapper[4821]: I0309 19:08:09.566718 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fc4d5b1-3818-4f5c-91b9-afe46d95e537" path="/var/lib/kubelet/pods/7fc4d5b1-3818-4f5c-91b9-afe46d95e537/volumes"
Mar 09 19:08:09 crc kubenswrapper[4821]: I0309 19:08:09.577247 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log"
Mar 09 19:08:09 crc kubenswrapper[4821]: I0309 19:08:09.638053 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-sync-79dk2"
Mar 09 19:08:09 crc kubenswrapper[4821]: I0309 19:08:09.716154 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74096ec2-d80e-40cd-b06f-f71e4f8836b5-combined-ca-bundle\") pod \"74096ec2-d80e-40cd-b06f-f71e4f8836b5\" (UID: \"74096ec2-d80e-40cd-b06f-f71e4f8836b5\") "
Mar 09 19:08:09 crc kubenswrapper[4821]: I0309 19:08:09.716243 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74096ec2-d80e-40cd-b06f-f71e4f8836b5-config-data\") pod \"74096ec2-d80e-40cd-b06f-f71e4f8836b5\" (UID: \"74096ec2-d80e-40cd-b06f-f71e4f8836b5\") "
Mar 09 19:08:09 crc kubenswrapper[4821]: I0309 19:08:09.716273 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74096ec2-d80e-40cd-b06f-f71e4f8836b5-etc-machine-id\") pod \"74096ec2-d80e-40cd-b06f-f71e4f8836b5\" (UID: \"74096ec2-d80e-40cd-b06f-f71e4f8836b5\") "
Mar 09 19:08:09 crc kubenswrapper[4821]: I0309 19:08:09.716349 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74096ec2-d80e-40cd-b06f-f71e4f8836b5-db-sync-config-data\") pod \"74096ec2-d80e-40cd-b06f-f71e4f8836b5\" (UID: \"74096ec2-d80e-40cd-b06f-f71e4f8836b5\") "
Mar 09 19:08:09 crc kubenswrapper[4821]: I0309 19:08:09.716373 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgnpc\" (UniqueName: \"kubernetes.io/projected/74096ec2-d80e-40cd-b06f-f71e4f8836b5-kube-api-access-jgnpc\") pod \"74096ec2-d80e-40cd-b06f-f71e4f8836b5\" (UID: \"74096ec2-d80e-40cd-b06f-f71e4f8836b5\") "
Mar 09 19:08:09 crc kubenswrapper[4821]: I0309 19:08:09.716407 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74096ec2-d80e-40cd-b06f-f71e4f8836b5-scripts\") pod \"74096ec2-d80e-40cd-b06f-f71e4f8836b5\" (UID: \"74096ec2-d80e-40cd-b06f-f71e4f8836b5\") "
Mar 09 19:08:09 crc kubenswrapper[4821]: I0309 19:08:09.716519 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74096ec2-d80e-40cd-b06f-f71e4f8836b5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "74096ec2-d80e-40cd-b06f-f71e4f8836b5" (UID: "74096ec2-d80e-40cd-b06f-f71e4f8836b5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 19:08:09 crc kubenswrapper[4821]: I0309 19:08:09.716812 4821 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74096ec2-d80e-40cd-b06f-f71e4f8836b5-etc-machine-id\") on node \"crc\" DevicePath \"\""
Mar 09 19:08:09 crc kubenswrapper[4821]: I0309 19:08:09.722081 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74096ec2-d80e-40cd-b06f-f71e4f8836b5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "74096ec2-d80e-40cd-b06f-f71e4f8836b5" (UID: "74096ec2-d80e-40cd-b06f-f71e4f8836b5"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:08:09 crc kubenswrapper[4821]: I0309 19:08:09.722616 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74096ec2-d80e-40cd-b06f-f71e4f8836b5-scripts" (OuterVolumeSpecName: "scripts") pod "74096ec2-d80e-40cd-b06f-f71e4f8836b5" (UID: "74096ec2-d80e-40cd-b06f-f71e4f8836b5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:08:09 crc kubenswrapper[4821]: I0309 19:08:09.724070 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74096ec2-d80e-40cd-b06f-f71e4f8836b5-kube-api-access-jgnpc" (OuterVolumeSpecName: "kube-api-access-jgnpc") pod "74096ec2-d80e-40cd-b06f-f71e4f8836b5" (UID: "74096ec2-d80e-40cd-b06f-f71e4f8836b5"). InnerVolumeSpecName "kube-api-access-jgnpc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:08:09 crc kubenswrapper[4821]: I0309 19:08:09.750214 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74096ec2-d80e-40cd-b06f-f71e4f8836b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74096ec2-d80e-40cd-b06f-f71e4f8836b5" (UID: "74096ec2-d80e-40cd-b06f-f71e4f8836b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:08:09 crc kubenswrapper[4821]: I0309 19:08:09.775911 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74096ec2-d80e-40cd-b06f-f71e4f8836b5-config-data" (OuterVolumeSpecName: "config-data") pod "74096ec2-d80e-40cd-b06f-f71e4f8836b5" (UID: "74096ec2-d80e-40cd-b06f-f71e4f8836b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:08:09 crc kubenswrapper[4821]: I0309 19:08:09.817927 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74096ec2-d80e-40cd-b06f-f71e4f8836b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 19:08:09 crc kubenswrapper[4821]: I0309 19:08:09.817967 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74096ec2-d80e-40cd-b06f-f71e4f8836b5-config-data\") on node \"crc\" DevicePath \"\""
Mar 09 19:08:09 crc kubenswrapper[4821]: I0309 19:08:09.817979 4821 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74096ec2-d80e-40cd-b06f-f71e4f8836b5-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Mar 09 19:08:09 crc kubenswrapper[4821]: I0309 19:08:09.817988 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgnpc\" (UniqueName: \"kubernetes.io/projected/74096ec2-d80e-40cd-b06f-f71e4f8836b5-kube-api-access-jgnpc\") on node \"crc\" DevicePath \"\""
Mar 09 19:08:09 crc kubenswrapper[4821]: I0309 19:08:09.817999 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74096ec2-d80e-40cd-b06f-f71e4f8836b5-scripts\") on node \"crc\" DevicePath \"\""
Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.305733 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-sync-79dk2" event={"ID":"74096ec2-d80e-40cd-b06f-f71e4f8836b5","Type":"ContainerDied","Data":"639d5ccf61c170228fd2c5fa48544f21c34c44232dd96c99ed1fd5f741bf3145"}
Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.305800 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="639d5ccf61c170228fd2c5fa48544f21c34c44232dd96c99ed1fd5f741bf3145"
Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.305813 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-sync-79dk2"
Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.634033 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"]
Mar 09 19:08:10 crc kubenswrapper[4821]: E0309 19:08:10.634674 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9adefedb-bb07-4049-98c1-0e2eb6165f92" containerName="oc"
Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.634686 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="9adefedb-bb07-4049-98c1-0e2eb6165f92" containerName="oc"
Mar 09 19:08:10 crc kubenswrapper[4821]: E0309 19:08:10.634706 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74096ec2-d80e-40cd-b06f-f71e4f8836b5" containerName="cinder-db-sync"
Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.634712 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="74096ec2-d80e-40cd-b06f-f71e4f8836b5" containerName="cinder-db-sync"
Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.634860 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="74096ec2-d80e-40cd-b06f-f71e4f8836b5" containerName="cinder-db-sync"
Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.634879 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="9adefedb-bb07-4049-98c1-0e2eb6165f92" containerName="oc"
Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.635698 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0"
Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.641076 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-scheduler-config-data"
Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.642630 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-config-data"
Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.642831 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-cinder-dockercfg-p8962"
Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.643779 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-scripts"
Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.657363 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-backup-0"]
Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.659017 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0"
Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.671052 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"]
Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.680534 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-backup-config-data"
Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.736034 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"]
Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.741240 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f71db1c-e77e-4cb8-a2ed-89045415fd22-config-data-custom\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0"
Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.741308 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0"
Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.741369 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-etc-nvme\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0"
Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.741671 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-scripts\") pod \"cinder-scheduler-0\" (UID: \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\") " pod="watcher-kuttl-default/cinder-scheduler-0"
Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.741825 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\") " pod="watcher-kuttl-default/cinder-scheduler-0"
Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.741875 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-sys\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0"
Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.742613 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\") " pod="watcher-kuttl-default/cinder-scheduler-0"
Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.742739 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f71db1c-e77e-4cb8-a2ed-89045415fd22-config-data\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0"
Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.742784 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName:
\"kubernetes.io/host-path/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.742819 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3f71db1c-e77e-4cb8-a2ed-89045415fd22-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.742844 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-dev\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.742873 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-config-data\") pod \"cinder-scheduler-0\" (UID: \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.742937 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f71db1c-e77e-4cb8-a2ed-89045415fd22-scripts\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.742966 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: 
\"kubernetes.io/secret/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.742990 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-lib-modules\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.743045 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.743061 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-run\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.743108 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.743144 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv8dl\" (UniqueName: 
\"kubernetes.io/projected/3f71db1c-e77e-4cb8-a2ed-89045415fd22-kube-api-access-kv8dl\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.743186 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.743216 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rbm9\" (UniqueName: \"kubernetes.io/projected/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-kube-api-access-6rbm9\") pod \"cinder-scheduler-0\" (UID: \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.743234 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.743250 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f71db1c-e77e-4cb8-a2ed-89045415fd22-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.793206 4821 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.831414 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.836494 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.839512 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-api-config-data" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.844404 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f71db1c-e77e-4cb8-a2ed-89045415fd22-config-data-custom\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.844437 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.844492 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-etc-nvme\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.844525 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-scripts\") pod \"cinder-scheduler-0\" (UID: \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.844546 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.844566 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-sys\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.844583 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.844609 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f71db1c-e77e-4cb8-a2ed-89045415fd22-config-data\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.844633 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.844651 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3f71db1c-e77e-4cb8-a2ed-89045415fd22-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.844670 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-dev\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.844689 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-config-data\") pod \"cinder-scheduler-0\" (UID: \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.844707 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f71db1c-e77e-4cb8-a2ed-89045415fd22-scripts\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.844725 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 
19:08:10.844743 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-lib-modules\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.844784 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.844801 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-run\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.844819 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.844842 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv8dl\" (UniqueName: \"kubernetes.io/projected/3f71db1c-e77e-4cb8-a2ed-89045415fd22-kube-api-access-kv8dl\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.844865 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.844885 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rbm9\" (UniqueName: \"kubernetes.io/projected/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-kube-api-access-6rbm9\") pod \"cinder-scheduler-0\" (UID: \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.844907 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.844923 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f71db1c-e77e-4cb8-a2ed-89045415fd22-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.845173 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-dev\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.845387 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-run\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " 
pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.845542 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.845749 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.846167 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-sys\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.846516 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.847402 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.847434 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-lib-modules\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " 
pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.848489 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.850419 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.850600 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.850795 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-etc-nvme\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.869452 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.869476 4821 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f71db1c-e77e-4cb8-a2ed-89045415fd22-scripts\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.869946 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f71db1c-e77e-4cb8-a2ed-89045415fd22-config-data-custom\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.871123 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-config-data\") pod \"cinder-scheduler-0\" (UID: \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.871500 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-scripts\") pod \"cinder-scheduler-0\" (UID: \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.872121 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f71db1c-e77e-4cb8-a2ed-89045415fd22-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.873371 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.875863 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3f71db1c-e77e-4cb8-a2ed-89045415fd22-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.889762 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f71db1c-e77e-4cb8-a2ed-89045415fd22-config-data\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.890154 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.896968 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv8dl\" (UniqueName: \"kubernetes.io/projected/3f71db1c-e77e-4cb8-a2ed-89045415fd22-kube-api-access-kv8dl\") pod \"cinder-backup-0\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.902078 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rbm9\" (UniqueName: \"kubernetes.io/projected/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-kube-api-access-6rbm9\") pod \"cinder-scheduler-0\" (UID: \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:10 crc 
kubenswrapper[4821]: I0309 19:08:10.946441 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-logs\") pod \"cinder-api-0\" (UID: \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.946496 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-scripts\") pod \"cinder-api-0\" (UID: \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.946524 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.946560 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-config-data-custom\") pod \"cinder-api-0\" (UID: \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.946584 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.946613 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-config-data\") pod \"cinder-api-0\" (UID: \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.946630 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.946650 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxnz7\" (UniqueName: \"kubernetes.io/projected/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-kube-api-access-bxnz7\") pod \"cinder-api-0\" (UID: \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:10 crc kubenswrapper[4821]: I0309 19:08:10.969472 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:11 crc kubenswrapper[4821]: I0309 19:08:11.039210 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:11 crc kubenswrapper[4821]: I0309 19:08:11.047735 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-logs\") pod \"cinder-api-0\" (UID: \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:11 crc kubenswrapper[4821]: I0309 19:08:11.047778 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-scripts\") pod \"cinder-api-0\" (UID: \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:11 crc kubenswrapper[4821]: I0309 19:08:11.047810 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:11 crc kubenswrapper[4821]: I0309 19:08:11.047851 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-config-data-custom\") pod \"cinder-api-0\" (UID: \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:11 crc kubenswrapper[4821]: I0309 19:08:11.047874 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:11 crc kubenswrapper[4821]: I0309 19:08:11.047899 4821 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-config-data\") pod \"cinder-api-0\" (UID: \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:11 crc kubenswrapper[4821]: I0309 19:08:11.047915 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:11 crc kubenswrapper[4821]: I0309 19:08:11.047934 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxnz7\" (UniqueName: \"kubernetes.io/projected/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-kube-api-access-bxnz7\") pod \"cinder-api-0\" (UID: \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:11 crc kubenswrapper[4821]: I0309 19:08:11.048159 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-logs\") pod \"cinder-api-0\" (UID: \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:11 crc kubenswrapper[4821]: I0309 19:08:11.056023 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:11 crc kubenswrapper[4821]: I0309 19:08:11.059385 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-config-data-custom\") pod \"cinder-api-0\" 
(UID: \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:11 crc kubenswrapper[4821]: I0309 19:08:11.066865 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:11 crc kubenswrapper[4821]: I0309 19:08:11.067629 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-config-data\") pod \"cinder-api-0\" (UID: \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:11 crc kubenswrapper[4821]: I0309 19:08:11.070151 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxnz7\" (UniqueName: \"kubernetes.io/projected/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-kube-api-access-bxnz7\") pod \"cinder-api-0\" (UID: \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:11 crc kubenswrapper[4821]: I0309 19:08:11.073934 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:11 crc kubenswrapper[4821]: I0309 19:08:11.082076 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-scripts\") pod \"cinder-api-0\" (UID: \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:11 crc kubenswrapper[4821]: I0309 19:08:11.253896 4821 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:11 crc kubenswrapper[4821]: I0309 19:08:11.450858 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Mar 09 19:08:11 crc kubenswrapper[4821]: I0309 19:08:11.628043 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Mar 09 19:08:11 crc kubenswrapper[4821]: W0309 19:08:11.635509 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f71db1c_e77e_4cb8_a2ed_89045415fd22.slice/crio-105f5ed1dbae190dd2de7cb00bc2159a54db929319a024266e5d215a119dc1fd WatchSource:0}: Error finding container 105f5ed1dbae190dd2de7cb00bc2159a54db929319a024266e5d215a119dc1fd: Status 404 returned error can't find the container with id 105f5ed1dbae190dd2de7cb00bc2159a54db929319a024266e5d215a119dc1fd Mar 09 19:08:11 crc kubenswrapper[4821]: W0309 19:08:11.779688 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c4f640d_1ad1_4e12_a930_0d90f6b5f67e.slice/crio-984e89ec926acf16f70942263b7b7e21b1d5053166c2e54e66e45e8415818e73 WatchSource:0}: Error finding container 984e89ec926acf16f70942263b7b7e21b1d5053166c2e54e66e45e8415818e73: Status 404 returned error can't find the container with id 984e89ec926acf16f70942263b7b7e21b1d5053166c2e54e66e45e8415818e73 Mar 09 19:08:11 crc kubenswrapper[4821]: I0309 19:08:11.781527 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Mar 09 19:08:11 crc kubenswrapper[4821]: I0309 19:08:11.968795 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log" Mar 09 19:08:12 crc kubenswrapper[4821]: I0309 19:08:12.345543 4821 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e","Type":"ContainerStarted","Data":"984e89ec926acf16f70942263b7b7e21b1d5053166c2e54e66e45e8415818e73"} Mar 09 19:08:12 crc kubenswrapper[4821]: I0309 19:08:12.346484 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"3f71db1c-e77e-4cb8-a2ed-89045415fd22","Type":"ContainerStarted","Data":"105f5ed1dbae190dd2de7cb00bc2159a54db929319a024266e5d215a119dc1fd"} Mar 09 19:08:12 crc kubenswrapper[4821]: I0309 19:08:12.378497 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8","Type":"ContainerStarted","Data":"e3af97ec2a08c27495107582e30363347cb852cdbe32b86bc33995182c8dc505"} Mar 09 19:08:13 crc kubenswrapper[4821]: I0309 19:08:13.171680 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log" Mar 09 19:08:13 crc kubenswrapper[4821]: I0309 19:08:13.360210 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Mar 09 19:08:13 crc kubenswrapper[4821]: I0309 19:08:13.391455 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e","Type":"ContainerStarted","Data":"17703e7d0f79bf4eeb8d5e7eaa853c84b268dc4695c945a1135d832706189f33"} Mar 09 19:08:13 crc kubenswrapper[4821]: I0309 19:08:13.401619 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"3f71db1c-e77e-4cb8-a2ed-89045415fd22","Type":"ContainerStarted","Data":"aca73d11491888496926f465b2660b751d170c5d34853a33f2df69ff6c11ea6e"} Mar 09 19:08:13 crc kubenswrapper[4821]: I0309 19:08:13.401732 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"3f71db1c-e77e-4cb8-a2ed-89045415fd22","Type":"ContainerStarted","Data":"3a884ec89a1871af85ad701e09c9f6ee3a34f3fcfef58510bb86747862aed66e"} Mar 09 19:08:13 crc kubenswrapper[4821]: I0309 19:08:13.452482 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-backup-0" podStartSLOduration=2.399180304 podStartE2EDuration="3.452461332s" podCreationTimestamp="2026-03-09 19:08:10 +0000 UTC" firstStartedPulling="2026-03-09 19:08:11.637677284 +0000 UTC m=+2628.799053140" lastFinishedPulling="2026-03-09 19:08:12.690958312 +0000 UTC m=+2629.852334168" observedRunningTime="2026-03-09 19:08:13.446244493 +0000 UTC m=+2630.607620349" watchObservedRunningTime="2026-03-09 19:08:13.452461332 +0000 UTC m=+2630.613837188" Mar 09 19:08:14 crc kubenswrapper[4821]: I0309 19:08:14.386795 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log" Mar 09 19:08:14 crc kubenswrapper[4821]: I0309 19:08:14.423023 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8","Type":"ContainerStarted","Data":"7d74df743d8af02ecf4dab5ca7265b0cc4c88c003501d72c9a3e3832242cba03"} Mar 09 19:08:14 crc kubenswrapper[4821]: I0309 19:08:14.423070 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8","Type":"ContainerStarted","Data":"cbc9a70748dc98d83ca75eb06b990ac78f0f939bd9bbc55b4c3e4fa74d098871"} Mar 09 19:08:14 crc kubenswrapper[4821]: I0309 19:08:14.427551 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" 
event={"ID":"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e","Type":"ContainerStarted","Data":"9368dbfd743756c6b0bf7fc4beed0578c0c0afcc2a91fb5f258ae21dae31133a"} Mar 09 19:08:14 crc kubenswrapper[4821]: I0309 19:08:14.427797 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-api-0" podUID="5c4f640d-1ad1-4e12-a930-0d90f6b5f67e" containerName="cinder-api-log" containerID="cri-o://17703e7d0f79bf4eeb8d5e7eaa853c84b268dc4695c945a1135d832706189f33" gracePeriod=30 Mar 09 19:08:14 crc kubenswrapper[4821]: I0309 19:08:14.427827 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-api-0" podUID="5c4f640d-1ad1-4e12-a930-0d90f6b5f67e" containerName="cinder-api" containerID="cri-o://9368dbfd743756c6b0bf7fc4beed0578c0c0afcc2a91fb5f258ae21dae31133a" gracePeriod=30 Mar 09 19:08:14 crc kubenswrapper[4821]: I0309 19:08:14.455776 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-scheduler-0" podStartSLOduration=3.62545219 podStartE2EDuration="4.455757105s" podCreationTimestamp="2026-03-09 19:08:10 +0000 UTC" firstStartedPulling="2026-03-09 19:08:11.458654892 +0000 UTC m=+2628.620030748" lastFinishedPulling="2026-03-09 19:08:12.288959786 +0000 UTC m=+2629.450335663" observedRunningTime="2026-03-09 19:08:14.447301066 +0000 UTC m=+2631.608676922" watchObservedRunningTime="2026-03-09 19:08:14.455757105 +0000 UTC m=+2631.617132961" Mar 09 19:08:14 crc kubenswrapper[4821]: I0309 19:08:14.476905 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-api-0" podStartSLOduration=4.4768880079999995 podStartE2EDuration="4.476888008s" podCreationTimestamp="2026-03-09 19:08:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:08:14.471468571 +0000 UTC m=+2631.632844447" 
watchObservedRunningTime="2026-03-09 19:08:14.476888008 +0000 UTC m=+2631.638263864" Mar 09 19:08:14 crc kubenswrapper[4821]: I0309 19:08:14.497002 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.070572 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.152279 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-logs\") pod \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\" (UID: \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\") " Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.152560 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-combined-ca-bundle\") pod \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\" (UID: \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\") " Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.152576 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-etc-machine-id\") pod \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\" (UID: \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\") " Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.152651 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-config-data-custom\") pod \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\" (UID: \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\") " Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.152720 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-scripts\") pod \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\" (UID: \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\") " Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.152749 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-config-data\") pod \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\" (UID: \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\") " Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.152780 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxnz7\" (UniqueName: \"kubernetes.io/projected/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-kube-api-access-bxnz7\") pod \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\" (UID: \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\") " Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.152824 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-cert-memcached-mtls\") pod \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\" (UID: \"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e\") " Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.152937 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-logs" (OuterVolumeSpecName: "logs") pod "5c4f640d-1ad1-4e12-a930-0d90f6b5f67e" (UID: "5c4f640d-1ad1-4e12-a930-0d90f6b5f67e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.153160 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-logs\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.156409 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5c4f640d-1ad1-4e12-a930-0d90f6b5f67e" (UID: "5c4f640d-1ad1-4e12-a930-0d90f6b5f67e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.158695 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-scripts" (OuterVolumeSpecName: "scripts") pod "5c4f640d-1ad1-4e12-a930-0d90f6b5f67e" (UID: "5c4f640d-1ad1-4e12-a930-0d90f6b5f67e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.159195 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5c4f640d-1ad1-4e12-a930-0d90f6b5f67e" (UID: "5c4f640d-1ad1-4e12-a930-0d90f6b5f67e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.177621 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-kube-api-access-bxnz7" (OuterVolumeSpecName: "kube-api-access-bxnz7") pod "5c4f640d-1ad1-4e12-a930-0d90f6b5f67e" (UID: "5c4f640d-1ad1-4e12-a930-0d90f6b5f67e"). 
InnerVolumeSpecName "kube-api-access-bxnz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.196465 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c4f640d-1ad1-4e12-a930-0d90f6b5f67e" (UID: "5c4f640d-1ad1-4e12-a930-0d90f6b5f67e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.207487 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-config-data" (OuterVolumeSpecName: "config-data") pod "5c4f640d-1ad1-4e12-a930-0d90f6b5f67e" (UID: "5c4f640d-1ad1-4e12-a930-0d90f6b5f67e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.236505 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "5c4f640d-1ad1-4e12-a930-0d90f6b5f67e" (UID: "5c4f640d-1ad1-4e12-a930-0d90f6b5f67e"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.254154 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.254197 4821 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.254208 4821 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.254218 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.254227 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.254235 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxnz7\" (UniqueName: \"kubernetes.io/projected/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-kube-api-access-bxnz7\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.254244 4821 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.437020 
4821 generic.go:334] "Generic (PLEG): container finished" podID="5c4f640d-1ad1-4e12-a930-0d90f6b5f67e" containerID="9368dbfd743756c6b0bf7fc4beed0578c0c0afcc2a91fb5f258ae21dae31133a" exitCode=0 Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.437050 4821 generic.go:334] "Generic (PLEG): container finished" podID="5c4f640d-1ad1-4e12-a930-0d90f6b5f67e" containerID="17703e7d0f79bf4eeb8d5e7eaa853c84b268dc4695c945a1135d832706189f33" exitCode=143 Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.437933 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.444464 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e","Type":"ContainerDied","Data":"9368dbfd743756c6b0bf7fc4beed0578c0c0afcc2a91fb5f258ae21dae31133a"} Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.444524 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e","Type":"ContainerDied","Data":"17703e7d0f79bf4eeb8d5e7eaa853c84b268dc4695c945a1135d832706189f33"} Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.444535 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"5c4f640d-1ad1-4e12-a930-0d90f6b5f67e","Type":"ContainerDied","Data":"984e89ec926acf16f70942263b7b7e21b1d5053166c2e54e66e45e8415818e73"} Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.444557 4821 scope.go:117] "RemoveContainer" containerID="9368dbfd743756c6b0bf7fc4beed0578c0c0afcc2a91fb5f258ae21dae31133a" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.468428 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.478519 4821 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.481920 4821 scope.go:117] "RemoveContainer" containerID="17703e7d0f79bf4eeb8d5e7eaa853c84b268dc4695c945a1135d832706189f33" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.508892 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Mar 09 19:08:15 crc kubenswrapper[4821]: E0309 19:08:15.509216 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c4f640d-1ad1-4e12-a930-0d90f6b5f67e" containerName="cinder-api-log" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.509231 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c4f640d-1ad1-4e12-a930-0d90f6b5f67e" containerName="cinder-api-log" Mar 09 19:08:15 crc kubenswrapper[4821]: E0309 19:08:15.509248 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c4f640d-1ad1-4e12-a930-0d90f6b5f67e" containerName="cinder-api" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.509254 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c4f640d-1ad1-4e12-a930-0d90f6b5f67e" containerName="cinder-api" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.509425 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c4f640d-1ad1-4e12-a930-0d90f6b5f67e" containerName="cinder-api-log" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.509463 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c4f640d-1ad1-4e12-a930-0d90f6b5f67e" containerName="cinder-api" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.510238 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.511777 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-api-config-data" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.515103 4821 scope.go:117] "RemoveContainer" containerID="9368dbfd743756c6b0bf7fc4beed0578c0c0afcc2a91fb5f258ae21dae31133a" Mar 09 19:08:15 crc kubenswrapper[4821]: E0309 19:08:15.515711 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9368dbfd743756c6b0bf7fc4beed0578c0c0afcc2a91fb5f258ae21dae31133a\": container with ID starting with 9368dbfd743756c6b0bf7fc4beed0578c0c0afcc2a91fb5f258ae21dae31133a not found: ID does not exist" containerID="9368dbfd743756c6b0bf7fc4beed0578c0c0afcc2a91fb5f258ae21dae31133a" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.515748 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9368dbfd743756c6b0bf7fc4beed0578c0c0afcc2a91fb5f258ae21dae31133a"} err="failed to get container status \"9368dbfd743756c6b0bf7fc4beed0578c0c0afcc2a91fb5f258ae21dae31133a\": rpc error: code = NotFound desc = could not find container \"9368dbfd743756c6b0bf7fc4beed0578c0c0afcc2a91fb5f258ae21dae31133a\": container with ID starting with 9368dbfd743756c6b0bf7fc4beed0578c0c0afcc2a91fb5f258ae21dae31133a not found: ID does not exist" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.515772 4821 scope.go:117] "RemoveContainer" containerID="17703e7d0f79bf4eeb8d5e7eaa853c84b268dc4695c945a1135d832706189f33" Mar 09 19:08:15 crc kubenswrapper[4821]: E0309 19:08:15.516081 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17703e7d0f79bf4eeb8d5e7eaa853c84b268dc4695c945a1135d832706189f33\": container with ID starting with 
17703e7d0f79bf4eeb8d5e7eaa853c84b268dc4695c945a1135d832706189f33 not found: ID does not exist" containerID="17703e7d0f79bf4eeb8d5e7eaa853c84b268dc4695c945a1135d832706189f33" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.516142 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17703e7d0f79bf4eeb8d5e7eaa853c84b268dc4695c945a1135d832706189f33"} err="failed to get container status \"17703e7d0f79bf4eeb8d5e7eaa853c84b268dc4695c945a1135d832706189f33\": rpc error: code = NotFound desc = could not find container \"17703e7d0f79bf4eeb8d5e7eaa853c84b268dc4695c945a1135d832706189f33\": container with ID starting with 17703e7d0f79bf4eeb8d5e7eaa853c84b268dc4695c945a1135d832706189f33 not found: ID does not exist" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.516168 4821 scope.go:117] "RemoveContainer" containerID="9368dbfd743756c6b0bf7fc4beed0578c0c0afcc2a91fb5f258ae21dae31133a" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.519786 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9368dbfd743756c6b0bf7fc4beed0578c0c0afcc2a91fb5f258ae21dae31133a"} err="failed to get container status \"9368dbfd743756c6b0bf7fc4beed0578c0c0afcc2a91fb5f258ae21dae31133a\": rpc error: code = NotFound desc = could not find container \"9368dbfd743756c6b0bf7fc4beed0578c0c0afcc2a91fb5f258ae21dae31133a\": container with ID starting with 9368dbfd743756c6b0bf7fc4beed0578c0c0afcc2a91fb5f258ae21dae31133a not found: ID does not exist" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.519828 4821 scope.go:117] "RemoveContainer" containerID="17703e7d0f79bf4eeb8d5e7eaa853c84b268dc4695c945a1135d832706189f33" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.520265 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17703e7d0f79bf4eeb8d5e7eaa853c84b268dc4695c945a1135d832706189f33"} err="failed to get container status 
\"17703e7d0f79bf4eeb8d5e7eaa853c84b268dc4695c945a1135d832706189f33\": rpc error: code = NotFound desc = could not find container \"17703e7d0f79bf4eeb8d5e7eaa853c84b268dc4695c945a1135d832706189f33\": container with ID starting with 17703e7d0f79bf4eeb8d5e7eaa853c84b268dc4695c945a1135d832706189f33 not found: ID does not exist" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.522457 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-cinder-internal-svc" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.526678 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-cinder-public-svc" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.565008 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.566254 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/af77949b-43e7-411f-81cc-455dcfd140fb-etc-machine-id\") pod \"cinder-api-0\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.566295 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.566368 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-scripts\") pod \"cinder-api-0\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " pod="watcher-kuttl-default/cinder-api-0" 
Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.566401 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76g8j\" (UniqueName: \"kubernetes.io/projected/af77949b-43e7-411f-81cc-455dcfd140fb-kube-api-access-76g8j\") pod \"cinder-api-0\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.566445 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.566479 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-config-data\") pod \"cinder-api-0\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.566496 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-config-data-custom\") pod \"cinder-api-0\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.566515 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc 
kubenswrapper[4821]: I0309 19:08:15.566569 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af77949b-43e7-411f-81cc-455dcfd140fb-logs\") pod \"cinder-api-0\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.566593 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-public-tls-certs\") pod \"cinder-api-0\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.579032 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c4f640d-1ad1-4e12-a930-0d90f6b5f67e" path="/var/lib/kubelet/pods/5c4f640d-1ad1-4e12-a930-0d90f6b5f67e/volumes" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.623434 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.668647 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-scripts\") pod \"cinder-api-0\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.668743 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76g8j\" (UniqueName: \"kubernetes.io/projected/af77949b-43e7-411f-81cc-455dcfd140fb-kube-api-access-76g8j\") pod \"cinder-api-0\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc 
kubenswrapper[4821]: I0309 19:08:15.668817 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.668921 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-config-data\") pod \"cinder-api-0\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.668944 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-config-data-custom\") pod \"cinder-api-0\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.668970 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.669033 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af77949b-43e7-411f-81cc-455dcfd140fb-logs\") pod \"cinder-api-0\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.669064 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-public-tls-certs\") pod \"cinder-api-0\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.669107 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/af77949b-43e7-411f-81cc-455dcfd140fb-etc-machine-id\") pod \"cinder-api-0\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.669130 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.669740 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af77949b-43e7-411f-81cc-455dcfd140fb-logs\") pod \"cinder-api-0\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.669776 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/af77949b-43e7-411f-81cc-455dcfd140fb-etc-machine-id\") pod \"cinder-api-0\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.673396 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " 
pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.673679 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.673759 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-scripts\") pod \"cinder-api-0\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.674463 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-config-data\") pod \"cinder-api-0\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.680939 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-config-data-custom\") pod \"cinder-api-0\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.681222 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-public-tls-certs\") pod \"cinder-api-0\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.687424 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76g8j\" 
(UniqueName: \"kubernetes.io/projected/af77949b-43e7-411f-81cc-455dcfd140fb-kube-api-access-76g8j\") pod \"cinder-api-0\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.690121 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.830922 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:15 crc kubenswrapper[4821]: I0309 19:08:15.970101 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:16 crc kubenswrapper[4821]: I0309 19:08:16.042424 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:16 crc kubenswrapper[4821]: I0309 19:08:16.433732 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Mar 09 19:08:16 crc kubenswrapper[4821]: W0309 19:08:16.441681 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf77949b_43e7_411f_81cc_455dcfd140fb.slice/crio-b42088e5c92ec667fd620e0a8113c134699916436d79518dd341d198b95d6c4d WatchSource:0}: Error finding container b42088e5c92ec667fd620e0a8113c134699916436d79518dd341d198b95d6c4d: Status 404 returned error can't find the container with id b42088e5c92ec667fd620e0a8113c134699916436d79518dd341d198b95d6c4d Mar 09 19:08:16 crc kubenswrapper[4821]: I0309 19:08:16.551447 4821 scope.go:117] "RemoveContainer" 
containerID="2e97656a0c5c8ccdf249fb34ee3c3f4cd8501de8fced67c6294f9be90f6444d8" Mar 09 19:08:16 crc kubenswrapper[4821]: E0309 19:08:16.551674 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 19:08:16 crc kubenswrapper[4821]: I0309 19:08:16.826165 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log" Mar 09 19:08:17 crc kubenswrapper[4821]: I0309 19:08:17.462602 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"af77949b-43e7-411f-81cc-455dcfd140fb","Type":"ContainerStarted","Data":"bf321c9005f78e8b84c70deecd1c94e77d1f997e3832cb12e8143ee1f637a0d6"} Mar 09 19:08:17 crc kubenswrapper[4821]: I0309 19:08:17.463044 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"af77949b-43e7-411f-81cc-455dcfd140fb","Type":"ContainerStarted","Data":"b42088e5c92ec667fd620e0a8113c134699916436d79518dd341d198b95d6c4d"} Mar 09 19:08:18 crc kubenswrapper[4821]: I0309 19:08:18.076058 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log" Mar 09 19:08:18 crc kubenswrapper[4821]: I0309 19:08:18.477864 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" 
event={"ID":"af77949b-43e7-411f-81cc-455dcfd140fb","Type":"ContainerStarted","Data":"1ba729b63d04752442c6cc1d51e58a9e545957be3004b12637bd0680a024b545"} Mar 09 19:08:18 crc kubenswrapper[4821]: I0309 19:08:18.478039 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:18 crc kubenswrapper[4821]: I0309 19:08:18.505661 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-api-0" podStartSLOduration=3.505639233 podStartE2EDuration="3.505639233s" podCreationTimestamp="2026-03-09 19:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:08:18.498776487 +0000 UTC m=+2635.660152353" watchObservedRunningTime="2026-03-09 19:08:18.505639233 +0000 UTC m=+2635.667015099" Mar 09 19:08:19 crc kubenswrapper[4821]: I0309 19:08:19.258012 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log" Mar 09 19:08:20 crc kubenswrapper[4821]: I0309 19:08:20.436660 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log" Mar 09 19:08:21 crc kubenswrapper[4821]: I0309 19:08:21.223800 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:21 crc kubenswrapper[4821]: I0309 19:08:21.283147 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Mar 09 19:08:21 crc kubenswrapper[4821]: I0309 19:08:21.291717 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:21 crc kubenswrapper[4821]: I0309 19:08:21.341434 4821 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Mar 09 19:08:21 crc kubenswrapper[4821]: I0309 19:08:21.508296 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-backup-0" podUID="3f71db1c-e77e-4cb8-a2ed-89045415fd22" containerName="cinder-backup" containerID="cri-o://3a884ec89a1871af85ad701e09c9f6ee3a34f3fcfef58510bb86747862aed66e" gracePeriod=30 Mar 09 19:08:21 crc kubenswrapper[4821]: I0309 19:08:21.508955 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-scheduler-0" podUID="52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8" containerName="cinder-scheduler" containerID="cri-o://cbc9a70748dc98d83ca75eb06b990ac78f0f939bd9bbc55b4c3e4fa74d098871" gracePeriod=30 Mar 09 19:08:21 crc kubenswrapper[4821]: I0309 19:08:21.509364 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-backup-0" podUID="3f71db1c-e77e-4cb8-a2ed-89045415fd22" containerName="probe" containerID="cri-o://aca73d11491888496926f465b2660b751d170c5d34853a33f2df69ff6c11ea6e" gracePeriod=30 Mar 09 19:08:21 crc kubenswrapper[4821]: I0309 19:08:21.509469 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-scheduler-0" podUID="52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8" containerName="probe" containerID="cri-o://7d74df743d8af02ecf4dab5ca7265b0cc4c88c003501d72c9a3e3832242cba03" gracePeriod=30 Mar 09 19:08:21 crc kubenswrapper[4821]: I0309 19:08:21.641792 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log" Mar 09 19:08:22 crc kubenswrapper[4821]: I0309 19:08:22.519888 4821 generic.go:334] "Generic (PLEG): container finished" podID="52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8" 
containerID="7d74df743d8af02ecf4dab5ca7265b0cc4c88c003501d72c9a3e3832242cba03" exitCode=0 Mar 09 19:08:22 crc kubenswrapper[4821]: I0309 19:08:22.519958 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8","Type":"ContainerDied","Data":"7d74df743d8af02ecf4dab5ca7265b0cc4c88c003501d72c9a3e3832242cba03"} Mar 09 19:08:22 crc kubenswrapper[4821]: I0309 19:08:22.522374 4821 generic.go:334] "Generic (PLEG): container finished" podID="3f71db1c-e77e-4cb8-a2ed-89045415fd22" containerID="aca73d11491888496926f465b2660b751d170c5d34853a33f2df69ff6c11ea6e" exitCode=0 Mar 09 19:08:22 crc kubenswrapper[4821]: I0309 19:08:22.522405 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"3f71db1c-e77e-4cb8-a2ed-89045415fd22","Type":"ContainerDied","Data":"aca73d11491888496926f465b2660b751d170c5d34853a33f2df69ff6c11ea6e"} Mar 09 19:08:22 crc kubenswrapper[4821]: I0309 19:08:22.533876 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:08:22 crc kubenswrapper[4821]: I0309 19:08:22.534389 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="a553e4f7-fcde-41f9-9a67-c319c2848109" containerName="watcher-decision-engine" containerID="cri-o://6687a5c7eb2e7e8da42346f9bf45706841c55da2e805e957015ecc00ffade998" gracePeriod=30 Mar 09 19:08:22 crc kubenswrapper[4821]: I0309 19:08:22.841516 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log" Mar 09 19:08:23 crc kubenswrapper[4821]: I0309 19:08:23.394363 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:08:23 crc kubenswrapper[4821]: I0309 
19:08:23.394896 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="b15f7a69-20aa-4d92-9873-a6263b7b59b3" containerName="ceilometer-central-agent" containerID="cri-o://a9c2426d8fdc46e6c8cf64d158a333e2060fca2bab1f4ee82454bfa0de8778c2" gracePeriod=30 Mar 09 19:08:23 crc kubenswrapper[4821]: I0309 19:08:23.394942 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="b15f7a69-20aa-4d92-9873-a6263b7b59b3" containerName="sg-core" containerID="cri-o://beb943dfcc398e2bc390ca5e9937fc05c8801cba0fb78703eb77f1245603cda6" gracePeriod=30 Mar 09 19:08:23 crc kubenswrapper[4821]: I0309 19:08:23.394975 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="b15f7a69-20aa-4d92-9873-a6263b7b59b3" containerName="ceilometer-notification-agent" containerID="cri-o://a3f818405d4a480aafba7fa21ef4ec4d247131a52f61ec9d6f490ce22d2224c2" gracePeriod=30 Mar 09 19:08:23 crc kubenswrapper[4821]: I0309 19:08:23.394938 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="b15f7a69-20aa-4d92-9873-a6263b7b59b3" containerName="proxy-httpd" containerID="cri-o://32cf2660cf0c56275d5b9359075572c35015f6e738a79df2b724161062328cb1" gracePeriod=30 Mar 09 19:08:23 crc kubenswrapper[4821]: I0309 19:08:23.542420 4821 generic.go:334] "Generic (PLEG): container finished" podID="b15f7a69-20aa-4d92-9873-a6263b7b59b3" containerID="32cf2660cf0c56275d5b9359075572c35015f6e738a79df2b724161062328cb1" exitCode=0 Mar 09 19:08:23 crc kubenswrapper[4821]: I0309 19:08:23.542452 4821 generic.go:334] "Generic (PLEG): container finished" podID="b15f7a69-20aa-4d92-9873-a6263b7b59b3" containerID="beb943dfcc398e2bc390ca5e9937fc05c8801cba0fb78703eb77f1245603cda6" exitCode=2 Mar 09 19:08:23 crc kubenswrapper[4821]: I0309 19:08:23.542473 4821 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b15f7a69-20aa-4d92-9873-a6263b7b59b3","Type":"ContainerDied","Data":"32cf2660cf0c56275d5b9359075572c35015f6e738a79df2b724161062328cb1"} Mar 09 19:08:23 crc kubenswrapper[4821]: I0309 19:08:23.542517 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b15f7a69-20aa-4d92-9873-a6263b7b59b3","Type":"ContainerDied","Data":"beb943dfcc398e2bc390ca5e9937fc05c8801cba0fb78703eb77f1245603cda6"} Mar 09 19:08:24 crc kubenswrapper[4821]: I0309 19:08:24.010898 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log" Mar 09 19:08:24 crc kubenswrapper[4821]: I0309 19:08:24.558996 4821 generic.go:334] "Generic (PLEG): container finished" podID="b15f7a69-20aa-4d92-9873-a6263b7b59b3" containerID="a9c2426d8fdc46e6c8cf64d158a333e2060fca2bab1f4ee82454bfa0de8778c2" exitCode=0 Mar 09 19:08:24 crc kubenswrapper[4821]: I0309 19:08:24.559057 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b15f7a69-20aa-4d92-9873-a6263b7b59b3","Type":"ContainerDied","Data":"a9c2426d8fdc46e6c8cf64d158a333e2060fca2bab1f4ee82454bfa0de8778c2"} Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.237569 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.471929 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.574370 4821 generic.go:334] "Generic (PLEG): container finished" podID="b15f7a69-20aa-4d92-9873-a6263b7b59b3" containerID="a3f818405d4a480aafba7fa21ef4ec4d247131a52f61ec9d6f490ce22d2224c2" exitCode=0 Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.574421 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b15f7a69-20aa-4d92-9873-a6263b7b59b3","Type":"ContainerDied","Data":"a3f818405d4a480aafba7fa21ef4ec4d247131a52f61ec9d6f490ce22d2224c2"} Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.574450 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b15f7a69-20aa-4d92-9873-a6263b7b59b3","Type":"ContainerDied","Data":"d9f0fe2c7219621f6984fa4f5b11b1052348c8c25a2781ac48538ae326241786"} Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.574467 4821 scope.go:117] "RemoveContainer" containerID="32cf2660cf0c56275d5b9359075572c35015f6e738a79df2b724161062328cb1" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.574593 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.599802 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b15f7a69-20aa-4d92-9873-a6263b7b59b3-log-httpd\") pod \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\" (UID: \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\") " Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.599878 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b15f7a69-20aa-4d92-9873-a6263b7b59b3-run-httpd\") pod \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\" (UID: \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\") " Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.599970 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxp86\" (UniqueName: \"kubernetes.io/projected/b15f7a69-20aa-4d92-9873-a6263b7b59b3-kube-api-access-pxp86\") pod \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\" (UID: \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\") " Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.600027 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b15f7a69-20aa-4d92-9873-a6263b7b59b3-scripts\") pod \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\" (UID: \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\") " Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.600071 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b15f7a69-20aa-4d92-9873-a6263b7b59b3-ceilometer-tls-certs\") pod \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\" (UID: \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\") " Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.600116 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b15f7a69-20aa-4d92-9873-a6263b7b59b3-config-data\") pod \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\" (UID: \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\") " Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.600146 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b15f7a69-20aa-4d92-9873-a6263b7b59b3-combined-ca-bundle\") pod \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\" (UID: \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\") " Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.600182 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b15f7a69-20aa-4d92-9873-a6263b7b59b3-sg-core-conf-yaml\") pod \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\" (UID: \"b15f7a69-20aa-4d92-9873-a6263b7b59b3\") " Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.601747 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b15f7a69-20aa-4d92-9873-a6263b7b59b3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b15f7a69-20aa-4d92-9873-a6263b7b59b3" (UID: "b15f7a69-20aa-4d92-9873-a6263b7b59b3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.601908 4821 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b15f7a69-20aa-4d92-9873-a6263b7b59b3-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.602068 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b15f7a69-20aa-4d92-9873-a6263b7b59b3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b15f7a69-20aa-4d92-9873-a6263b7b59b3" (UID: "b15f7a69-20aa-4d92-9873-a6263b7b59b3"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.613233 4821 scope.go:117] "RemoveContainer" containerID="beb943dfcc398e2bc390ca5e9937fc05c8801cba0fb78703eb77f1245603cda6" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.636710 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b15f7a69-20aa-4d92-9873-a6263b7b59b3-kube-api-access-pxp86" (OuterVolumeSpecName: "kube-api-access-pxp86") pod "b15f7a69-20aa-4d92-9873-a6263b7b59b3" (UID: "b15f7a69-20aa-4d92-9873-a6263b7b59b3"). InnerVolumeSpecName "kube-api-access-pxp86". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.636996 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b15f7a69-20aa-4d92-9873-a6263b7b59b3-scripts" (OuterVolumeSpecName: "scripts") pod "b15f7a69-20aa-4d92-9873-a6263b7b59b3" (UID: "b15f7a69-20aa-4d92-9873-a6263b7b59b3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.678518 4821 scope.go:117] "RemoveContainer" containerID="a3f818405d4a480aafba7fa21ef4ec4d247131a52f61ec9d6f490ce22d2224c2" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.686340 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b15f7a69-20aa-4d92-9873-a6263b7b59b3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b15f7a69-20aa-4d92-9873-a6263b7b59b3" (UID: "b15f7a69-20aa-4d92-9873-a6263b7b59b3"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.690477 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b15f7a69-20aa-4d92-9873-a6263b7b59b3-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b15f7a69-20aa-4d92-9873-a6263b7b59b3" (UID: "b15f7a69-20aa-4d92-9873-a6263b7b59b3"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.703480 4821 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b15f7a69-20aa-4d92-9873-a6263b7b59b3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.703751 4821 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b15f7a69-20aa-4d92-9873-a6263b7b59b3-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.703824 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxp86\" (UniqueName: \"kubernetes.io/projected/b15f7a69-20aa-4d92-9873-a6263b7b59b3-kube-api-access-pxp86\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.703892 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b15f7a69-20aa-4d92-9873-a6263b7b59b3-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.703969 4821 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b15f7a69-20aa-4d92-9873-a6263b7b59b3-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.705049 4821 scope.go:117] "RemoveContainer" 
containerID="a9c2426d8fdc46e6c8cf64d158a333e2060fca2bab1f4ee82454bfa0de8778c2" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.719514 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b15f7a69-20aa-4d92-9873-a6263b7b59b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b15f7a69-20aa-4d92-9873-a6263b7b59b3" (UID: "b15f7a69-20aa-4d92-9873-a6263b7b59b3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.736060 4821 scope.go:117] "RemoveContainer" containerID="32cf2660cf0c56275d5b9359075572c35015f6e738a79df2b724161062328cb1" Mar 09 19:08:25 crc kubenswrapper[4821]: E0309 19:08:25.736749 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32cf2660cf0c56275d5b9359075572c35015f6e738a79df2b724161062328cb1\": container with ID starting with 32cf2660cf0c56275d5b9359075572c35015f6e738a79df2b724161062328cb1 not found: ID does not exist" containerID="32cf2660cf0c56275d5b9359075572c35015f6e738a79df2b724161062328cb1" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.737146 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32cf2660cf0c56275d5b9359075572c35015f6e738a79df2b724161062328cb1"} err="failed to get container status \"32cf2660cf0c56275d5b9359075572c35015f6e738a79df2b724161062328cb1\": rpc error: code = NotFound desc = could not find container \"32cf2660cf0c56275d5b9359075572c35015f6e738a79df2b724161062328cb1\": container with ID starting with 32cf2660cf0c56275d5b9359075572c35015f6e738a79df2b724161062328cb1 not found: ID does not exist" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.737260 4821 scope.go:117] "RemoveContainer" containerID="beb943dfcc398e2bc390ca5e9937fc05c8801cba0fb78703eb77f1245603cda6" Mar 09 19:08:25 crc kubenswrapper[4821]: E0309 
19:08:25.737778 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"beb943dfcc398e2bc390ca5e9937fc05c8801cba0fb78703eb77f1245603cda6\": container with ID starting with beb943dfcc398e2bc390ca5e9937fc05c8801cba0fb78703eb77f1245603cda6 not found: ID does not exist" containerID="beb943dfcc398e2bc390ca5e9937fc05c8801cba0fb78703eb77f1245603cda6" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.737826 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"beb943dfcc398e2bc390ca5e9937fc05c8801cba0fb78703eb77f1245603cda6"} err="failed to get container status \"beb943dfcc398e2bc390ca5e9937fc05c8801cba0fb78703eb77f1245603cda6\": rpc error: code = NotFound desc = could not find container \"beb943dfcc398e2bc390ca5e9937fc05c8801cba0fb78703eb77f1245603cda6\": container with ID starting with beb943dfcc398e2bc390ca5e9937fc05c8801cba0fb78703eb77f1245603cda6 not found: ID does not exist" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.737855 4821 scope.go:117] "RemoveContainer" containerID="a3f818405d4a480aafba7fa21ef4ec4d247131a52f61ec9d6f490ce22d2224c2" Mar 09 19:08:25 crc kubenswrapper[4821]: E0309 19:08:25.738150 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3f818405d4a480aafba7fa21ef4ec4d247131a52f61ec9d6f490ce22d2224c2\": container with ID starting with a3f818405d4a480aafba7fa21ef4ec4d247131a52f61ec9d6f490ce22d2224c2 not found: ID does not exist" containerID="a3f818405d4a480aafba7fa21ef4ec4d247131a52f61ec9d6f490ce22d2224c2" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.738263 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3f818405d4a480aafba7fa21ef4ec4d247131a52f61ec9d6f490ce22d2224c2"} err="failed to get container status \"a3f818405d4a480aafba7fa21ef4ec4d247131a52f61ec9d6f490ce22d2224c2\": rpc 
error: code = NotFound desc = could not find container \"a3f818405d4a480aafba7fa21ef4ec4d247131a52f61ec9d6f490ce22d2224c2\": container with ID starting with a3f818405d4a480aafba7fa21ef4ec4d247131a52f61ec9d6f490ce22d2224c2 not found: ID does not exist" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.738408 4821 scope.go:117] "RemoveContainer" containerID="a9c2426d8fdc46e6c8cf64d158a333e2060fca2bab1f4ee82454bfa0de8778c2" Mar 09 19:08:25 crc kubenswrapper[4821]: E0309 19:08:25.738831 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9c2426d8fdc46e6c8cf64d158a333e2060fca2bab1f4ee82454bfa0de8778c2\": container with ID starting with a9c2426d8fdc46e6c8cf64d158a333e2060fca2bab1f4ee82454bfa0de8778c2 not found: ID does not exist" containerID="a9c2426d8fdc46e6c8cf64d158a333e2060fca2bab1f4ee82454bfa0de8778c2" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.738932 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9c2426d8fdc46e6c8cf64d158a333e2060fca2bab1f4ee82454bfa0de8778c2"} err="failed to get container status \"a9c2426d8fdc46e6c8cf64d158a333e2060fca2bab1f4ee82454bfa0de8778c2\": rpc error: code = NotFound desc = could not find container \"a9c2426d8fdc46e6c8cf64d158a333e2060fca2bab1f4ee82454bfa0de8778c2\": container with ID starting with a9c2426d8fdc46e6c8cf64d158a333e2060fca2bab1f4ee82454bfa0de8778c2 not found: ID does not exist" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.778977 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b15f7a69-20aa-4d92-9873-a6263b7b59b3-config-data" (OuterVolumeSpecName: "config-data") pod "b15f7a69-20aa-4d92-9873-a6263b7b59b3" (UID: "b15f7a69-20aa-4d92-9873-a6263b7b59b3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.806452 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b15f7a69-20aa-4d92-9873-a6263b7b59b3-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.807548 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b15f7a69-20aa-4d92-9873-a6263b7b59b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.914108 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.936150 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.953914 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:08:25 crc kubenswrapper[4821]: E0309 19:08:25.954283 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15f7a69-20aa-4d92-9873-a6263b7b59b3" containerName="proxy-httpd" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.954300 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15f7a69-20aa-4d92-9873-a6263b7b59b3" containerName="proxy-httpd" Mar 09 19:08:25 crc kubenswrapper[4821]: E0309 19:08:25.954312 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15f7a69-20aa-4d92-9873-a6263b7b59b3" containerName="sg-core" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.954424 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15f7a69-20aa-4d92-9873-a6263b7b59b3" containerName="sg-core" Mar 09 19:08:25 crc kubenswrapper[4821]: E0309 19:08:25.954447 4821 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b15f7a69-20aa-4d92-9873-a6263b7b59b3" containerName="ceilometer-notification-agent" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.954452 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15f7a69-20aa-4d92-9873-a6263b7b59b3" containerName="ceilometer-notification-agent" Mar 09 19:08:25 crc kubenswrapper[4821]: E0309 19:08:25.954466 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15f7a69-20aa-4d92-9873-a6263b7b59b3" containerName="ceilometer-central-agent" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.954471 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15f7a69-20aa-4d92-9873-a6263b7b59b3" containerName="ceilometer-central-agent" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.954619 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="b15f7a69-20aa-4d92-9873-a6263b7b59b3" containerName="proxy-httpd" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.954633 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="b15f7a69-20aa-4d92-9873-a6263b7b59b3" containerName="ceilometer-notification-agent" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.954641 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="b15f7a69-20aa-4d92-9873-a6263b7b59b3" containerName="sg-core" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.954654 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="b15f7a69-20aa-4d92-9873-a6263b7b59b3" containerName="ceilometer-central-agent" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.956080 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.970956 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.971168 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.975609 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Mar 09 19:08:25 crc kubenswrapper[4821]: I0309 19:08:25.976980 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.118844 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/83dcc991-d43b-4801-be85-77ed3c084aa8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"83dcc991-d43b-4801-be85-77ed3c084aa8\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.118908 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83dcc991-d43b-4801-be85-77ed3c084aa8-run-httpd\") pod \"ceilometer-0\" (UID: \"83dcc991-d43b-4801-be85-77ed3c084aa8\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.118959 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83dcc991-d43b-4801-be85-77ed3c084aa8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"83dcc991-d43b-4801-be85-77ed3c084aa8\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.119005 4821 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83dcc991-d43b-4801-be85-77ed3c084aa8-log-httpd\") pod \"ceilometer-0\" (UID: \"83dcc991-d43b-4801-be85-77ed3c084aa8\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.119063 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hb68\" (UniqueName: \"kubernetes.io/projected/83dcc991-d43b-4801-be85-77ed3c084aa8-kube-api-access-5hb68\") pod \"ceilometer-0\" (UID: \"83dcc991-d43b-4801-be85-77ed3c084aa8\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.119094 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83dcc991-d43b-4801-be85-77ed3c084aa8-config-data\") pod \"ceilometer-0\" (UID: \"83dcc991-d43b-4801-be85-77ed3c084aa8\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.119127 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83dcc991-d43b-4801-be85-77ed3c084aa8-scripts\") pod \"ceilometer-0\" (UID: \"83dcc991-d43b-4801-be85-77ed3c084aa8\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.119157 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/83dcc991-d43b-4801-be85-77ed3c084aa8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"83dcc991-d43b-4801-be85-77ed3c084aa8\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.220282 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/83dcc991-d43b-4801-be85-77ed3c084aa8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"83dcc991-d43b-4801-be85-77ed3c084aa8\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.220356 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/83dcc991-d43b-4801-be85-77ed3c084aa8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"83dcc991-d43b-4801-be85-77ed3c084aa8\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.220392 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83dcc991-d43b-4801-be85-77ed3c084aa8-run-httpd\") pod \"ceilometer-0\" (UID: \"83dcc991-d43b-4801-be85-77ed3c084aa8\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.220436 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83dcc991-d43b-4801-be85-77ed3c084aa8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"83dcc991-d43b-4801-be85-77ed3c084aa8\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.220478 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83dcc991-d43b-4801-be85-77ed3c084aa8-log-httpd\") pod \"ceilometer-0\" (UID: \"83dcc991-d43b-4801-be85-77ed3c084aa8\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.220540 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hb68\" (UniqueName: \"kubernetes.io/projected/83dcc991-d43b-4801-be85-77ed3c084aa8-kube-api-access-5hb68\") pod \"ceilometer-0\" 
(UID: \"83dcc991-d43b-4801-be85-77ed3c084aa8\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.220580 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83dcc991-d43b-4801-be85-77ed3c084aa8-config-data\") pod \"ceilometer-0\" (UID: \"83dcc991-d43b-4801-be85-77ed3c084aa8\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.220618 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83dcc991-d43b-4801-be85-77ed3c084aa8-scripts\") pod \"ceilometer-0\" (UID: \"83dcc991-d43b-4801-be85-77ed3c084aa8\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.221051 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83dcc991-d43b-4801-be85-77ed3c084aa8-run-httpd\") pod \"ceilometer-0\" (UID: \"83dcc991-d43b-4801-be85-77ed3c084aa8\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.222712 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83dcc991-d43b-4801-be85-77ed3c084aa8-log-httpd\") pod \"ceilometer-0\" (UID: \"83dcc991-d43b-4801-be85-77ed3c084aa8\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.225987 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/83dcc991-d43b-4801-be85-77ed3c084aa8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"83dcc991-d43b-4801-be85-77ed3c084aa8\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.226852 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83dcc991-d43b-4801-be85-77ed3c084aa8-scripts\") pod \"ceilometer-0\" (UID: \"83dcc991-d43b-4801-be85-77ed3c084aa8\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.233337 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83dcc991-d43b-4801-be85-77ed3c084aa8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"83dcc991-d43b-4801-be85-77ed3c084aa8\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.234575 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83dcc991-d43b-4801-be85-77ed3c084aa8-config-data\") pod \"ceilometer-0\" (UID: \"83dcc991-d43b-4801-be85-77ed3c084aa8\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.237347 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/83dcc991-d43b-4801-be85-77ed3c084aa8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"83dcc991-d43b-4801-be85-77ed3c084aa8\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.243231 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hb68\" (UniqueName: \"kubernetes.io/projected/83dcc991-d43b-4801-be85-77ed3c084aa8-kube-api-access-5hb68\") pod \"ceilometer-0\" (UID: \"83dcc991-d43b-4801-be85-77ed3c084aa8\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.325979 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.327130 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.425544 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-scripts\") pod \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\" (UID: \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\") " Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.425598 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-config-data\") pod \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\" (UID: \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\") " Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.425621 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-etc-machine-id\") pod \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\" (UID: \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\") " Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.425735 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-config-data-custom\") pod \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\" (UID: \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\") " Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.425798 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-cert-memcached-mtls\") pod \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\" (UID: \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\") " Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.425847 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rbm9\" 
(UniqueName: \"kubernetes.io/projected/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-kube-api-access-6rbm9\") pod \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\" (UID: \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\") " Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.425873 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-combined-ca-bundle\") pod \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\" (UID: \"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8\") " Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.429107 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8" (UID: "52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.434690 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-scripts" (OuterVolumeSpecName: "scripts") pod "52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8" (UID: "52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.434876 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.439593 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.455785 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-kube-api-access-6rbm9" (OuterVolumeSpecName: "kube-api-access-6rbm9") pod "52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8" (UID: "52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8"). InnerVolumeSpecName "kube-api-access-6rbm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.455815 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8" (UID: "52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.518443 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8" (UID: "52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.527256 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-etc-nvme\") pod \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.527339 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f71db1c-e77e-4cb8-a2ed-89045415fd22-scripts\") pod \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.527355 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-var-lib-cinder\") pod \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.527416 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-sys\") pod \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.527436 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-lib-modules\") pod \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.527455 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-run\") pod \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.527519 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f71db1c-e77e-4cb8-a2ed-89045415fd22-config-data-custom\") pod \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.527539 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-dev\") pod \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.527558 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f71db1c-e77e-4cb8-a2ed-89045415fd22-config-data\") pod \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.527586 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-etc-machine-id\") pod \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.527608 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-etc-iscsi\") pod \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.527630 4821 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3f71db1c-e77e-4cb8-a2ed-89045415fd22-cert-memcached-mtls\") pod \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.527657 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-var-locks-brick\") pod \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.527682 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-var-locks-cinder\") pod \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.527705 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f71db1c-e77e-4cb8-a2ed-89045415fd22-combined-ca-bundle\") pod \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.527721 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv8dl\" (UniqueName: \"kubernetes.io/projected/3f71db1c-e77e-4cb8-a2ed-89045415fd22-kube-api-access-kv8dl\") pod \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\" (UID: \"3f71db1c-e77e-4cb8-a2ed-89045415fd22\") " Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.528029 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rbm9\" (UniqueName: \"kubernetes.io/projected/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-kube-api-access-6rbm9\") on node \"crc\" DevicePath \"\"" Mar 09 
19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.528040 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.528049 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.528058 4821 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.528066 4821 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.529503 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-dev" (OuterVolumeSpecName: "dev") pod "3f71db1c-e77e-4cb8-a2ed-89045415fd22" (UID: "3f71db1c-e77e-4cb8-a2ed-89045415fd22"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.529555 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "3f71db1c-e77e-4cb8-a2ed-89045415fd22" (UID: "3f71db1c-e77e-4cb8-a2ed-89045415fd22"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.529787 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "3f71db1c-e77e-4cb8-a2ed-89045415fd22" (UID: "3f71db1c-e77e-4cb8-a2ed-89045415fd22"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.529854 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "3f71db1c-e77e-4cb8-a2ed-89045415fd22" (UID: "3f71db1c-e77e-4cb8-a2ed-89045415fd22"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.532405 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3f71db1c-e77e-4cb8-a2ed-89045415fd22" (UID: "3f71db1c-e77e-4cb8-a2ed-89045415fd22"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.532434 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3f71db1c-e77e-4cb8-a2ed-89045415fd22" (UID: "3f71db1c-e77e-4cb8-a2ed-89045415fd22"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.532452 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "3f71db1c-e77e-4cb8-a2ed-89045415fd22" (UID: "3f71db1c-e77e-4cb8-a2ed-89045415fd22"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.532472 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-sys" (OuterVolumeSpecName: "sys") pod "3f71db1c-e77e-4cb8-a2ed-89045415fd22" (UID: "3f71db1c-e77e-4cb8-a2ed-89045415fd22"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.532498 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-run" (OuterVolumeSpecName: "run") pod "3f71db1c-e77e-4cb8-a2ed-89045415fd22" (UID: "3f71db1c-e77e-4cb8-a2ed-89045415fd22"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.532728 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "3f71db1c-e77e-4cb8-a2ed-89045415fd22" (UID: "3f71db1c-e77e-4cb8-a2ed-89045415fd22"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.541588 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f71db1c-e77e-4cb8-a2ed-89045415fd22-scripts" (OuterVolumeSpecName: "scripts") pod "3f71db1c-e77e-4cb8-a2ed-89045415fd22" (UID: "3f71db1c-e77e-4cb8-a2ed-89045415fd22"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.547729 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f71db1c-e77e-4cb8-a2ed-89045415fd22-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3f71db1c-e77e-4cb8-a2ed-89045415fd22" (UID: "3f71db1c-e77e-4cb8-a2ed-89045415fd22"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.548864 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f71db1c-e77e-4cb8-a2ed-89045415fd22-kube-api-access-kv8dl" (OuterVolumeSpecName: "kube-api-access-kv8dl") pod "3f71db1c-e77e-4cb8-a2ed-89045415fd22" (UID: "3f71db1c-e77e-4cb8-a2ed-89045415fd22"). InnerVolumeSpecName "kube-api-access-kv8dl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.599682 4821 generic.go:334] "Generic (PLEG): container finished" podID="3f71db1c-e77e-4cb8-a2ed-89045415fd22" containerID="3a884ec89a1871af85ad701e09c9f6ee3a34f3fcfef58510bb86747862aed66e" exitCode=0 Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.600188 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.600206 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"3f71db1c-e77e-4cb8-a2ed-89045415fd22","Type":"ContainerDied","Data":"3a884ec89a1871af85ad701e09c9f6ee3a34f3fcfef58510bb86747862aed66e"} Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.600233 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"3f71db1c-e77e-4cb8-a2ed-89045415fd22","Type":"ContainerDied","Data":"105f5ed1dbae190dd2de7cb00bc2159a54db929319a024266e5d215a119dc1fd"} Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.600248 4821 scope.go:117] "RemoveContainer" containerID="aca73d11491888496926f465b2660b751d170c5d34853a33f2df69ff6c11ea6e" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.611726 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f71db1c-e77e-4cb8-a2ed-89045415fd22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f71db1c-e77e-4cb8-a2ed-89045415fd22" (UID: "3f71db1c-e77e-4cb8-a2ed-89045415fd22"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.611804 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8" (UID: "52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.613383 4821 generic.go:334] "Generic (PLEG): container finished" podID="52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8" containerID="cbc9a70748dc98d83ca75eb06b990ac78f0f939bd9bbc55b4c3e4fa74d098871" exitCode=0 Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.613420 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8","Type":"ContainerDied","Data":"cbc9a70748dc98d83ca75eb06b990ac78f0f939bd9bbc55b4c3e4fa74d098871"} Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.613443 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.613462 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8","Type":"ContainerDied","Data":"e3af97ec2a08c27495107582e30363347cb852cdbe32b86bc33995182c8dc505"} Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.617619 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-config-data" (OuterVolumeSpecName: "config-data") pod "52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8" (UID: "52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.625292 4821 scope.go:117] "RemoveContainer" containerID="3a884ec89a1871af85ad701e09c9f6ee3a34f3fcfef58510bb86747862aed66e" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.631994 4821 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-var-locks-brick\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.632022 4821 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.632031 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f71db1c-e77e-4cb8-a2ed-89045415fd22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.632238 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv8dl\" (UniqueName: \"kubernetes.io/projected/3f71db1c-e77e-4cb8-a2ed-89045415fd22-kube-api-access-kv8dl\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.632257 4821 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.632265 4821 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-etc-nvme\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.632273 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/3f71db1c-e77e-4cb8-a2ed-89045415fd22-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.632283 4821 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.632293 4821 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-sys\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.632303 4821 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-lib-modules\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.632313 4821 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-run\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.632341 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.632351 4821 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f71db1c-e77e-4cb8-a2ed-89045415fd22-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.632361 4821 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-dev\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.632371 4821 
reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.632382 4821 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3f71db1c-e77e-4cb8-a2ed-89045415fd22-etc-iscsi\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.646897 4821 scope.go:117] "RemoveContainer" containerID="aca73d11491888496926f465b2660b751d170c5d34853a33f2df69ff6c11ea6e" Mar 09 19:08:26 crc kubenswrapper[4821]: E0309 19:08:26.648681 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aca73d11491888496926f465b2660b751d170c5d34853a33f2df69ff6c11ea6e\": container with ID starting with aca73d11491888496926f465b2660b751d170c5d34853a33f2df69ff6c11ea6e not found: ID does not exist" containerID="aca73d11491888496926f465b2660b751d170c5d34853a33f2df69ff6c11ea6e" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.648750 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aca73d11491888496926f465b2660b751d170c5d34853a33f2df69ff6c11ea6e"} err="failed to get container status \"aca73d11491888496926f465b2660b751d170c5d34853a33f2df69ff6c11ea6e\": rpc error: code = NotFound desc = could not find container \"aca73d11491888496926f465b2660b751d170c5d34853a33f2df69ff6c11ea6e\": container with ID starting with aca73d11491888496926f465b2660b751d170c5d34853a33f2df69ff6c11ea6e not found: ID does not exist" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.648775 4821 scope.go:117] "RemoveContainer" containerID="3a884ec89a1871af85ad701e09c9f6ee3a34f3fcfef58510bb86747862aed66e" Mar 09 19:08:26 crc kubenswrapper[4821]: E0309 19:08:26.649130 4821 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"3a884ec89a1871af85ad701e09c9f6ee3a34f3fcfef58510bb86747862aed66e\": container with ID starting with 3a884ec89a1871af85ad701e09c9f6ee3a34f3fcfef58510bb86747862aed66e not found: ID does not exist" containerID="3a884ec89a1871af85ad701e09c9f6ee3a34f3fcfef58510bb86747862aed66e" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.649169 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a884ec89a1871af85ad701e09c9f6ee3a34f3fcfef58510bb86747862aed66e"} err="failed to get container status \"3a884ec89a1871af85ad701e09c9f6ee3a34f3fcfef58510bb86747862aed66e\": rpc error: code = NotFound desc = could not find container \"3a884ec89a1871af85ad701e09c9f6ee3a34f3fcfef58510bb86747862aed66e\": container with ID starting with 3a884ec89a1871af85ad701e09c9f6ee3a34f3fcfef58510bb86747862aed66e not found: ID does not exist" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.649195 4821 scope.go:117] "RemoveContainer" containerID="7d74df743d8af02ecf4dab5ca7265b0cc4c88c003501d72c9a3e3832242cba03" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.690291 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f71db1c-e77e-4cb8-a2ed-89045415fd22-config-data" (OuterVolumeSpecName: "config-data") pod "3f71db1c-e77e-4cb8-a2ed-89045415fd22" (UID: "3f71db1c-e77e-4cb8-a2ed-89045415fd22"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.692362 4821 scope.go:117] "RemoveContainer" containerID="cbc9a70748dc98d83ca75eb06b990ac78f0f939bd9bbc55b4c3e4fa74d098871" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.692516 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f71db1c-e77e-4cb8-a2ed-89045415fd22-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "3f71db1c-e77e-4cb8-a2ed-89045415fd22" (UID: "3f71db1c-e77e-4cb8-a2ed-89045415fd22"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.723217 4821 scope.go:117] "RemoveContainer" containerID="7d74df743d8af02ecf4dab5ca7265b0cc4c88c003501d72c9a3e3832242cba03" Mar 09 19:08:26 crc kubenswrapper[4821]: E0309 19:08:26.723714 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d74df743d8af02ecf4dab5ca7265b0cc4c88c003501d72c9a3e3832242cba03\": container with ID starting with 7d74df743d8af02ecf4dab5ca7265b0cc4c88c003501d72c9a3e3832242cba03 not found: ID does not exist" containerID="7d74df743d8af02ecf4dab5ca7265b0cc4c88c003501d72c9a3e3832242cba03" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.723793 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d74df743d8af02ecf4dab5ca7265b0cc4c88c003501d72c9a3e3832242cba03"} err="failed to get container status \"7d74df743d8af02ecf4dab5ca7265b0cc4c88c003501d72c9a3e3832242cba03\": rpc error: code = NotFound desc = could not find container \"7d74df743d8af02ecf4dab5ca7265b0cc4c88c003501d72c9a3e3832242cba03\": container with ID starting with 7d74df743d8af02ecf4dab5ca7265b0cc4c88c003501d72c9a3e3832242cba03 not found: ID does not exist" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.723955 4821 
scope.go:117] "RemoveContainer" containerID="cbc9a70748dc98d83ca75eb06b990ac78f0f939bd9bbc55b4c3e4fa74d098871" Mar 09 19:08:26 crc kubenswrapper[4821]: E0309 19:08:26.724367 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbc9a70748dc98d83ca75eb06b990ac78f0f939bd9bbc55b4c3e4fa74d098871\": container with ID starting with cbc9a70748dc98d83ca75eb06b990ac78f0f939bd9bbc55b4c3e4fa74d098871 not found: ID does not exist" containerID="cbc9a70748dc98d83ca75eb06b990ac78f0f939bd9bbc55b4c3e4fa74d098871" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.724399 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbc9a70748dc98d83ca75eb06b990ac78f0f939bd9bbc55b4c3e4fa74d098871"} err="failed to get container status \"cbc9a70748dc98d83ca75eb06b990ac78f0f939bd9bbc55b4c3e4fa74d098871\": rpc error: code = NotFound desc = could not find container \"cbc9a70748dc98d83ca75eb06b990ac78f0f939bd9bbc55b4c3e4fa74d098871\": container with ID starting with cbc9a70748dc98d83ca75eb06b990ac78f0f939bd9bbc55b4c3e4fa74d098871 not found: ID does not exist" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.733727 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f71db1c-e77e-4cb8-a2ed-89045415fd22-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.733902 4821 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3f71db1c-e77e-4cb8-a2ed-89045415fd22-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.842890 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.934818 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["watcher-kuttl-default/cinder-backup-0"] Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.942715 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Mar 09 19:08:26 crc kubenswrapper[4821]: I0309 19:08:26.990844 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.005586 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.019655 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Mar 09 19:08:27 crc kubenswrapper[4821]: E0309 19:08:27.023622 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8" containerName="cinder-scheduler" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.023786 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8" containerName="cinder-scheduler" Mar 09 19:08:27 crc kubenswrapper[4821]: E0309 19:08:27.023868 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f71db1c-e77e-4cb8-a2ed-89045415fd22" containerName="cinder-backup" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.023970 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f71db1c-e77e-4cb8-a2ed-89045415fd22" containerName="cinder-backup" Mar 09 19:08:27 crc kubenswrapper[4821]: E0309 19:08:27.024050 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8" containerName="probe" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.024100 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8" containerName="probe" Mar 09 19:08:27 crc kubenswrapper[4821]: E0309 19:08:27.024156 4821 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="3f71db1c-e77e-4cb8-a2ed-89045415fd22" containerName="probe" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.024204 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f71db1c-e77e-4cb8-a2ed-89045415fd22" containerName="probe" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.024410 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f71db1c-e77e-4cb8-a2ed-89045415fd22" containerName="cinder-backup" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.024477 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8" containerName="cinder-scheduler" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.024536 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f71db1c-e77e-4cb8-a2ed-89045415fd22" containerName="probe" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.024596 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8" containerName="probe" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.025567 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.029072 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-scheduler-config-data" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.032224 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.033937 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.035686 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-backup-config-data" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.042358 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.054628 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.139629 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg8fz\" (UniqueName: \"kubernetes.io/projected/044a622b-d62c-414f-afe7-48fb8b2bf7c7-kube-api-access-wg8fz\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.139672 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.139700 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.139776 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-dev\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.139829 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/044a622b-d62c-414f-afe7-48fb8b2bf7c7-config-data-custom\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.139872 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/044a622b-d62c-414f-afe7-48fb8b2bf7c7-scripts\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.139895 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.139940 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/044a622b-d62c-414f-afe7-48fb8b2bf7c7-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.139955 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-lib-modules\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.139969 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.139992 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.140007 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.140021 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-scripts\") pod \"cinder-scheduler-0\" (UID: \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.140035 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-config-data\") pod \"cinder-scheduler-0\" (UID: \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.140049 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-run\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.140065 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.140079 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-etc-nvme\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.140096 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.140113 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qlv9\" (UniqueName: 
\"kubernetes.io/projected/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-kube-api-access-7qlv9\") pod \"cinder-scheduler-0\" (UID: \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.140138 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.140156 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-sys\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.140177 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/044a622b-d62c-414f-afe7-48fb8b2bf7c7-config-data\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.140202 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044a622b-d62c-414f-afe7-48fb8b2bf7c7-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.241919 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.242298 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-sys\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.242359 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/044a622b-d62c-414f-afe7-48fb8b2bf7c7-config-data\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.242388 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044a622b-d62c-414f-afe7-48fb8b2bf7c7-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.242488 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg8fz\" (UniqueName: \"kubernetes.io/projected/044a622b-d62c-414f-afe7-48fb8b2bf7c7-kube-api-access-wg8fz\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.242514 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.242539 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.242585 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-dev\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.242620 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/044a622b-d62c-414f-afe7-48fb8b2bf7c7-config-data-custom\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.242645 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/044a622b-d62c-414f-afe7-48fb8b2bf7c7-scripts\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.242665 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 
19:08:27.242722 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.242740 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/044a622b-d62c-414f-afe7-48fb8b2bf7c7-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.242757 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.242763 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-lib-modules\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.242809 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-lib-modules\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.242873 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: 
\"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.242894 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.242916 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-dev\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.242939 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-sys\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.242875 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.244011 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" 
Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.244057 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-config-data\") pod \"cinder-scheduler-0\" (UID: \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.244079 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-run\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.244107 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-scripts\") pod \"cinder-scheduler-0\" (UID: \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.244106 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.244136 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.244193 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-etc-nvme\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.244234 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.244266 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qlv9\" (UniqueName: \"kubernetes.io/projected/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-kube-api-access-7qlv9\") pod \"cinder-scheduler-0\" (UID: \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.244568 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-etc-nvme\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.244602 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.244714 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-run\") pod \"cinder-backup-0\" (UID: 
\"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.244723 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.248574 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/044a622b-d62c-414f-afe7-48fb8b2bf7c7-config-data-custom\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.248592 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/044a622b-d62c-414f-afe7-48fb8b2bf7c7-scripts\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.248966 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.248981 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/044a622b-d62c-414f-afe7-48fb8b2bf7c7-config-data\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.249308 4821 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.251980 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-scripts\") pod \"cinder-scheduler-0\" (UID: \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.250358 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.253404 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-config-data\") pod \"cinder-scheduler-0\" (UID: \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.254085 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/044a622b-d62c-414f-afe7-48fb8b2bf7c7-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.259531 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/044a622b-d62c-414f-afe7-48fb8b2bf7c7-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.265802 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qlv9\" (UniqueName: \"kubernetes.io/projected/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-kube-api-access-7qlv9\") pod \"cinder-scheduler-0\" (UID: \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\") " pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.271222 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg8fz\" (UniqueName: \"kubernetes.io/projected/044a622b-d62c-414f-afe7-48fb8b2bf7c7-kube-api-access-wg8fz\") pod \"cinder-backup-0\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") " pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.360203 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.365890 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.553230 4821 scope.go:117] "RemoveContainer" containerID="2e97656a0c5c8ccdf249fb34ee3c3f4cd8501de8fced67c6294f9be90f6444d8" Mar 09 19:08:27 crc kubenswrapper[4821]: E0309 19:08:27.553767 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.570144 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f71db1c-e77e-4cb8-a2ed-89045415fd22" path="/var/lib/kubelet/pods/3f71db1c-e77e-4cb8-a2ed-89045415fd22/volumes" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.570999 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8" path="/var/lib/kubelet/pods/52913e39-f3e0-4e42-a8a1-cd4fae3d7cb8/volumes" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.571752 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b15f7a69-20aa-4d92-9873-a6263b7b59b3" path="/var/lib/kubelet/pods/b15f7a69-20aa-4d92-9873-a6263b7b59b3/volumes" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.658180 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"83dcc991-d43b-4801-be85-77ed3c084aa8","Type":"ContainerStarted","Data":"4bd24185f802ffa9226b501a19da137150fe7c7725bdd2b9ea4f3c489092eab1"} Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.658230 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"83dcc991-d43b-4801-be85-77ed3c084aa8","Type":"ContainerStarted","Data":"75c6069233b04f2ec0fc2a2acf2d694069a6ff5b817f6aa5d0b4f9af005a3ad2"} Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.704521 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_a553e4f7-fcde-41f9-9a67-c319c2848109/watcher-decision-engine/0.log" Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.912312 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Mar 09 19:08:27 crc kubenswrapper[4821]: W0309 19:08:27.922684 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4561a3ed_cd15_44d8_a86e_8d442a4f1d80.slice/crio-b068c66021de377ad362dbe242d2df0e678555ea5228526cc6865255b50a0649 WatchSource:0}: Error finding container b068c66021de377ad362dbe242d2df0e678555ea5228526cc6865255b50a0649: Status 404 returned error can't find the container with id b068c66021de377ad362dbe242d2df0e678555ea5228526cc6865255b50a0649 Mar 09 19:08:27 crc kubenswrapper[4821]: I0309 19:08:27.960335 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.062631 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:28 crc kubenswrapper[4821]: E0309 19:08:28.131982 4821 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda553e4f7_fcde_41f9_9a67_c319c2848109.slice/crio-6687a5c7eb2e7e8da42346f9bf45706841c55da2e805e957015ecc00ffade998.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda553e4f7_fcde_41f9_9a67_c319c2848109.slice/crio-conmon-6687a5c7eb2e7e8da42346f9bf45706841c55da2e805e957015ecc00ffade998.scope\": RecentStats: unable to find data in memory cache]" Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.335708 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.462192 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a553e4f7-fcde-41f9-9a67-c319c2848109-logs\") pod \"a553e4f7-fcde-41f9-9a67-c319c2848109\" (UID: \"a553e4f7-fcde-41f9-9a67-c319c2848109\") " Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.462278 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a553e4f7-fcde-41f9-9a67-c319c2848109-custom-prometheus-ca\") pod \"a553e4f7-fcde-41f9-9a67-c319c2848109\" (UID: \"a553e4f7-fcde-41f9-9a67-c319c2848109\") " Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.462333 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a553e4f7-fcde-41f9-9a67-c319c2848109-cert-memcached-mtls\") pod \"a553e4f7-fcde-41f9-9a67-c319c2848109\" (UID: \"a553e4f7-fcde-41f9-9a67-c319c2848109\") " Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.462377 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a553e4f7-fcde-41f9-9a67-c319c2848109-combined-ca-bundle\") pod \"a553e4f7-fcde-41f9-9a67-c319c2848109\" (UID: \"a553e4f7-fcde-41f9-9a67-c319c2848109\") " Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.462453 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-wq6vz\" (UniqueName: \"kubernetes.io/projected/a553e4f7-fcde-41f9-9a67-c319c2848109-kube-api-access-wq6vz\") pod \"a553e4f7-fcde-41f9-9a67-c319c2848109\" (UID: \"a553e4f7-fcde-41f9-9a67-c319c2848109\") " Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.462549 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a553e4f7-fcde-41f9-9a67-c319c2848109-config-data\") pod \"a553e4f7-fcde-41f9-9a67-c319c2848109\" (UID: \"a553e4f7-fcde-41f9-9a67-c319c2848109\") " Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.463380 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a553e4f7-fcde-41f9-9a67-c319c2848109-logs" (OuterVolumeSpecName: "logs") pod "a553e4f7-fcde-41f9-9a67-c319c2848109" (UID: "a553e4f7-fcde-41f9-9a67-c319c2848109"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.480945 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a553e4f7-fcde-41f9-9a67-c319c2848109-kube-api-access-wq6vz" (OuterVolumeSpecName: "kube-api-access-wq6vz") pod "a553e4f7-fcde-41f9-9a67-c319c2848109" (UID: "a553e4f7-fcde-41f9-9a67-c319c2848109"). InnerVolumeSpecName "kube-api-access-wq6vz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.514002 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a553e4f7-fcde-41f9-9a67-c319c2848109-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "a553e4f7-fcde-41f9-9a67-c319c2848109" (UID: "a553e4f7-fcde-41f9-9a67-c319c2848109"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.523401 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a553e4f7-fcde-41f9-9a67-c319c2848109-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a553e4f7-fcde-41f9-9a67-c319c2848109" (UID: "a553e4f7-fcde-41f9-9a67-c319c2848109"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.549842 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a553e4f7-fcde-41f9-9a67-c319c2848109-config-data" (OuterVolumeSpecName: "config-data") pod "a553e4f7-fcde-41f9-9a67-c319c2848109" (UID: "a553e4f7-fcde-41f9-9a67-c319c2848109"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.559763 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a553e4f7-fcde-41f9-9a67-c319c2848109-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "a553e4f7-fcde-41f9-9a67-c319c2848109" (UID: "a553e4f7-fcde-41f9-9a67-c319c2848109"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.564709 4821 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a553e4f7-fcde-41f9-9a67-c319c2848109-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.564737 4821 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/a553e4f7-fcde-41f9-9a67-c319c2848109-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.564747 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a553e4f7-fcde-41f9-9a67-c319c2848109-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.564757 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wq6vz\" (UniqueName: \"kubernetes.io/projected/a553e4f7-fcde-41f9-9a67-c319c2848109-kube-api-access-wq6vz\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.564771 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a553e4f7-fcde-41f9-9a67-c319c2848109-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.564779 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a553e4f7-fcde-41f9-9a67-c319c2848109-logs\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.713052 4821 generic.go:334] "Generic (PLEG): container finished" podID="a553e4f7-fcde-41f9-9a67-c319c2848109" containerID="6687a5c7eb2e7e8da42346f9bf45706841c55da2e805e957015ecc00ffade998" exitCode=0 Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.713334 4821 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.713233 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"a553e4f7-fcde-41f9-9a67-c319c2848109","Type":"ContainerDied","Data":"6687a5c7eb2e7e8da42346f9bf45706841c55da2e805e957015ecc00ffade998"} Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.713405 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"a553e4f7-fcde-41f9-9a67-c319c2848109","Type":"ContainerDied","Data":"d0a067434bba825d633043e89603d0a2c5337307050e96e37f34e5bd62c3f9ab"} Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.713428 4821 scope.go:117] "RemoveContainer" containerID="6687a5c7eb2e7e8da42346f9bf45706841c55da2e805e957015ecc00ffade998" Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.726521 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"4561a3ed-cd15-44d8-a86e-8d442a4f1d80","Type":"ContainerStarted","Data":"b068c66021de377ad362dbe242d2df0e678555ea5228526cc6865255b50a0649"} Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.733545 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"044a622b-d62c-414f-afe7-48fb8b2bf7c7","Type":"ContainerStarted","Data":"4bf36e634d44a1fc99b65ae1cc16a3213298a2dbcfa05b999c4420c6f1502bc2"} Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.733590 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"044a622b-d62c-414f-afe7-48fb8b2bf7c7","Type":"ContainerStarted","Data":"03ab7d0588a0a83ddf89920ff2b4b4b2cf0895b58e42240cbcb76839ad0cc197"} Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.750924 4821 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"83dcc991-d43b-4801-be85-77ed3c084aa8","Type":"ContainerStarted","Data":"f4c21b06b5b219eb600a754873cd9d489a13a4721c7e4b76c2a020cc9720a8ae"} Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.751465 4821 scope.go:117] "RemoveContainer" containerID="6687a5c7eb2e7e8da42346f9bf45706841c55da2e805e957015ecc00ffade998" Mar 09 19:08:28 crc kubenswrapper[4821]: E0309 19:08:28.752723 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6687a5c7eb2e7e8da42346f9bf45706841c55da2e805e957015ecc00ffade998\": container with ID starting with 6687a5c7eb2e7e8da42346f9bf45706841c55da2e805e957015ecc00ffade998 not found: ID does not exist" containerID="6687a5c7eb2e7e8da42346f9bf45706841c55da2e805e957015ecc00ffade998" Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.752752 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6687a5c7eb2e7e8da42346f9bf45706841c55da2e805e957015ecc00ffade998"} err="failed to get container status \"6687a5c7eb2e7e8da42346f9bf45706841c55da2e805e957015ecc00ffade998\": rpc error: code = NotFound desc = could not find container \"6687a5c7eb2e7e8da42346f9bf45706841c55da2e805e957015ecc00ffade998\": container with ID starting with 6687a5c7eb2e7e8da42346f9bf45706841c55da2e805e957015ecc00ffade998 not found: ID does not exist" Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.767381 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.772501 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.786377 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] 
Mar 09 19:08:28 crc kubenswrapper[4821]: E0309 19:08:28.786700 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a553e4f7-fcde-41f9-9a67-c319c2848109" containerName="watcher-decision-engine"
Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.786715 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="a553e4f7-fcde-41f9-9a67-c319c2848109" containerName="watcher-decision-engine"
Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.786854 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="a553e4f7-fcde-41f9-9a67-c319c2848109" containerName="watcher-decision-engine"
Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.787359 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.789819 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data"
Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.805957 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.875353 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-645b5\" (UniqueName: \"kubernetes.io/projected/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-kube-api-access-645b5\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3f20f5e1-98fe-4725-98bc-e68e6b5cca00\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.875642 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3f20f5e1-98fe-4725-98bc-e68e6b5cca00\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.875726 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3f20f5e1-98fe-4725-98bc-e68e6b5cca00\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.875854 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3f20f5e1-98fe-4725-98bc-e68e6b5cca00\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.876345 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3f20f5e1-98fe-4725-98bc-e68e6b5cca00\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.876515 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3f20f5e1-98fe-4725-98bc-e68e6b5cca00\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.981482 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3f20f5e1-98fe-4725-98bc-e68e6b5cca00\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.981865 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-645b5\" (UniqueName: \"kubernetes.io/projected/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-kube-api-access-645b5\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3f20f5e1-98fe-4725-98bc-e68e6b5cca00\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.981917 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3f20f5e1-98fe-4725-98bc-e68e6b5cca00\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.981939 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3f20f5e1-98fe-4725-98bc-e68e6b5cca00\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.981976 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3f20f5e1-98fe-4725-98bc-e68e6b5cca00\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.982002 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3f20f5e1-98fe-4725-98bc-e68e6b5cca00\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.984650 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3f20f5e1-98fe-4725-98bc-e68e6b5cca00\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.993468 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3f20f5e1-98fe-4725-98bc-e68e6b5cca00\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:08:28 crc kubenswrapper[4821]: I0309 19:08:28.997160 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3f20f5e1-98fe-4725-98bc-e68e6b5cca00\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:08:29 crc kubenswrapper[4821]: I0309 19:08:29.000626 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3f20f5e1-98fe-4725-98bc-e68e6b5cca00\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:08:29 crc kubenswrapper[4821]: I0309 19:08:29.011731 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3f20f5e1-98fe-4725-98bc-e68e6b5cca00\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:08:29 crc kubenswrapper[4821]: I0309 19:08:29.016886 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-645b5\" (UniqueName: \"kubernetes.io/projected/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-kube-api-access-645b5\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3f20f5e1-98fe-4725-98bc-e68e6b5cca00\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:08:29 crc kubenswrapper[4821]: I0309 19:08:29.144718 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:08:29 crc kubenswrapper[4821]: I0309 19:08:29.569648 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a553e4f7-fcde-41f9-9a67-c319c2848109" path="/var/lib/kubelet/pods/a553e4f7-fcde-41f9-9a67-c319c2848109/volumes"
Mar 09 19:08:29 crc kubenswrapper[4821]: I0309 19:08:29.758001 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Mar 09 19:08:29 crc kubenswrapper[4821]: I0309 19:08:29.792970 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"4561a3ed-cd15-44d8-a86e-8d442a4f1d80","Type":"ContainerStarted","Data":"5929330278e814918751422a87b79546af727b9ea4929ad36d3fd16930056c98"}
Mar 09 19:08:29 crc kubenswrapper[4821]: I0309 19:08:29.803678 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"044a622b-d62c-414f-afe7-48fb8b2bf7c7","Type":"ContainerStarted","Data":"1f2305258f988cc1de25c49dd91e866063d285c213d1e0bc28c945ea4653c76e"}
Mar 09 19:08:29 crc kubenswrapper[4821]: I0309 19:08:29.819690 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"83dcc991-d43b-4801-be85-77ed3c084aa8","Type":"ContainerStarted","Data":"7d79b3e2b83f64cf6c0ae33e04262e9b931c94179e2084b42d0d09d9fcaa43f1"}
Mar 09 19:08:29 crc kubenswrapper[4821]: I0309 19:08:29.859347 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-backup-0" podStartSLOduration=3.859303792 podStartE2EDuration="3.859303792s" podCreationTimestamp="2026-03-09 19:08:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:08:29.849509106 +0000 UTC m=+2647.010884962" watchObservedRunningTime="2026-03-09 19:08:29.859303792 +0000 UTC m=+2647.020679648"
Mar 09 19:08:30 crc kubenswrapper[4821]: I0309 19:08:30.841409 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"3f20f5e1-98fe-4725-98bc-e68e6b5cca00","Type":"ContainerStarted","Data":"83b9f0c14ed16b9c35b651e6bbf38557b8ceb271448b546f3650d6ab9e5d3aab"}
Mar 09 19:08:30 crc kubenswrapper[4821]: I0309 19:08:30.843203 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"3f20f5e1-98fe-4725-98bc-e68e6b5cca00","Type":"ContainerStarted","Data":"905246e9c1a4eb402c047b4496e47b2d4415568ace43a983714e43f55855744c"}
Mar 09 19:08:30 crc kubenswrapper[4821]: I0309 19:08:30.848395 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"4561a3ed-cd15-44d8-a86e-8d442a4f1d80","Type":"ContainerStarted","Data":"cc32cb2cf304053eef8ab7cbc0df0cd381412f665c8ea8749a64aeb62c1fd448"}
Mar 09 19:08:30 crc kubenswrapper[4821]: I0309 19:08:30.875459 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.875440063 podStartE2EDuration="2.875440063s" podCreationTimestamp="2026-03-09 19:08:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:08:30.864864376 +0000 UTC m=+2648.026240242" watchObservedRunningTime="2026-03-09 19:08:30.875440063 +0000 UTC m=+2648.036815919"
Mar 09 19:08:30 crc kubenswrapper[4821]: I0309 19:08:30.901810 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-scheduler-0" podStartSLOduration=4.901790507 podStartE2EDuration="4.901790507s" podCreationTimestamp="2026-03-09 19:08:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:08:30.893286667 +0000 UTC m=+2648.054662523" watchObservedRunningTime="2026-03-09 19:08:30.901790507 +0000 UTC m=+2648.063166363"
Mar 09 19:08:31 crc kubenswrapper[4821]: I0309 19:08:31.209193 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3f20f5e1-98fe-4725-98bc-e68e6b5cca00/watcher-decision-engine/0.log"
Mar 09 19:08:31 crc kubenswrapper[4821]: I0309 19:08:31.860679 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"83dcc991-d43b-4801-be85-77ed3c084aa8","Type":"ContainerStarted","Data":"1b5b22dfae09a066aa70240263bf61c21e74015c4dc4c93a9984d6a3af47d017"}
Mar 09 19:08:31 crc kubenswrapper[4821]: I0309 19:08:31.861307 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:08:32 crc kubenswrapper[4821]: I0309 19:08:32.361282 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/cinder-scheduler-0"
Mar 09 19:08:32 crc kubenswrapper[4821]: I0309 19:08:32.366597 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/cinder-backup-0"
Mar 09 19:08:32 crc kubenswrapper[4821]: I0309 19:08:32.418187 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3f20f5e1-98fe-4725-98bc-e68e6b5cca00/watcher-decision-engine/0.log"
Mar 09 19:08:33 crc kubenswrapper[4821]: I0309 19:08:33.681143 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3f20f5e1-98fe-4725-98bc-e68e6b5cca00/watcher-decision-engine/0.log"
Mar 09 19:08:34 crc kubenswrapper[4821]: I0309 19:08:34.923562 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3f20f5e1-98fe-4725-98bc-e68e6b5cca00/watcher-decision-engine/0.log"
Mar 09 19:08:36 crc kubenswrapper[4821]: I0309 19:08:36.128214 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3f20f5e1-98fe-4725-98bc-e68e6b5cca00/watcher-decision-engine/0.log"
Mar 09 19:08:37 crc kubenswrapper[4821]: I0309 19:08:37.400843 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3f20f5e1-98fe-4725-98bc-e68e6b5cca00/watcher-decision-engine/0.log"
Mar 09 19:08:37 crc kubenswrapper[4821]: I0309 19:08:37.633501 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/cinder-backup-0"
Mar 09 19:08:37 crc kubenswrapper[4821]: I0309 19:08:37.639079 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/cinder-scheduler-0"
Mar 09 19:08:37 crc kubenswrapper[4821]: I0309 19:08:37.676447 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=8.193763858 podStartE2EDuration="12.676423366s" podCreationTimestamp="2026-03-09 19:08:25 +0000 UTC" firstStartedPulling="2026-03-09 19:08:26.849052362 +0000 UTC m=+2644.010428218" lastFinishedPulling="2026-03-09 19:08:31.33171185 +0000 UTC m=+2648.493087726" observedRunningTime="2026-03-09 19:08:31.891681937 +0000 UTC m=+2649.053057793" watchObservedRunningTime="2026-03-09 19:08:37.676423366 +0000 UTC m=+2654.837799222"
Mar 09 19:08:38 crc kubenswrapper[4821]: I0309 19:08:38.657158 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3f20f5e1-98fe-4725-98bc-e68e6b5cca00/watcher-decision-engine/0.log"
Mar 09 19:08:39 crc kubenswrapper[4821]: I0309 19:08:39.146485 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:08:39 crc kubenswrapper[4821]: I0309 19:08:39.175367 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:08:39 crc kubenswrapper[4821]: I0309 19:08:39.922963 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3f20f5e1-98fe-4725-98bc-e68e6b5cca00/watcher-decision-engine/0.log"
Mar 09 19:08:39 crc kubenswrapper[4821]: I0309 19:08:39.951576 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:08:39 crc kubenswrapper[4821]: I0309 19:08:39.975459 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:08:41 crc kubenswrapper[4821]: I0309 19:08:41.135866 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3f20f5e1-98fe-4725-98bc-e68e6b5cca00/watcher-decision-engine/0.log"
Mar 09 19:08:41 crc kubenswrapper[4821]: I0309 19:08:41.391673 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-db-sync-79dk2"]
Mar 09 19:08:41 crc kubenswrapper[4821]: I0309 19:08:41.402627 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-db-sync-79dk2"]
Mar 09 19:08:41 crc kubenswrapper[4821]: I0309 19:08:41.432337 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3f20f5e1-98fe-4725-98bc-e68e6b5cca00/watcher-decision-engine/0.log"
Mar 09 19:08:41 crc kubenswrapper[4821]: I0309 19:08:41.434384 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"]
Mar 09 19:08:41 crc kubenswrapper[4821]: I0309 19:08:41.434671 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-scheduler-0" podUID="4561a3ed-cd15-44d8-a86e-8d442a4f1d80" containerName="cinder-scheduler" containerID="cri-o://5929330278e814918751422a87b79546af727b9ea4929ad36d3fd16930056c98" gracePeriod=30
Mar 09 19:08:41 crc kubenswrapper[4821]: I0309 19:08:41.435018 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-scheduler-0" podUID="4561a3ed-cd15-44d8-a86e-8d442a4f1d80" containerName="probe" containerID="cri-o://cc32cb2cf304053eef8ab7cbc0df0cd381412f665c8ea8749a64aeb62c1fd448" gracePeriod=30
Mar 09 19:08:41 crc kubenswrapper[4821]: I0309 19:08:41.482769 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"]
Mar 09 19:08:41 crc kubenswrapper[4821]: I0309 19:08:41.483058 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-backup-0" podUID="044a622b-d62c-414f-afe7-48fb8b2bf7c7" containerName="cinder-backup" containerID="cri-o://4bf36e634d44a1fc99b65ae1cc16a3213298a2dbcfa05b999c4420c6f1502bc2" gracePeriod=30
Mar 09 19:08:41 crc kubenswrapper[4821]: I0309 19:08:41.483184 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-backup-0" podUID="044a622b-d62c-414f-afe7-48fb8b2bf7c7" containerName="probe" containerID="cri-o://1f2305258f988cc1de25c49dd91e866063d285c213d1e0bc28c945ea4653c76e" gracePeriod=30
Mar 09 19:08:41 crc kubenswrapper[4821]: I0309 19:08:41.516853 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-api-0"]
Mar 09 19:08:41 crc kubenswrapper[4821]: I0309 19:08:41.517351 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-api-0" podUID="af77949b-43e7-411f-81cc-455dcfd140fb" containerName="cinder-api-log" containerID="cri-o://bf321c9005f78e8b84c70deecd1c94e77d1f997e3832cb12e8143ee1f637a0d6" gracePeriod=30
Mar 09 19:08:41 crc kubenswrapper[4821]: I0309 19:08:41.517843 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-api-0" podUID="af77949b-43e7-411f-81cc-455dcfd140fb" containerName="cinder-api" containerID="cri-o://1ba729b63d04752442c6cc1d51e58a9e545957be3004b12637bd0680a024b545" gracePeriod=30
Mar 09 19:08:41 crc kubenswrapper[4821]: I0309 19:08:41.571570 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74096ec2-d80e-40cd-b06f-f71e4f8836b5" path="/var/lib/kubelet/pods/74096ec2-d80e-40cd-b06f-f71e4f8836b5/volumes"
Mar 09 19:08:41 crc kubenswrapper[4821]: I0309 19:08:41.572259 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder1259-account-delete-m8s5l"]
Mar 09 19:08:41 crc kubenswrapper[4821]: I0309 19:08:41.573198 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder1259-account-delete-m8s5l"]
Mar 09 19:08:41 crc kubenswrapper[4821]: I0309 19:08:41.573273 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder1259-account-delete-m8s5l"
Mar 09 19:08:41 crc kubenswrapper[4821]: I0309 19:08:41.636813 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6pl2\" (UniqueName: \"kubernetes.io/projected/8f118fd3-1526-4727-83ea-bc87283b7ad9-kube-api-access-h6pl2\") pod \"cinder1259-account-delete-m8s5l\" (UID: \"8f118fd3-1526-4727-83ea-bc87283b7ad9\") " pod="watcher-kuttl-default/cinder1259-account-delete-m8s5l"
Mar 09 19:08:41 crc kubenswrapper[4821]: I0309 19:08:41.636856 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f118fd3-1526-4727-83ea-bc87283b7ad9-operator-scripts\") pod \"cinder1259-account-delete-m8s5l\" (UID: \"8f118fd3-1526-4727-83ea-bc87283b7ad9\") " pod="watcher-kuttl-default/cinder1259-account-delete-m8s5l"
Mar 09 19:08:41 crc kubenswrapper[4821]: I0309 19:08:41.738254 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6pl2\" (UniqueName: \"kubernetes.io/projected/8f118fd3-1526-4727-83ea-bc87283b7ad9-kube-api-access-h6pl2\") pod \"cinder1259-account-delete-m8s5l\" (UID: \"8f118fd3-1526-4727-83ea-bc87283b7ad9\") " pod="watcher-kuttl-default/cinder1259-account-delete-m8s5l"
Mar 09 19:08:41 crc kubenswrapper[4821]: I0309 19:08:41.738580 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f118fd3-1526-4727-83ea-bc87283b7ad9-operator-scripts\") pod \"cinder1259-account-delete-m8s5l\" (UID: \"8f118fd3-1526-4727-83ea-bc87283b7ad9\") " pod="watcher-kuttl-default/cinder1259-account-delete-m8s5l"
Mar 09 19:08:41 crc kubenswrapper[4821]: I0309 19:08:41.739204 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f118fd3-1526-4727-83ea-bc87283b7ad9-operator-scripts\") pod \"cinder1259-account-delete-m8s5l\" (UID: \"8f118fd3-1526-4727-83ea-bc87283b7ad9\") " pod="watcher-kuttl-default/cinder1259-account-delete-m8s5l"
Mar 09 19:08:41 crc kubenswrapper[4821]: I0309 19:08:41.773409 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6pl2\" (UniqueName: \"kubernetes.io/projected/8f118fd3-1526-4727-83ea-bc87283b7ad9-kube-api-access-h6pl2\") pod \"cinder1259-account-delete-m8s5l\" (UID: \"8f118fd3-1526-4727-83ea-bc87283b7ad9\") " pod="watcher-kuttl-default/cinder1259-account-delete-m8s5l"
Mar 09 19:08:41 crc kubenswrapper[4821]: I0309 19:08:41.900270 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder1259-account-delete-m8s5l"
Mar 09 19:08:41 crc kubenswrapper[4821]: I0309 19:08:41.978099 4821 generic.go:334] "Generic (PLEG): container finished" podID="af77949b-43e7-411f-81cc-455dcfd140fb" containerID="bf321c9005f78e8b84c70deecd1c94e77d1f997e3832cb12e8143ee1f637a0d6" exitCode=143
Mar 09 19:08:41 crc kubenswrapper[4821]: I0309 19:08:41.978412 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"af77949b-43e7-411f-81cc-455dcfd140fb","Type":"ContainerDied","Data":"bf321c9005f78e8b84c70deecd1c94e77d1f997e3832cb12e8143ee1f637a0d6"}
Mar 09 19:08:42 crc kubenswrapper[4821]: I0309 19:08:42.419532 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder1259-account-delete-m8s5l"]
Mar 09 19:08:42 crc kubenswrapper[4821]: W0309 19:08:42.425561 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f118fd3_1526_4727_83ea_bc87283b7ad9.slice/crio-01d325a56c3559d6d0fea9f331d6a2479c6232174ae457ee84ffbc2db81eca30 WatchSource:0}: Error finding container 01d325a56c3559d6d0fea9f331d6a2479c6232174ae457ee84ffbc2db81eca30: Status 404 returned error can't find the container with id 01d325a56c3559d6d0fea9f331d6a2479c6232174ae457ee84ffbc2db81eca30
Mar 09 19:08:42 crc kubenswrapper[4821]: I0309 19:08:42.553477 4821 scope.go:117] "RemoveContainer" containerID="2e97656a0c5c8ccdf249fb34ee3c3f4cd8501de8fced67c6294f9be90f6444d8"
Mar 09 19:08:42 crc kubenswrapper[4821]: E0309 19:08:42.553760 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add"
Mar 09 19:08:42 crc kubenswrapper[4821]: I0309 19:08:42.606804 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3f20f5e1-98fe-4725-98bc-e68e6b5cca00/watcher-decision-engine/0.log"
Mar 09 19:08:42 crc kubenswrapper[4821]: I0309 19:08:42.987869 4821 generic.go:334] "Generic (PLEG): container finished" podID="4561a3ed-cd15-44d8-a86e-8d442a4f1d80" containerID="cc32cb2cf304053eef8ab7cbc0df0cd381412f665c8ea8749a64aeb62c1fd448" exitCode=0
Mar 09 19:08:42 crc kubenswrapper[4821]: I0309 19:08:42.987917 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"4561a3ed-cd15-44d8-a86e-8d442a4f1d80","Type":"ContainerDied","Data":"cc32cb2cf304053eef8ab7cbc0df0cd381412f665c8ea8749a64aeb62c1fd448"}
Mar 09 19:08:42 crc kubenswrapper[4821]: I0309 19:08:42.990670 4821 generic.go:334] "Generic (PLEG): container finished" podID="044a622b-d62c-414f-afe7-48fb8b2bf7c7" containerID="1f2305258f988cc1de25c49dd91e866063d285c213d1e0bc28c945ea4653c76e" exitCode=0
Mar 09 19:08:42 crc kubenswrapper[4821]: I0309 19:08:42.990713 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"044a622b-d62c-414f-afe7-48fb8b2bf7c7","Type":"ContainerDied","Data":"1f2305258f988cc1de25c49dd91e866063d285c213d1e0bc28c945ea4653c76e"}
Mar 09 19:08:42 crc kubenswrapper[4821]: I0309 19:08:42.992529 4821 generic.go:334] "Generic (PLEG): container finished" podID="8f118fd3-1526-4727-83ea-bc87283b7ad9" containerID="9c60eeb0f071ed90da162e6b986e295458fc758379a0eb53ba06953b189b837a" exitCode=0
Mar 09 19:08:42 crc kubenswrapper[4821]: I0309 19:08:42.992558 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder1259-account-delete-m8s5l" event={"ID":"8f118fd3-1526-4727-83ea-bc87283b7ad9","Type":"ContainerDied","Data":"9c60eeb0f071ed90da162e6b986e295458fc758379a0eb53ba06953b189b837a"}
Mar 09 19:08:42 crc kubenswrapper[4821]: I0309 19:08:42.992571 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder1259-account-delete-m8s5l" event={"ID":"8f118fd3-1526-4727-83ea-bc87283b7ad9","Type":"ContainerStarted","Data":"01d325a56c3559d6d0fea9f331d6a2479c6232174ae457ee84ffbc2db81eca30"}
Mar 09 19:08:43 crc kubenswrapper[4821]: I0309 19:08:43.222432 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Mar 09 19:08:43 crc kubenswrapper[4821]: I0309 19:08:43.222665 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="3f20f5e1-98fe-4725-98bc-e68e6b5cca00" containerName="watcher-decision-engine" containerID="cri-o://83b9f0c14ed16b9c35b651e6bbf38557b8ceb271448b546f3650d6ab9e5d3aab" gracePeriod=30
Mar 09 19:08:43 crc kubenswrapper[4821]: I0309 19:08:43.726002 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:08:43 crc kubenswrapper[4821]: I0309 19:08:43.726572 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="83dcc991-d43b-4801-be85-77ed3c084aa8" containerName="ceilometer-central-agent" containerID="cri-o://4bd24185f802ffa9226b501a19da137150fe7c7725bdd2b9ea4f3c489092eab1" gracePeriod=30
Mar 09 19:08:43 crc kubenswrapper[4821]: I0309 19:08:43.726695 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="83dcc991-d43b-4801-be85-77ed3c084aa8" containerName="proxy-httpd" containerID="cri-o://1b5b22dfae09a066aa70240263bf61c21e74015c4dc4c93a9984d6a3af47d017" gracePeriod=30
Mar 09 19:08:43 crc kubenswrapper[4821]: I0309 19:08:43.726748 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="83dcc991-d43b-4801-be85-77ed3c084aa8" containerName="sg-core" containerID="cri-o://7d79b3e2b83f64cf6c0ae33e04262e9b931c94179e2084b42d0d09d9fcaa43f1" gracePeriod=30
Mar 09 19:08:43 crc kubenswrapper[4821]: I0309 19:08:43.726791 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="83dcc991-d43b-4801-be85-77ed3c084aa8" containerName="ceilometer-notification-agent" containerID="cri-o://f4c21b06b5b219eb600a754873cd9d489a13a4721c7e4b76c2a020cc9720a8ae" gracePeriod=30
Mar 09 19:08:43 crc kubenswrapper[4821]: I0309 19:08:43.807061 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3f20f5e1-98fe-4725-98bc-e68e6b5cca00/watcher-decision-engine/0.log"
Mar 09 19:08:43 crc kubenswrapper[4821]: I0309 19:08:43.831148 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="83dcc991-d43b-4801-be85-77ed3c084aa8" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.1.0:3000/\": read tcp 10.217.0.2:57200->10.217.1.0:3000: read: connection reset by peer"
Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.003114 4821 generic.go:334] "Generic (PLEG): container finished" podID="83dcc991-d43b-4801-be85-77ed3c084aa8" containerID="1b5b22dfae09a066aa70240263bf61c21e74015c4dc4c93a9984d6a3af47d017" exitCode=0
Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.003144 4821 generic.go:334] "Generic (PLEG): container finished" podID="83dcc991-d43b-4801-be85-77ed3c084aa8" containerID="7d79b3e2b83f64cf6c0ae33e04262e9b931c94179e2084b42d0d09d9fcaa43f1" exitCode=2
Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.003183 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"83dcc991-d43b-4801-be85-77ed3c084aa8","Type":"ContainerDied","Data":"1b5b22dfae09a066aa70240263bf61c21e74015c4dc4c93a9984d6a3af47d017"}
Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.003235 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"83dcc991-d43b-4801-be85-77ed3c084aa8","Type":"ContainerDied","Data":"7d79b3e2b83f64cf6c0ae33e04262e9b931c94179e2084b42d0d09d9fcaa43f1"}
Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.365473 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder1259-account-delete-m8s5l"
Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.382268 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f118fd3-1526-4727-83ea-bc87283b7ad9-operator-scripts\") pod \"8f118fd3-1526-4727-83ea-bc87283b7ad9\" (UID: \"8f118fd3-1526-4727-83ea-bc87283b7ad9\") "
Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.382536 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6pl2\" (UniqueName: \"kubernetes.io/projected/8f118fd3-1526-4727-83ea-bc87283b7ad9-kube-api-access-h6pl2\") pod \"8f118fd3-1526-4727-83ea-bc87283b7ad9\" (UID: \"8f118fd3-1526-4727-83ea-bc87283b7ad9\") "
Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.383649 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f118fd3-1526-4727-83ea-bc87283b7ad9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f118fd3-1526-4727-83ea-bc87283b7ad9" (UID: "8f118fd3-1526-4727-83ea-bc87283b7ad9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.388876 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f118fd3-1526-4727-83ea-bc87283b7ad9-kube-api-access-h6pl2" (OuterVolumeSpecName: "kube-api-access-h6pl2") pod "8f118fd3-1526-4727-83ea-bc87283b7ad9" (UID: "8f118fd3-1526-4727-83ea-bc87283b7ad9"). InnerVolumeSpecName "kube-api-access-h6pl2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.483622 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6pl2\" (UniqueName: \"kubernetes.io/projected/8f118fd3-1526-4727-83ea-bc87283b7ad9-kube-api-access-h6pl2\") on node \"crc\" DevicePath \"\""
Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.483651 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f118fd3-1526-4727-83ea-bc87283b7ad9-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.484434 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.687489 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/83dcc991-d43b-4801-be85-77ed3c084aa8-sg-core-conf-yaml\") pod \"83dcc991-d43b-4801-be85-77ed3c084aa8\" (UID: \"83dcc991-d43b-4801-be85-77ed3c084aa8\") "
Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.687540 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83dcc991-d43b-4801-be85-77ed3c084aa8-config-data\") pod \"83dcc991-d43b-4801-be85-77ed3c084aa8\" (UID: \"83dcc991-d43b-4801-be85-77ed3c084aa8\") "
Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.687606 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/83dcc991-d43b-4801-be85-77ed3c084aa8-ceilometer-tls-certs\") pod \"83dcc991-d43b-4801-be85-77ed3c084aa8\" (UID: \"83dcc991-d43b-4801-be85-77ed3c084aa8\") "
Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.687634 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83dcc991-d43b-4801-be85-77ed3c084aa8-combined-ca-bundle\") pod \"83dcc991-d43b-4801-be85-77ed3c084aa8\" (UID: \"83dcc991-d43b-4801-be85-77ed3c084aa8\") " Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.687681 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83dcc991-d43b-4801-be85-77ed3c084aa8-scripts\") pod \"83dcc991-d43b-4801-be85-77ed3c084aa8\" (UID: \"83dcc991-d43b-4801-be85-77ed3c084aa8\") " Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.687720 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83dcc991-d43b-4801-be85-77ed3c084aa8-run-httpd\") pod \"83dcc991-d43b-4801-be85-77ed3c084aa8\" (UID: \"83dcc991-d43b-4801-be85-77ed3c084aa8\") " Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.687795 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hb68\" (UniqueName: \"kubernetes.io/projected/83dcc991-d43b-4801-be85-77ed3c084aa8-kube-api-access-5hb68\") pod \"83dcc991-d43b-4801-be85-77ed3c084aa8\" (UID: \"83dcc991-d43b-4801-be85-77ed3c084aa8\") " Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.687875 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83dcc991-d43b-4801-be85-77ed3c084aa8-log-httpd\") pod \"83dcc991-d43b-4801-be85-77ed3c084aa8\" (UID: \"83dcc991-d43b-4801-be85-77ed3c084aa8\") " Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.688564 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83dcc991-d43b-4801-be85-77ed3c084aa8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "83dcc991-d43b-4801-be85-77ed3c084aa8" (UID: "83dcc991-d43b-4801-be85-77ed3c084aa8"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.688579 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83dcc991-d43b-4801-be85-77ed3c084aa8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "83dcc991-d43b-4801-be85-77ed3c084aa8" (UID: "83dcc991-d43b-4801-be85-77ed3c084aa8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.691474 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83dcc991-d43b-4801-be85-77ed3c084aa8-scripts" (OuterVolumeSpecName: "scripts") pod "83dcc991-d43b-4801-be85-77ed3c084aa8" (UID: "83dcc991-d43b-4801-be85-77ed3c084aa8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.692183 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83dcc991-d43b-4801-be85-77ed3c084aa8-kube-api-access-5hb68" (OuterVolumeSpecName: "kube-api-access-5hb68") pod "83dcc991-d43b-4801-be85-77ed3c084aa8" (UID: "83dcc991-d43b-4801-be85-77ed3c084aa8"). InnerVolumeSpecName "kube-api-access-5hb68". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.737681 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83dcc991-d43b-4801-be85-77ed3c084aa8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "83dcc991-d43b-4801-be85-77ed3c084aa8" (UID: "83dcc991-d43b-4801-be85-77ed3c084aa8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.752064 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83dcc991-d43b-4801-be85-77ed3c084aa8-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "83dcc991-d43b-4801-be85-77ed3c084aa8" (UID: "83dcc991-d43b-4801-be85-77ed3c084aa8"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.760719 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83dcc991-d43b-4801-be85-77ed3c084aa8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83dcc991-d43b-4801-be85-77ed3c084aa8" (UID: "83dcc991-d43b-4801-be85-77ed3c084aa8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.781544 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83dcc991-d43b-4801-be85-77ed3c084aa8-config-data" (OuterVolumeSpecName: "config-data") pod "83dcc991-d43b-4801-be85-77ed3c084aa8" (UID: "83dcc991-d43b-4801-be85-77ed3c084aa8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.790249 4821 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/83dcc991-d43b-4801-be85-77ed3c084aa8-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.790284 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83dcc991-d43b-4801-be85-77ed3c084aa8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.790301 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83dcc991-d43b-4801-be85-77ed3c084aa8-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.790312 4821 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83dcc991-d43b-4801-be85-77ed3c084aa8-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.790341 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hb68\" (UniqueName: \"kubernetes.io/projected/83dcc991-d43b-4801-be85-77ed3c084aa8-kube-api-access-5hb68\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.790353 4821 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/83dcc991-d43b-4801-be85-77ed3c084aa8-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.790364 4821 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/83dcc991-d43b-4801-be85-77ed3c084aa8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:44 crc kubenswrapper[4821]: I0309 19:08:44.790375 4821 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83dcc991-d43b-4801-be85-77ed3c084aa8-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.016418 4821 generic.go:334] "Generic (PLEG): container finished" podID="83dcc991-d43b-4801-be85-77ed3c084aa8" containerID="f4c21b06b5b219eb600a754873cd9d489a13a4721c7e4b76c2a020cc9720a8ae" exitCode=0 Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.016451 4821 generic.go:334] "Generic (PLEG): container finished" podID="83dcc991-d43b-4801-be85-77ed3c084aa8" containerID="4bd24185f802ffa9226b501a19da137150fe7c7725bdd2b9ea4f3c489092eab1" exitCode=0 Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.016492 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"83dcc991-d43b-4801-be85-77ed3c084aa8","Type":"ContainerDied","Data":"f4c21b06b5b219eb600a754873cd9d489a13a4721c7e4b76c2a020cc9720a8ae"} Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.016522 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"83dcc991-d43b-4801-be85-77ed3c084aa8","Type":"ContainerDied","Data":"4bd24185f802ffa9226b501a19da137150fe7c7725bdd2b9ea4f3c489092eab1"} Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.016532 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"83dcc991-d43b-4801-be85-77ed3c084aa8","Type":"ContainerDied","Data":"75c6069233b04f2ec0fc2a2acf2d694069a6ff5b817f6aa5d0b4f9af005a3ad2"} Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.016550 4821 scope.go:117] "RemoveContainer" containerID="1b5b22dfae09a066aa70240263bf61c21e74015c4dc4c93a9984d6a3af47d017" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.016679 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.024272 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder1259-account-delete-m8s5l" event={"ID":"8f118fd3-1526-4727-83ea-bc87283b7ad9","Type":"ContainerDied","Data":"01d325a56c3559d6d0fea9f331d6a2479c6232174ae457ee84ffbc2db81eca30"} Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.024335 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01d325a56c3559d6d0fea9f331d6a2479c6232174ae457ee84ffbc2db81eca30" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.024423 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder1259-account-delete-m8s5l" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.031486 4821 generic.go:334] "Generic (PLEG): container finished" podID="af77949b-43e7-411f-81cc-455dcfd140fb" containerID="1ba729b63d04752442c6cc1d51e58a9e545957be3004b12637bd0680a024b545" exitCode=0 Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.031531 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"af77949b-43e7-411f-81cc-455dcfd140fb","Type":"ContainerDied","Data":"1ba729b63d04752442c6cc1d51e58a9e545957be3004b12637bd0680a024b545"} Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.031558 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"af77949b-43e7-411f-81cc-455dcfd140fb","Type":"ContainerDied","Data":"b42088e5c92ec667fd620e0a8113c134699916436d79518dd341d198b95d6c4d"} Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.031570 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b42088e5c92ec667fd620e0a8113c134699916436d79518dd341d198b95d6c4d" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.040202 4821 log.go:25] "Finished 
parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3f20f5e1-98fe-4725-98bc-e68e6b5cca00/watcher-decision-engine/0.log" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.075140 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.075301 4821 scope.go:117] "RemoveContainer" containerID="7d79b3e2b83f64cf6c0ae33e04262e9b931c94179e2084b42d0d09d9fcaa43f1" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.095981 4821 scope.go:117] "RemoveContainer" containerID="f4c21b06b5b219eb600a754873cd9d489a13a4721c7e4b76c2a020cc9720a8ae" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.101597 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-public-tls-certs\") pod \"af77949b-43e7-411f-81cc-455dcfd140fb\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.101638 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/af77949b-43e7-411f-81cc-455dcfd140fb-etc-machine-id\") pod \"af77949b-43e7-411f-81cc-455dcfd140fb\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.101678 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-config-data-custom\") pod \"af77949b-43e7-411f-81cc-455dcfd140fb\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.101735 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/af77949b-43e7-411f-81cc-455dcfd140fb-logs\") pod \"af77949b-43e7-411f-81cc-455dcfd140fb\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.101754 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76g8j\" (UniqueName: \"kubernetes.io/projected/af77949b-43e7-411f-81cc-455dcfd140fb-kube-api-access-76g8j\") pod \"af77949b-43e7-411f-81cc-455dcfd140fb\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.101770 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-internal-tls-certs\") pod \"af77949b-43e7-411f-81cc-455dcfd140fb\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.101785 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-scripts\") pod \"af77949b-43e7-411f-81cc-455dcfd140fb\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.101843 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-combined-ca-bundle\") pod \"af77949b-43e7-411f-81cc-455dcfd140fb\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.101889 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-cert-memcached-mtls\") pod \"af77949b-43e7-411f-81cc-455dcfd140fb\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 
19:08:45.101925 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-config-data\") pod \"af77949b-43e7-411f-81cc-455dcfd140fb\" (UID: \"af77949b-43e7-411f-81cc-455dcfd140fb\") " Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.102535 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af77949b-43e7-411f-81cc-455dcfd140fb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "af77949b-43e7-411f-81cc-455dcfd140fb" (UID: "af77949b-43e7-411f-81cc-455dcfd140fb"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.105538 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af77949b-43e7-411f-81cc-455dcfd140fb-logs" (OuterVolumeSpecName: "logs") pod "af77949b-43e7-411f-81cc-455dcfd140fb" (UID: "af77949b-43e7-411f-81cc-455dcfd140fb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.107729 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "af77949b-43e7-411f-81cc-455dcfd140fb" (UID: "af77949b-43e7-411f-81cc-455dcfd140fb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.108580 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-scripts" (OuterVolumeSpecName: "scripts") pod "af77949b-43e7-411f-81cc-455dcfd140fb" (UID: "af77949b-43e7-411f-81cc-455dcfd140fb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.115501 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.119617 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af77949b-43e7-411f-81cc-455dcfd140fb-kube-api-access-76g8j" (OuterVolumeSpecName: "kube-api-access-76g8j") pod "af77949b-43e7-411f-81cc-455dcfd140fb" (UID: "af77949b-43e7-411f-81cc-455dcfd140fb"). InnerVolumeSpecName "kube-api-access-76g8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.135966 4821 scope.go:117] "RemoveContainer" containerID="4bd24185f802ffa9226b501a19da137150fe7c7725bdd2b9ea4f3c489092eab1" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.146003 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.156361 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:08:45 crc kubenswrapper[4821]: E0309 19:08:45.156918 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83dcc991-d43b-4801-be85-77ed3c084aa8" containerName="ceilometer-notification-agent" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.156989 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="83dcc991-d43b-4801-be85-77ed3c084aa8" containerName="ceilometer-notification-agent" Mar 09 19:08:45 crc kubenswrapper[4821]: E0309 19:08:45.157369 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f118fd3-1526-4727-83ea-bc87283b7ad9" containerName="mariadb-account-delete" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.157438 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f118fd3-1526-4727-83ea-bc87283b7ad9" 
containerName="mariadb-account-delete" Mar 09 19:08:45 crc kubenswrapper[4821]: E0309 19:08:45.157493 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af77949b-43e7-411f-81cc-455dcfd140fb" containerName="cinder-api" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.157541 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="af77949b-43e7-411f-81cc-455dcfd140fb" containerName="cinder-api" Mar 09 19:08:45 crc kubenswrapper[4821]: E0309 19:08:45.157590 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83dcc991-d43b-4801-be85-77ed3c084aa8" containerName="sg-core" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.157636 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="83dcc991-d43b-4801-be85-77ed3c084aa8" containerName="sg-core" Mar 09 19:08:45 crc kubenswrapper[4821]: E0309 19:08:45.157690 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83dcc991-d43b-4801-be85-77ed3c084aa8" containerName="proxy-httpd" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.157743 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="83dcc991-d43b-4801-be85-77ed3c084aa8" containerName="proxy-httpd" Mar 09 19:08:45 crc kubenswrapper[4821]: E0309 19:08:45.157794 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83dcc991-d43b-4801-be85-77ed3c084aa8" containerName="ceilometer-central-agent" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.157840 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="83dcc991-d43b-4801-be85-77ed3c084aa8" containerName="ceilometer-central-agent" Mar 09 19:08:45 crc kubenswrapper[4821]: E0309 19:08:45.157894 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af77949b-43e7-411f-81cc-455dcfd140fb" containerName="cinder-api-log" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.157940 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="af77949b-43e7-411f-81cc-455dcfd140fb" 
containerName="cinder-api-log" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.158135 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="af77949b-43e7-411f-81cc-455dcfd140fb" containerName="cinder-api-log" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.161368 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="83dcc991-d43b-4801-be85-77ed3c084aa8" containerName="proxy-httpd" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.161529 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="83dcc991-d43b-4801-be85-77ed3c084aa8" containerName="ceilometer-central-agent" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.161598 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="af77949b-43e7-411f-81cc-455dcfd140fb" containerName="cinder-api" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.161649 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f118fd3-1526-4727-83ea-bc87283b7ad9" containerName="mariadb-account-delete" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.161710 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="83dcc991-d43b-4801-be85-77ed3c084aa8" containerName="ceilometer-notification-agent" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.161766 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="83dcc991-d43b-4801-be85-77ed3c084aa8" containerName="sg-core" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.163334 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.166646 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.166840 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.166886 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.167746 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.170006 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af77949b-43e7-411f-81cc-455dcfd140fb" (UID: "af77949b-43e7-411f-81cc-455dcfd140fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.171787 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-config-data" (OuterVolumeSpecName: "config-data") pod "af77949b-43e7-411f-81cc-455dcfd140fb" (UID: "af77949b-43e7-411f-81cc-455dcfd140fb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.196722 4821 scope.go:117] "RemoveContainer" containerID="1b5b22dfae09a066aa70240263bf61c21e74015c4dc4c93a9984d6a3af47d017" Mar 09 19:08:45 crc kubenswrapper[4821]: E0309 19:08:45.197601 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b5b22dfae09a066aa70240263bf61c21e74015c4dc4c93a9984d6a3af47d017\": container with ID starting with 1b5b22dfae09a066aa70240263bf61c21e74015c4dc4c93a9984d6a3af47d017 not found: ID does not exist" containerID="1b5b22dfae09a066aa70240263bf61c21e74015c4dc4c93a9984d6a3af47d017" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.197637 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b5b22dfae09a066aa70240263bf61c21e74015c4dc4c93a9984d6a3af47d017"} err="failed to get container status \"1b5b22dfae09a066aa70240263bf61c21e74015c4dc4c93a9984d6a3af47d017\": rpc error: code = NotFound desc = could not find container \"1b5b22dfae09a066aa70240263bf61c21e74015c4dc4c93a9984d6a3af47d017\": container with ID starting with 1b5b22dfae09a066aa70240263bf61c21e74015c4dc4c93a9984d6a3af47d017 not found: ID does not exist" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.197687 4821 scope.go:117] "RemoveContainer" containerID="7d79b3e2b83f64cf6c0ae33e04262e9b931c94179e2084b42d0d09d9fcaa43f1" Mar 09 19:08:45 crc kubenswrapper[4821]: E0309 19:08:45.197949 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d79b3e2b83f64cf6c0ae33e04262e9b931c94179e2084b42d0d09d9fcaa43f1\": container with ID starting with 7d79b3e2b83f64cf6c0ae33e04262e9b931c94179e2084b42d0d09d9fcaa43f1 not found: ID does not exist" containerID="7d79b3e2b83f64cf6c0ae33e04262e9b931c94179e2084b42d0d09d9fcaa43f1" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.197998 
4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d79b3e2b83f64cf6c0ae33e04262e9b931c94179e2084b42d0d09d9fcaa43f1"} err="failed to get container status \"7d79b3e2b83f64cf6c0ae33e04262e9b931c94179e2084b42d0d09d9fcaa43f1\": rpc error: code = NotFound desc = could not find container \"7d79b3e2b83f64cf6c0ae33e04262e9b931c94179e2084b42d0d09d9fcaa43f1\": container with ID starting with 7d79b3e2b83f64cf6c0ae33e04262e9b931c94179e2084b42d0d09d9fcaa43f1 not found: ID does not exist" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.198015 4821 scope.go:117] "RemoveContainer" containerID="f4c21b06b5b219eb600a754873cd9d489a13a4721c7e4b76c2a020cc9720a8ae" Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.199246 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "af77949b-43e7-411f-81cc-455dcfd140fb" (UID: "af77949b-43e7-411f-81cc-455dcfd140fb"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:08:45 crc kubenswrapper[4821]: E0309 19:08:45.199247 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4c21b06b5b219eb600a754873cd9d489a13a4721c7e4b76c2a020cc9720a8ae\": container with ID starting with f4c21b06b5b219eb600a754873cd9d489a13a4721c7e4b76c2a020cc9720a8ae not found: ID does not exist" containerID="f4c21b06b5b219eb600a754873cd9d489a13a4721c7e4b76c2a020cc9720a8ae"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.199370 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4c21b06b5b219eb600a754873cd9d489a13a4721c7e4b76c2a020cc9720a8ae"} err="failed to get container status \"f4c21b06b5b219eb600a754873cd9d489a13a4721c7e4b76c2a020cc9720a8ae\": rpc error: code = NotFound desc = could not find container \"f4c21b06b5b219eb600a754873cd9d489a13a4721c7e4b76c2a020cc9720a8ae\": container with ID starting with f4c21b06b5b219eb600a754873cd9d489a13a4721c7e4b76c2a020cc9720a8ae not found: ID does not exist"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.199395 4821 scope.go:117] "RemoveContainer" containerID="4bd24185f802ffa9226b501a19da137150fe7c7725bdd2b9ea4f3c489092eab1"
Mar 09 19:08:45 crc kubenswrapper[4821]: E0309 19:08:45.199666 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bd24185f802ffa9226b501a19da137150fe7c7725bdd2b9ea4f3c489092eab1\": container with ID starting with 4bd24185f802ffa9226b501a19da137150fe7c7725bdd2b9ea4f3c489092eab1 not found: ID does not exist" containerID="4bd24185f802ffa9226b501a19da137150fe7c7725bdd2b9ea4f3c489092eab1"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.199746 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bd24185f802ffa9226b501a19da137150fe7c7725bdd2b9ea4f3c489092eab1"} err="failed to get container status \"4bd24185f802ffa9226b501a19da137150fe7c7725bdd2b9ea4f3c489092eab1\": rpc error: code = NotFound desc = could not find container \"4bd24185f802ffa9226b501a19da137150fe7c7725bdd2b9ea4f3c489092eab1\": container with ID starting with 4bd24185f802ffa9226b501a19da137150fe7c7725bdd2b9ea4f3c489092eab1 not found: ID does not exist"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.199888 4821 scope.go:117] "RemoveContainer" containerID="1b5b22dfae09a066aa70240263bf61c21e74015c4dc4c93a9984d6a3af47d017"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.200706 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b5b22dfae09a066aa70240263bf61c21e74015c4dc4c93a9984d6a3af47d017"} err="failed to get container status \"1b5b22dfae09a066aa70240263bf61c21e74015c4dc4c93a9984d6a3af47d017\": rpc error: code = NotFound desc = could not find container \"1b5b22dfae09a066aa70240263bf61c21e74015c4dc4c93a9984d6a3af47d017\": container with ID starting with 1b5b22dfae09a066aa70240263bf61c21e74015c4dc4c93a9984d6a3af47d017 not found: ID does not exist"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.200732 4821 scope.go:117] "RemoveContainer" containerID="7d79b3e2b83f64cf6c0ae33e04262e9b931c94179e2084b42d0d09d9fcaa43f1"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.203449 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7840ceab-38ab-461f-b00b-8e136c5a4c23-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.203600 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7840ceab-38ab-461f-b00b-8e136c5a4c23-run-httpd\") pod \"ceilometer-0\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.203670 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7840ceab-38ab-461f-b00b-8e136c5a4c23-config-data\") pod \"ceilometer-0\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.203767 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7840ceab-38ab-461f-b00b-8e136c5a4c23-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.203855 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7840ceab-38ab-461f-b00b-8e136c5a4c23-log-httpd\") pod \"ceilometer-0\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.203937 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7840ceab-38ab-461f-b00b-8e136c5a4c23-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.204008 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7840ceab-38ab-461f-b00b-8e136c5a4c23-scripts\") pod \"ceilometer-0\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.204087 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5np98\" (UniqueName: \"kubernetes.io/projected/7840ceab-38ab-461f-b00b-8e136c5a4c23-kube-api-access-5np98\") pod \"ceilometer-0\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.204230 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-config-data\") on node \"crc\" DevicePath \"\""
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.204293 4821 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-public-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.204468 4821 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/af77949b-43e7-411f-81cc-455dcfd140fb-etc-machine-id\") on node \"crc\" DevicePath \"\""
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.204771 4821 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-config-data-custom\") on node \"crc\" DevicePath \"\""
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.204838 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af77949b-43e7-411f-81cc-455dcfd140fb-logs\") on node \"crc\" DevicePath \"\""
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.204891 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76g8j\" (UniqueName: \"kubernetes.io/projected/af77949b-43e7-411f-81cc-455dcfd140fb-kube-api-access-76g8j\") on node \"crc\" DevicePath \"\""
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.205046 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d79b3e2b83f64cf6c0ae33e04262e9b931c94179e2084b42d0d09d9fcaa43f1"} err="failed to get container status \"7d79b3e2b83f64cf6c0ae33e04262e9b931c94179e2084b42d0d09d9fcaa43f1\": rpc error: code = NotFound desc = could not find container \"7d79b3e2b83f64cf6c0ae33e04262e9b931c94179e2084b42d0d09d9fcaa43f1\": container with ID starting with 7d79b3e2b83f64cf6c0ae33e04262e9b931c94179e2084b42d0d09d9fcaa43f1 not found: ID does not exist"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.205131 4821 scope.go:117] "RemoveContainer" containerID="f4c21b06b5b219eb600a754873cd9d489a13a4721c7e4b76c2a020cc9720a8ae"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.205238 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-scripts\") on node \"crc\" DevicePath \"\""
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.205279 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.205834 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4c21b06b5b219eb600a754873cd9d489a13a4721c7e4b76c2a020cc9720a8ae"} err="failed to get container status \"f4c21b06b5b219eb600a754873cd9d489a13a4721c7e4b76c2a020cc9720a8ae\": rpc error: code = NotFound desc = could not find container \"f4c21b06b5b219eb600a754873cd9d489a13a4721c7e4b76c2a020cc9720a8ae\": container with ID starting with f4c21b06b5b219eb600a754873cd9d489a13a4721c7e4b76c2a020cc9720a8ae not found: ID does not exist"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.205872 4821 scope.go:117] "RemoveContainer" containerID="4bd24185f802ffa9226b501a19da137150fe7c7725bdd2b9ea4f3c489092eab1"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.206163 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bd24185f802ffa9226b501a19da137150fe7c7725bdd2b9ea4f3c489092eab1"} err="failed to get container status \"4bd24185f802ffa9226b501a19da137150fe7c7725bdd2b9ea4f3c489092eab1\": rpc error: code = NotFound desc = could not find container \"4bd24185f802ffa9226b501a19da137150fe7c7725bdd2b9ea4f3c489092eab1\": container with ID starting with 4bd24185f802ffa9226b501a19da137150fe7c7725bdd2b9ea4f3c489092eab1 not found: ID does not exist"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.212927 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "af77949b-43e7-411f-81cc-455dcfd140fb" (UID: "af77949b-43e7-411f-81cc-455dcfd140fb"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.222087 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "af77949b-43e7-411f-81cc-455dcfd140fb" (UID: "af77949b-43e7-411f-81cc-455dcfd140fb"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.307317 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7840ceab-38ab-461f-b00b-8e136c5a4c23-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.307391 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7840ceab-38ab-461f-b00b-8e136c5a4c23-run-httpd\") pod \"ceilometer-0\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.307422 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7840ceab-38ab-461f-b00b-8e136c5a4c23-config-data\") pod \"ceilometer-0\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.307503 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7840ceab-38ab-461f-b00b-8e136c5a4c23-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.307541 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7840ceab-38ab-461f-b00b-8e136c5a4c23-log-httpd\") pod \"ceilometer-0\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.307581 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7840ceab-38ab-461f-b00b-8e136c5a4c23-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.307618 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7840ceab-38ab-461f-b00b-8e136c5a4c23-scripts\") pod \"ceilometer-0\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.307653 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5np98\" (UniqueName: \"kubernetes.io/projected/7840ceab-38ab-461f-b00b-8e136c5a4c23-kube-api-access-5np98\") pod \"ceilometer-0\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.307765 4821 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.307782 4821 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/af77949b-43e7-411f-81cc-455dcfd140fb-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.307929 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7840ceab-38ab-461f-b00b-8e136c5a4c23-run-httpd\") pod \"ceilometer-0\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.308247 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7840ceab-38ab-461f-b00b-8e136c5a4c23-log-httpd\") pod \"ceilometer-0\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.310869 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7840ceab-38ab-461f-b00b-8e136c5a4c23-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.311835 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7840ceab-38ab-461f-b00b-8e136c5a4c23-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.311949 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7840ceab-38ab-461f-b00b-8e136c5a4c23-config-data\") pod \"ceilometer-0\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.312362 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7840ceab-38ab-461f-b00b-8e136c5a4c23-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.312596 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7840ceab-38ab-461f-b00b-8e136c5a4c23-scripts\") pod \"ceilometer-0\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.329041 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5np98\" (UniqueName: \"kubernetes.io/projected/7840ceab-38ab-461f-b00b-8e136c5a4c23-kube-api-access-5np98\") pod \"ceilometer-0\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.506294 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:08:45 crc kubenswrapper[4821]: I0309 19:08:45.566308 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83dcc991-d43b-4801-be85-77ed3c084aa8" path="/var/lib/kubelet/pods/83dcc991-d43b-4801-be85-77ed3c084aa8/volumes"
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.005679 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.048830 4821 generic.go:334] "Generic (PLEG): container finished" podID="4561a3ed-cd15-44d8-a86e-8d442a4f1d80" containerID="5929330278e814918751422a87b79546af727b9ea4929ad36d3fd16930056c98" exitCode=0
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.048909 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"4561a3ed-cd15-44d8-a86e-8d442a4f1d80","Type":"ContainerDied","Data":"5929330278e814918751422a87b79546af727b9ea4929ad36d3fd16930056c98"}
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.053200 4821 generic.go:334] "Generic (PLEG): container finished" podID="044a622b-d62c-414f-afe7-48fb8b2bf7c7" containerID="4bf36e634d44a1fc99b65ae1cc16a3213298a2dbcfa05b999c4420c6f1502bc2" exitCode=0
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.053269 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"044a622b-d62c-414f-afe7-48fb8b2bf7c7","Type":"ContainerDied","Data":"4bf36e634d44a1fc99b65ae1cc16a3213298a2dbcfa05b999c4420c6f1502bc2"}
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.054281 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"7840ceab-38ab-461f-b00b-8e136c5a4c23","Type":"ContainerStarted","Data":"849c6ac0cdb36fa68f24fa26502e30b06000d6bc2b1452ad5872c7f9c6b91baa"}
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.055073 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0"
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.082516 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-api-0"]
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.087355 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-api-0"]
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.250202 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_3f20f5e1-98fe-4725-98bc-e68e6b5cca00/watcher-decision-engine/0.log"
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.276226 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0"
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.281695 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0"
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.330470 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-config-data\") pod \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\" (UID: \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\") "
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.330507 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-var-locks-cinder\") pod \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") "
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.330553 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/044a622b-d62c-414f-afe7-48fb8b2bf7c7-config-data\") pod \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") "
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.330573 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-scripts\") pod \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\" (UID: \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\") "
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.330588 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/044a622b-d62c-414f-afe7-48fb8b2bf7c7-cert-memcached-mtls\") pod \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") "
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.330618 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/044a622b-d62c-414f-afe7-48fb8b2bf7c7-scripts\") pod \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") "
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.330620 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "044a622b-d62c-414f-afe7-48fb8b2bf7c7" (UID: "044a622b-d62c-414f-afe7-48fb8b2bf7c7"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.330637 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-config-data-custom\") pod \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\" (UID: \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\") "
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.330685 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-etc-nvme\") pod \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") "
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.330711 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-sys\") pod \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") "
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.330807 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-dev\") pod \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") "
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.330833 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-etc-iscsi\") pod \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") "
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.330855 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-var-locks-brick\") pod \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") "
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.330878 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qlv9\" (UniqueName: \"kubernetes.io/projected/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-kube-api-access-7qlv9\") pod \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\" (UID: \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\") "
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.330908 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-run\") pod \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") "
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.330937 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-combined-ca-bundle\") pod \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\" (UID: \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\") "
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.330973 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-cert-memcached-mtls\") pod \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\" (UID: \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\") "
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.331009 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg8fz\" (UniqueName: \"kubernetes.io/projected/044a622b-d62c-414f-afe7-48fb8b2bf7c7-kube-api-access-wg8fz\") pod \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") "
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.331028 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044a622b-d62c-414f-afe7-48fb8b2bf7c7-combined-ca-bundle\") pod \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") "
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.331049 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-lib-modules\") pod \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") "
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.331070 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-etc-machine-id\") pod \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\" (UID: \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\") "
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.331091 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-var-lib-cinder\") pod \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") "
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.331123 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/044a622b-d62c-414f-afe7-48fb8b2bf7c7-config-data-custom\") pod \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") "
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.331151 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-run" (OuterVolumeSpecName: "run") pod "044a622b-d62c-414f-afe7-48fb8b2bf7c7" (UID: "044a622b-d62c-414f-afe7-48fb8b2bf7c7"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.331154 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-etc-machine-id\") pod \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\" (UID: \"044a622b-d62c-414f-afe7-48fb8b2bf7c7\") "
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.331179 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "044a622b-d62c-414f-afe7-48fb8b2bf7c7" (UID: "044a622b-d62c-414f-afe7-48fb8b2bf7c7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.331502 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "044a622b-d62c-414f-afe7-48fb8b2bf7c7" (UID: "044a622b-d62c-414f-afe7-48fb8b2bf7c7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.332414 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "044a622b-d62c-414f-afe7-48fb8b2bf7c7" (UID: "044a622b-d62c-414f-afe7-48fb8b2bf7c7"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.332462 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4561a3ed-cd15-44d8-a86e-8d442a4f1d80" (UID: "4561a3ed-cd15-44d8-a86e-8d442a4f1d80"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.332492 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "044a622b-d62c-414f-afe7-48fb8b2bf7c7" (UID: "044a622b-d62c-414f-afe7-48fb8b2bf7c7"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.332494 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-dev" (OuterVolumeSpecName: "dev") pod "044a622b-d62c-414f-afe7-48fb8b2bf7c7" (UID: "044a622b-d62c-414f-afe7-48fb8b2bf7c7"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.332510 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "044a622b-d62c-414f-afe7-48fb8b2bf7c7" (UID: "044a622b-d62c-414f-afe7-48fb8b2bf7c7"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.332561 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "044a622b-d62c-414f-afe7-48fb8b2bf7c7" (UID: "044a622b-d62c-414f-afe7-48fb8b2bf7c7"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.332585 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-sys" (OuterVolumeSpecName: "sys") pod "044a622b-d62c-414f-afe7-48fb8b2bf7c7" (UID: "044a622b-d62c-414f-afe7-48fb8b2bf7c7"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.332791 4821 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-var-locks-cinder\") on node \"crc\" DevicePath \"\""
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.332820 4821 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-run\") on node \"crc\" DevicePath \"\""
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.332837 4821 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-lib-modules\") on node \"crc\" DevicePath \"\""
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.332855 4821 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-etc-machine-id\") on node \"crc\" DevicePath \"\""
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.335844 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044a622b-d62c-414f-afe7-48fb8b2bf7c7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "044a622b-d62c-414f-afe7-48fb8b2bf7c7" (UID: "044a622b-d62c-414f-afe7-48fb8b2bf7c7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.336220 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4561a3ed-cd15-44d8-a86e-8d442a4f1d80" (UID: "4561a3ed-cd15-44d8-a86e-8d442a4f1d80"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.346557 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/044a622b-d62c-414f-afe7-48fb8b2bf7c7-kube-api-access-wg8fz" (OuterVolumeSpecName: "kube-api-access-wg8fz") pod "044a622b-d62c-414f-afe7-48fb8b2bf7c7" (UID: "044a622b-d62c-414f-afe7-48fb8b2bf7c7"). InnerVolumeSpecName "kube-api-access-wg8fz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.346968 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-kube-api-access-7qlv9" (OuterVolumeSpecName: "kube-api-access-7qlv9") pod "4561a3ed-cd15-44d8-a86e-8d442a4f1d80" (UID: "4561a3ed-cd15-44d8-a86e-8d442a4f1d80"). InnerVolumeSpecName "kube-api-access-7qlv9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.346987 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-scripts" (OuterVolumeSpecName: "scripts") pod "4561a3ed-cd15-44d8-a86e-8d442a4f1d80" (UID: "4561a3ed-cd15-44d8-a86e-8d442a4f1d80"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.355573 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044a622b-d62c-414f-afe7-48fb8b2bf7c7-scripts" (OuterVolumeSpecName: "scripts") pod "044a622b-d62c-414f-afe7-48fb8b2bf7c7" (UID: "044a622b-d62c-414f-afe7-48fb8b2bf7c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.398664 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044a622b-d62c-414f-afe7-48fb8b2bf7c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "044a622b-d62c-414f-afe7-48fb8b2bf7c7" (UID: "044a622b-d62c-414f-afe7-48fb8b2bf7c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.405403 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4561a3ed-cd15-44d8-a86e-8d442a4f1d80" (UID: "4561a3ed-cd15-44d8-a86e-8d442a4f1d80"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.433906 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-config-data" (OuterVolumeSpecName: "config-data") pod "4561a3ed-cd15-44d8-a86e-8d442a4f1d80" (UID: "4561a3ed-cd15-44d8-a86e-8d442a4f1d80"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.434066 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-config-data\") pod \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\" (UID: \"4561a3ed-cd15-44d8-a86e-8d442a4f1d80\") "
Mar 09 19:08:46 crc kubenswrapper[4821]: W0309 19:08:46.434171 4821 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/4561a3ed-cd15-44d8-a86e-8d442a4f1d80/volumes/kubernetes.io~secret/config-data
Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.434182 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-config-data" (OuterVolumeSpecName: "config-data") pod "4561a3ed-cd15-44d8-a86e-8d442a4f1d80" (UID: "4561a3ed-cd15-44d8-a86e-8d442a4f1d80"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.434842 4821 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.434873 4821 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.434893 4821 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/044a622b-d62c-414f-afe7-48fb8b2bf7c7-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.434913 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.434929 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.434945 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/044a622b-d62c-414f-afe7-48fb8b2bf7c7-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.434961 4821 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.434978 4821 reconciler_common.go:293] "Volume 
detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-sys\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.434996 4821 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-etc-nvme\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.435011 4821 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-dev\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.435026 4821 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-etc-iscsi\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.435044 4821 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/044a622b-d62c-414f-afe7-48fb8b2bf7c7-var-locks-brick\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.435061 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qlv9\" (UniqueName: \"kubernetes.io/projected/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-kube-api-access-7qlv9\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.435078 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.435094 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg8fz\" (UniqueName: \"kubernetes.io/projected/044a622b-d62c-414f-afe7-48fb8b2bf7c7-kube-api-access-wg8fz\") on node \"crc\" 
DevicePath \"\"" Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.435111 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044a622b-d62c-414f-afe7-48fb8b2bf7c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.438150 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044a622b-d62c-414f-afe7-48fb8b2bf7c7-config-data" (OuterVolumeSpecName: "config-data") pod "044a622b-d62c-414f-afe7-48fb8b2bf7c7" (UID: "044a622b-d62c-414f-afe7-48fb8b2bf7c7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.465482 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044a622b-d62c-414f-afe7-48fb8b2bf7c7-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "044a622b-d62c-414f-afe7-48fb8b2bf7c7" (UID: "044a622b-d62c-414f-afe7-48fb8b2bf7c7"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.475299 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "4561a3ed-cd15-44d8-a86e-8d442a4f1d80" (UID: "4561a3ed-cd15-44d8-a86e-8d442a4f1d80"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.536484 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/044a622b-d62c-414f-afe7-48fb8b2bf7c7-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.536540 4821 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/044a622b-d62c-414f-afe7-48fb8b2bf7c7-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.536559 4821 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4561a3ed-cd15-44d8-a86e-8d442a4f1d80-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.584585 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-db-create-vl4gk"] Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.591780 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-db-create-vl4gk"] Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.609505 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder1259-account-delete-m8s5l"] Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.619567 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-1259-account-create-update-wb9tr"] Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.626500 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder1259-account-delete-m8s5l"] Mar 09 19:08:46 crc kubenswrapper[4821]: I0309 19:08:46.632620 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-1259-account-create-update-wb9tr"] Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.079122 
4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"044a622b-d62c-414f-afe7-48fb8b2bf7c7","Type":"ContainerDied","Data":"03ab7d0588a0a83ddf89920ff2b4b4b2cf0895b58e42240cbcb76839ad0cc197"} Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.079541 4821 scope.go:117] "RemoveContainer" containerID="1f2305258f988cc1de25c49dd91e866063d285c213d1e0bc28c945ea4653c76e" Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.079801 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.085059 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"7840ceab-38ab-461f-b00b-8e136c5a4c23","Type":"ContainerStarted","Data":"e15c96b64bb48f882c41730d00e8aa3af3544bfd00325746c62151d9885abf2b"} Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.090136 4821 generic.go:334] "Generic (PLEG): container finished" podID="3f20f5e1-98fe-4725-98bc-e68e6b5cca00" containerID="83b9f0c14ed16b9c35b651e6bbf38557b8ceb271448b546f3650d6ab9e5d3aab" exitCode=0 Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.090243 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"3f20f5e1-98fe-4725-98bc-e68e6b5cca00","Type":"ContainerDied","Data":"83b9f0c14ed16b9c35b651e6bbf38557b8ceb271448b546f3650d6ab9e5d3aab"} Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.090277 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"3f20f5e1-98fe-4725-98bc-e68e6b5cca00","Type":"ContainerDied","Data":"905246e9c1a4eb402c047b4496e47b2d4415568ace43a983714e43f55855744c"} Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.090299 4821 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="905246e9c1a4eb402c047b4496e47b2d4415568ace43a983714e43f55855744c" Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.093532 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"4561a3ed-cd15-44d8-a86e-8d442a4f1d80","Type":"ContainerDied","Data":"b068c66021de377ad362dbe242d2df0e678555ea5228526cc6865255b50a0649"} Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.093657 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.119476 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.132964 4821 scope.go:117] "RemoveContainer" containerID="4bf36e634d44a1fc99b65ae1cc16a3213298a2dbcfa05b999c4420c6f1502bc2" Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.141967 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.145910 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-config-data\") pod \"3f20f5e1-98fe-4725-98bc-e68e6b5cca00\" (UID: \"3f20f5e1-98fe-4725-98bc-e68e6b5cca00\") " Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.160736 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.160839 4821 scope.go:117] "RemoveContainer" containerID="cc32cb2cf304053eef8ab7cbc0df0cd381412f665c8ea8749a64aeb62c1fd448" Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.171901 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Mar 09 
19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.181387 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.208815 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-config-data" (OuterVolumeSpecName: "config-data") pod "3f20f5e1-98fe-4725-98bc-e68e6b5cca00" (UID: "3f20f5e1-98fe-4725-98bc-e68e6b5cca00"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.224240 4821 scope.go:117] "RemoveContainer" containerID="5929330278e814918751422a87b79546af727b9ea4929ad36d3fd16930056c98" Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.247949 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-cert-memcached-mtls\") pod \"3f20f5e1-98fe-4725-98bc-e68e6b5cca00\" (UID: \"3f20f5e1-98fe-4725-98bc-e68e6b5cca00\") " Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.248122 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-custom-prometheus-ca\") pod \"3f20f5e1-98fe-4725-98bc-e68e6b5cca00\" (UID: \"3f20f5e1-98fe-4725-98bc-e68e6b5cca00\") " Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.248148 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-combined-ca-bundle\") pod \"3f20f5e1-98fe-4725-98bc-e68e6b5cca00\" (UID: \"3f20f5e1-98fe-4725-98bc-e68e6b5cca00\") " Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.248166 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-645b5\" (UniqueName: \"kubernetes.io/projected/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-kube-api-access-645b5\") pod \"3f20f5e1-98fe-4725-98bc-e68e6b5cca00\" (UID: \"3f20f5e1-98fe-4725-98bc-e68e6b5cca00\") " Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.248196 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-logs\") pod \"3f20f5e1-98fe-4725-98bc-e68e6b5cca00\" (UID: \"3f20f5e1-98fe-4725-98bc-e68e6b5cca00\") " Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.248510 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.249042 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-logs" (OuterVolumeSpecName: "logs") pod "3f20f5e1-98fe-4725-98bc-e68e6b5cca00" (UID: "3f20f5e1-98fe-4725-98bc-e68e6b5cca00"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.251234 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-kube-api-access-645b5" (OuterVolumeSpecName: "kube-api-access-645b5") pod "3f20f5e1-98fe-4725-98bc-e68e6b5cca00" (UID: "3f20f5e1-98fe-4725-98bc-e68e6b5cca00"). InnerVolumeSpecName "kube-api-access-645b5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.274010 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "3f20f5e1-98fe-4725-98bc-e68e6b5cca00" (UID: "3f20f5e1-98fe-4725-98bc-e68e6b5cca00"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.282120 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f20f5e1-98fe-4725-98bc-e68e6b5cca00" (UID: "3f20f5e1-98fe-4725-98bc-e68e6b5cca00"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.306514 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "3f20f5e1-98fe-4725-98bc-e68e6b5cca00" (UID: "3f20f5e1-98fe-4725-98bc-e68e6b5cca00"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.350146 4821 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.350176 4821 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.350185 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.350196 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-645b5\" (UniqueName: \"kubernetes.io/projected/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-kube-api-access-645b5\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.350207 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f20f5e1-98fe-4725-98bc-e68e6b5cca00-logs\") on node \"crc\" DevicePath \"\"" Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.573525 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="044a622b-d62c-414f-afe7-48fb8b2bf7c7" path="/var/lib/kubelet/pods/044a622b-d62c-414f-afe7-48fb8b2bf7c7/volumes" Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.574140 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4561a3ed-cd15-44d8-a86e-8d442a4f1d80" path="/var/lib/kubelet/pods/4561a3ed-cd15-44d8-a86e-8d442a4f1d80/volumes" Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.574687 4821 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="77bdbf36-96d6-447f-bec3-fa2cf37efc1f" path="/var/lib/kubelet/pods/77bdbf36-96d6-447f-bec3-fa2cf37efc1f/volumes" Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.575805 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f118fd3-1526-4727-83ea-bc87283b7ad9" path="/var/lib/kubelet/pods/8f118fd3-1526-4727-83ea-bc87283b7ad9/volumes" Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.576263 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a3a05e0-575d-45b2-9f8d-9ee5136aee47" path="/var/lib/kubelet/pods/9a3a05e0-575d-45b2-9f8d-9ee5136aee47/volumes" Mar 09 19:08:47 crc kubenswrapper[4821]: I0309 19:08:47.576775 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af77949b-43e7-411f-81cc-455dcfd140fb" path="/var/lib/kubelet/pods/af77949b-43e7-411f-81cc-455dcfd140fb/volumes" Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.107192 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"7840ceab-38ab-461f-b00b-8e136c5a4c23","Type":"ContainerStarted","Data":"3f37668c08e2eff16bda56baddf0494fae19149a324519ef1aa5e0f62912d9b0"} Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.107223 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.158409 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.166910 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.205390 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:08:48 crc kubenswrapper[4821]: E0309 19:08:48.205715 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="044a622b-d62c-414f-afe7-48fb8b2bf7c7" containerName="probe" Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.205729 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="044a622b-d62c-414f-afe7-48fb8b2bf7c7" containerName="probe" Mar 09 19:08:48 crc kubenswrapper[4821]: E0309 19:08:48.205739 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="044a622b-d62c-414f-afe7-48fb8b2bf7c7" containerName="cinder-backup" Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.205744 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="044a622b-d62c-414f-afe7-48fb8b2bf7c7" containerName="cinder-backup" Mar 09 19:08:48 crc kubenswrapper[4821]: E0309 19:08:48.205761 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4561a3ed-cd15-44d8-a86e-8d442a4f1d80" containerName="cinder-scheduler" Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.205767 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="4561a3ed-cd15-44d8-a86e-8d442a4f1d80" containerName="cinder-scheduler" Mar 09 19:08:48 crc kubenswrapper[4821]: E0309 19:08:48.205789 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4561a3ed-cd15-44d8-a86e-8d442a4f1d80" containerName="probe" Mar 09 19:08:48 
crc kubenswrapper[4821]: I0309 19:08:48.205795 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="4561a3ed-cd15-44d8-a86e-8d442a4f1d80" containerName="probe" Mar 09 19:08:48 crc kubenswrapper[4821]: E0309 19:08:48.205808 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f20f5e1-98fe-4725-98bc-e68e6b5cca00" containerName="watcher-decision-engine" Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.205814 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f20f5e1-98fe-4725-98bc-e68e6b5cca00" containerName="watcher-decision-engine" Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.205959 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="4561a3ed-cd15-44d8-a86e-8d442a4f1d80" containerName="cinder-scheduler" Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.205971 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="4561a3ed-cd15-44d8-a86e-8d442a4f1d80" containerName="probe" Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.205982 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="044a622b-d62c-414f-afe7-48fb8b2bf7c7" containerName="probe" Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.205994 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="044a622b-d62c-414f-afe7-48fb8b2bf7c7" containerName="cinder-backup" Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.206002 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f20f5e1-98fe-4725-98bc-e68e6b5cca00" containerName="watcher-decision-engine" Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.206519 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.208260 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.226083 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.366337 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52b89947-d017-4d12-9c39-9b86f2e38097-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52b89947-d017-4d12-9c39-9b86f2e38097\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.366390 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/52b89947-d017-4d12-9c39-9b86f2e38097-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52b89947-d017-4d12-9c39-9b86f2e38097\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.366471 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52b89947-d017-4d12-9c39-9b86f2e38097-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52b89947-d017-4d12-9c39-9b86f2e38097\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.366515 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/52b89947-d017-4d12-9c39-9b86f2e38097-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52b89947-d017-4d12-9c39-9b86f2e38097\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.366564 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/52b89947-d017-4d12-9c39-9b86f2e38097-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52b89947-d017-4d12-9c39-9b86f2e38097\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.366609 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crbqq\" (UniqueName: \"kubernetes.io/projected/52b89947-d017-4d12-9c39-9b86f2e38097-kube-api-access-crbqq\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52b89947-d017-4d12-9c39-9b86f2e38097\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.468119 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b89947-d017-4d12-9c39-9b86f2e38097-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52b89947-d017-4d12-9c39-9b86f2e38097\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.468535 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/52b89947-d017-4d12-9c39-9b86f2e38097-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52b89947-d017-4d12-9c39-9b86f2e38097\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.468654 4821 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crbqq\" (UniqueName: \"kubernetes.io/projected/52b89947-d017-4d12-9c39-9b86f2e38097-kube-api-access-crbqq\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52b89947-d017-4d12-9c39-9b86f2e38097\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.468762 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52b89947-d017-4d12-9c39-9b86f2e38097-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52b89947-d017-4d12-9c39-9b86f2e38097\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.468885 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/52b89947-d017-4d12-9c39-9b86f2e38097-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52b89947-d017-4d12-9c39-9b86f2e38097\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.468987 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52b89947-d017-4d12-9c39-9b86f2e38097-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52b89947-d017-4d12-9c39-9b86f2e38097\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.469387 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52b89947-d017-4d12-9c39-9b86f2e38097-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52b89947-d017-4d12-9c39-9b86f2e38097\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.475281 
4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/52b89947-d017-4d12-9c39-9b86f2e38097-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52b89947-d017-4d12-9c39-9b86f2e38097\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.476102 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/52b89947-d017-4d12-9c39-9b86f2e38097-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52b89947-d017-4d12-9c39-9b86f2e38097\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.476341 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b89947-d017-4d12-9c39-9b86f2e38097-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52b89947-d017-4d12-9c39-9b86f2e38097\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.483768 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crbqq\" (UniqueName: \"kubernetes.io/projected/52b89947-d017-4d12-9c39-9b86f2e38097-kube-api-access-crbqq\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52b89947-d017-4d12-9c39-9b86f2e38097\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.488807 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52b89947-d017-4d12-9c39-9b86f2e38097-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"52b89947-d017-4d12-9c39-9b86f2e38097\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:08:48 crc kubenswrapper[4821]: 
I0309 19:08:48.519449 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:08:48 crc kubenswrapper[4821]: I0309 19:08:48.997408 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:08:49 crc kubenswrapper[4821]: W0309 19:08:49.033825 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52b89947_d017_4d12_9c39_9b86f2e38097.slice/crio-769643bbf5f2912cd03f4d3d424c5d248074686ff03fcb0de700c7f7fe89c8e4 WatchSource:0}: Error finding container 769643bbf5f2912cd03f4d3d424c5d248074686ff03fcb0de700c7f7fe89c8e4: Status 404 returned error can't find the container with id 769643bbf5f2912cd03f4d3d424c5d248074686ff03fcb0de700c7f7fe89c8e4 Mar 09 19:08:49 crc kubenswrapper[4821]: I0309 19:08:49.119774 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"7840ceab-38ab-461f-b00b-8e136c5a4c23","Type":"ContainerStarted","Data":"780d07299ff35ded95d38056830d5b3f1938a652593eb4a2d766026e9d3f031b"} Mar 09 19:08:49 crc kubenswrapper[4821]: I0309 19:08:49.121255 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"52b89947-d017-4d12-9c39-9b86f2e38097","Type":"ContainerStarted","Data":"769643bbf5f2912cd03f4d3d424c5d248074686ff03fcb0de700c7f7fe89c8e4"} Mar 09 19:08:49 crc kubenswrapper[4821]: I0309 19:08:49.367465 4821 scope.go:117] "RemoveContainer" containerID="b50be02ae45820230e0e491d2c66ce64026d9a3d8b80a7f8f158693e9428d4cd" Mar 09 19:08:49 crc kubenswrapper[4821]: I0309 19:08:49.392818 4821 scope.go:117] "RemoveContainer" containerID="b2faeed741e7e59c22a0de2bd237de590f3c9cd0eec1d72e47e72b83478f7743" Mar 09 19:08:49 crc kubenswrapper[4821]: I0309 19:08:49.413821 4821 scope.go:117] "RemoveContainer" 
containerID="20a46d1741bd4c964d79f701d204ed53424c6769488c29bfd121fa5c6c396cc0" Mar 09 19:08:49 crc kubenswrapper[4821]: I0309 19:08:49.441621 4821 scope.go:117] "RemoveContainer" containerID="323fef1d8bdfa94be4aeb04c244a1872a2d564555e5f4e9059d8ac8b534bc4b4" Mar 09 19:08:49 crc kubenswrapper[4821]: I0309 19:08:49.495488 4821 scope.go:117] "RemoveContainer" containerID="a3de3fa5b0abd0880578873ab6b250d3b791523ac72a82a68f13e1852d71dca6" Mar 09 19:08:49 crc kubenswrapper[4821]: I0309 19:08:49.512480 4821 scope.go:117] "RemoveContainer" containerID="4eb5fc9676556af2c13adac98ad2e0f465c105fa4f530d034ccc19cbb29b171c" Mar 09 19:08:49 crc kubenswrapper[4821]: I0309 19:08:49.567405 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f20f5e1-98fe-4725-98bc-e68e6b5cca00" path="/var/lib/kubelet/pods/3f20f5e1-98fe-4725-98bc-e68e6b5cca00/volumes" Mar 09 19:08:49 crc kubenswrapper[4821]: I0309 19:08:49.569930 4821 scope.go:117] "RemoveContainer" containerID="8fffa671f45df4eb87ecc5da2cf25342ee1a6379c1ad1ede7b8730ead3567da2" Mar 09 19:08:49 crc kubenswrapper[4821]: I0309 19:08:49.595551 4821 scope.go:117] "RemoveContainer" containerID="46b6ccb57cbbc6bdd2b6c5f6bdeb38949ed0291892d21bff7357c585f6f8460b" Mar 09 19:08:50 crc kubenswrapper[4821]: I0309 19:08:50.131982 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"52b89947-d017-4d12-9c39-9b86f2e38097","Type":"ContainerStarted","Data":"a5fd93d11590d5914a5f9189901db21b2b71b18687b29750a4eba8245492cffd"} Mar 09 19:08:50 crc kubenswrapper[4821]: I0309 19:08:50.152891 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.152872747 podStartE2EDuration="2.152872747s" podCreationTimestamp="2026-03-09 19:08:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-09 19:08:50.150260027 +0000 UTC m=+2667.311635913" watchObservedRunningTime="2026-03-09 19:08:50.152872747 +0000 UTC m=+2667.314248603" Mar 09 19:08:50 crc kubenswrapper[4821]: I0309 19:08:50.712263 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52b89947-d017-4d12-9c39-9b86f2e38097/watcher-decision-engine/0.log" Mar 09 19:08:51 crc kubenswrapper[4821]: I0309 19:08:51.148604 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"7840ceab-38ab-461f-b00b-8e136c5a4c23","Type":"ContainerStarted","Data":"1e08f3aeeda5492de08ae299d12cc3cb14b33d0c9f79cf31f77c8956ff6e8f50"} Mar 09 19:08:51 crc kubenswrapper[4821]: I0309 19:08:51.148954 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:08:51 crc kubenswrapper[4821]: I0309 19:08:51.183782 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.06859722 podStartE2EDuration="6.183762858s" podCreationTimestamp="2026-03-09 19:08:45 +0000 UTC" firstStartedPulling="2026-03-09 19:08:46.042440938 +0000 UTC m=+2663.203816794" lastFinishedPulling="2026-03-09 19:08:50.157606576 +0000 UTC m=+2667.318982432" observedRunningTime="2026-03-09 19:08:51.172618656 +0000 UTC m=+2668.333994522" watchObservedRunningTime="2026-03-09 19:08:51.183762858 +0000 UTC m=+2668.345138714" Mar 09 19:08:51 crc kubenswrapper[4821]: I0309 19:08:51.877269 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52b89947-d017-4d12-9c39-9b86f2e38097/watcher-decision-engine/0.log" Mar 09 19:08:53 crc kubenswrapper[4821]: I0309 19:08:53.054606 4821 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52b89947-d017-4d12-9c39-9b86f2e38097/watcher-decision-engine/0.log" Mar 09 19:08:54 crc kubenswrapper[4821]: I0309 19:08:54.279237 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52b89947-d017-4d12-9c39-9b86f2e38097/watcher-decision-engine/0.log" Mar 09 19:08:55 crc kubenswrapper[4821]: I0309 19:08:55.451490 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52b89947-d017-4d12-9c39-9b86f2e38097/watcher-decision-engine/0.log" Mar 09 19:08:56 crc kubenswrapper[4821]: I0309 19:08:56.688691 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52b89947-d017-4d12-9c39-9b86f2e38097/watcher-decision-engine/0.log" Mar 09 19:08:57 crc kubenswrapper[4821]: I0309 19:08:57.551594 4821 scope.go:117] "RemoveContainer" containerID="2e97656a0c5c8ccdf249fb34ee3c3f4cd8501de8fced67c6294f9be90f6444d8" Mar 09 19:08:57 crc kubenswrapper[4821]: E0309 19:08:57.552006 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 19:08:57 crc kubenswrapper[4821]: I0309 19:08:57.892337 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52b89947-d017-4d12-9c39-9b86f2e38097/watcher-decision-engine/0.log" Mar 09 19:08:58 crc kubenswrapper[4821]: I0309 19:08:58.519792 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:08:58 crc kubenswrapper[4821]: I0309 19:08:58.569062 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:08:59 crc kubenswrapper[4821]: I0309 19:08:59.129552 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52b89947-d017-4d12-9c39-9b86f2e38097/watcher-decision-engine/0.log" Mar 09 19:08:59 crc kubenswrapper[4821]: I0309 19:08:59.220986 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:08:59 crc kubenswrapper[4821]: I0309 19:08:59.261647 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:09:00 crc kubenswrapper[4821]: I0309 19:09:00.381138 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_52b89947-d017-4d12-9c39-9b86f2e38097/watcher-decision-engine/0.log" Mar 09 19:09:00 crc kubenswrapper[4821]: I0309 19:09:00.510189 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-6hlvv"] Mar 09 19:09:00 crc kubenswrapper[4821]: I0309 19:09:00.517077 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-6hlvv"] Mar 09 19:09:00 crc kubenswrapper[4821]: I0309 19:09:00.616914 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher3f6e-account-delete-ktqzl"] Mar 09 19:09:00 crc kubenswrapper[4821]: I0309 19:09:00.625153 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher3f6e-account-delete-ktqzl" Mar 09 19:09:00 crc kubenswrapper[4821]: I0309 19:09:00.647066 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:09:00 crc kubenswrapper[4821]: I0309 19:09:00.647256 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="1bf3043c-0996-4743-9d7c-059b18df0896" containerName="watcher-applier" containerID="cri-o://6d768dc1227dbe38216059ab13417e797a313f263c97db1d18fb6a2e79455f23" gracePeriod=30 Mar 09 19:09:00 crc kubenswrapper[4821]: I0309 19:09:00.669478 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher3f6e-account-delete-ktqzl"] Mar 09 19:09:00 crc kubenswrapper[4821]: I0309 19:09:00.732843 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5681c639-9924-4f5f-9a5d-eb460597b7e0-operator-scripts\") pod \"watcher3f6e-account-delete-ktqzl\" (UID: \"5681c639-9924-4f5f-9a5d-eb460597b7e0\") " pod="watcher-kuttl-default/watcher3f6e-account-delete-ktqzl" Mar 09 19:09:00 crc kubenswrapper[4821]: I0309 19:09:00.732963 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp9jh\" (UniqueName: \"kubernetes.io/projected/5681c639-9924-4f5f-9a5d-eb460597b7e0-kube-api-access-tp9jh\") pod \"watcher3f6e-account-delete-ktqzl\" (UID: \"5681c639-9924-4f5f-9a5d-eb460597b7e0\") " pod="watcher-kuttl-default/watcher3f6e-account-delete-ktqzl" Mar 09 19:09:00 crc kubenswrapper[4821]: I0309 19:09:00.751299 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:09:00 crc kubenswrapper[4821]: I0309 19:09:00.835381 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-tp9jh\" (UniqueName: \"kubernetes.io/projected/5681c639-9924-4f5f-9a5d-eb460597b7e0-kube-api-access-tp9jh\") pod \"watcher3f6e-account-delete-ktqzl\" (UID: \"5681c639-9924-4f5f-9a5d-eb460597b7e0\") " pod="watcher-kuttl-default/watcher3f6e-account-delete-ktqzl" Mar 09 19:09:00 crc kubenswrapper[4821]: I0309 19:09:00.835547 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5681c639-9924-4f5f-9a5d-eb460597b7e0-operator-scripts\") pod \"watcher3f6e-account-delete-ktqzl\" (UID: \"5681c639-9924-4f5f-9a5d-eb460597b7e0\") " pod="watcher-kuttl-default/watcher3f6e-account-delete-ktqzl" Mar 09 19:09:00 crc kubenswrapper[4821]: I0309 19:09:00.836540 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5681c639-9924-4f5f-9a5d-eb460597b7e0-operator-scripts\") pod \"watcher3f6e-account-delete-ktqzl\" (UID: \"5681c639-9924-4f5f-9a5d-eb460597b7e0\") " pod="watcher-kuttl-default/watcher3f6e-account-delete-ktqzl" Mar 09 19:09:00 crc kubenswrapper[4821]: I0309 19:09:00.854822 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:09:00 crc kubenswrapper[4821]: I0309 19:09:00.855044 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="b64e7401-cb9d-41b0-bdd2-59b43c383583" containerName="watcher-kuttl-api-log" containerID="cri-o://5e12805ad9e8d22c553a868ab73584badea31e5e9ed98bc2df0dcd6d9b962297" gracePeriod=30 Mar 09 19:09:00 crc kubenswrapper[4821]: I0309 19:09:00.855169 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="b64e7401-cb9d-41b0-bdd2-59b43c383583" containerName="watcher-api" containerID="cri-o://196a17baca99aecdfc51292276e233d789379c5e57f8fa343f555ad91a51164e" 
gracePeriod=30 Mar 09 19:09:00 crc kubenswrapper[4821]: I0309 19:09:00.868380 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp9jh\" (UniqueName: \"kubernetes.io/projected/5681c639-9924-4f5f-9a5d-eb460597b7e0-kube-api-access-tp9jh\") pod \"watcher3f6e-account-delete-ktqzl\" (UID: \"5681c639-9924-4f5f-9a5d-eb460597b7e0\") " pod="watcher-kuttl-default/watcher3f6e-account-delete-ktqzl" Mar 09 19:09:00 crc kubenswrapper[4821]: I0309 19:09:00.956948 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher3f6e-account-delete-ktqzl" Mar 09 19:09:01 crc kubenswrapper[4821]: I0309 19:09:01.236104 4821 generic.go:334] "Generic (PLEG): container finished" podID="b64e7401-cb9d-41b0-bdd2-59b43c383583" containerID="5e12805ad9e8d22c553a868ab73584badea31e5e9ed98bc2df0dcd6d9b962297" exitCode=143 Mar 09 19:09:01 crc kubenswrapper[4821]: I0309 19:09:01.236180 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"b64e7401-cb9d-41b0-bdd2-59b43c383583","Type":"ContainerDied","Data":"5e12805ad9e8d22c553a868ab73584badea31e5e9ed98bc2df0dcd6d9b962297"} Mar 09 19:09:01 crc kubenswrapper[4821]: I0309 19:09:01.237093 4821 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" secret="" err="secret \"watcher-watcher-kuttl-dockercfg-dtbkv\" not found" Mar 09 19:09:01 crc kubenswrapper[4821]: E0309 19:09:01.244377 4821 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Mar 09 19:09:01 crc kubenswrapper[4821]: E0309 19:09:01.244429 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52b89947-d017-4d12-9c39-9b86f2e38097-config-data podName:52b89947-d017-4d12-9c39-9b86f2e38097 nodeName:}" failed. 
No retries permitted until 2026-03-09 19:09:01.744412252 +0000 UTC m=+2678.905788108 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/52b89947-d017-4d12-9c39-9b86f2e38097-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "52b89947-d017-4d12-9c39-9b86f2e38097") : secret "watcher-kuttl-decision-engine-config-data" not found Mar 09 19:09:01 crc kubenswrapper[4821]: I0309 19:09:01.464442 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher3f6e-account-delete-ktqzl"] Mar 09 19:09:01 crc kubenswrapper[4821]: I0309 19:09:01.562009 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac1bca1d-c800-411b-aa6f-f71c343914ea" path="/var/lib/kubelet/pods/ac1bca1d-c800-411b-aa6f-f71c343914ea/volumes" Mar 09 19:09:01 crc kubenswrapper[4821]: E0309 19:09:01.750601 4821 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Mar 09 19:09:01 crc kubenswrapper[4821]: E0309 19:09:01.750682 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52b89947-d017-4d12-9c39-9b86f2e38097-config-data podName:52b89947-d017-4d12-9c39-9b86f2e38097 nodeName:}" failed. No retries permitted until 2026-03-09 19:09:02.750668004 +0000 UTC m=+2679.912043860 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/52b89947-d017-4d12-9c39-9b86f2e38097-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "52b89947-d017-4d12-9c39-9b86f2e38097") : secret "watcher-kuttl-decision-engine-config-data" not found Mar 09 19:09:02 crc kubenswrapper[4821]: I0309 19:09:02.245894 4821 generic.go:334] "Generic (PLEG): container finished" podID="5681c639-9924-4f5f-9a5d-eb460597b7e0" containerID="e5cd16a5e8a50d1d2db078e67d28c5512cf1065b3ca48491398f7f6c589d9591" exitCode=0 Mar 09 19:09:02 crc kubenswrapper[4821]: I0309 19:09:02.246038 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher3f6e-account-delete-ktqzl" event={"ID":"5681c639-9924-4f5f-9a5d-eb460597b7e0","Type":"ContainerDied","Data":"e5cd16a5e8a50d1d2db078e67d28c5512cf1065b3ca48491398f7f6c589d9591"} Mar 09 19:09:02 crc kubenswrapper[4821]: I0309 19:09:02.246237 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher3f6e-account-delete-ktqzl" event={"ID":"5681c639-9924-4f5f-9a5d-eb460597b7e0","Type":"ContainerStarted","Data":"d2a583437e2eaecfae9845e5ff59eec5bfd1c962be1aa20fdb1b8f9845922db6"} Mar 09 19:09:02 crc kubenswrapper[4821]: I0309 19:09:02.247745 4821 generic.go:334] "Generic (PLEG): container finished" podID="b64e7401-cb9d-41b0-bdd2-59b43c383583" containerID="196a17baca99aecdfc51292276e233d789379c5e57f8fa343f555ad91a51164e" exitCode=0 Mar 09 19:09:02 crc kubenswrapper[4821]: I0309 19:09:02.247926 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="52b89947-d017-4d12-9c39-9b86f2e38097" containerName="watcher-decision-engine" containerID="cri-o://a5fd93d11590d5914a5f9189901db21b2b71b18687b29750a4eba8245492cffd" gracePeriod=30 Mar 09 19:09:02 crc kubenswrapper[4821]: I0309 19:09:02.248166 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"b64e7401-cb9d-41b0-bdd2-59b43c383583","Type":"ContainerDied","Data":"196a17baca99aecdfc51292276e233d789379c5e57f8fa343f555ad91a51164e"} Mar 09 19:09:02 crc kubenswrapper[4821]: I0309 19:09:02.476150 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:09:02 crc kubenswrapper[4821]: I0309 19:09:02.597168 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b64e7401-cb9d-41b0-bdd2-59b43c383583-cert-memcached-mtls\") pod \"b64e7401-cb9d-41b0-bdd2-59b43c383583\" (UID: \"b64e7401-cb9d-41b0-bdd2-59b43c383583\") " Mar 09 19:09:02 crc kubenswrapper[4821]: I0309 19:09:02.597252 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b64e7401-cb9d-41b0-bdd2-59b43c383583-config-data\") pod \"b64e7401-cb9d-41b0-bdd2-59b43c383583\" (UID: \"b64e7401-cb9d-41b0-bdd2-59b43c383583\") " Mar 09 19:09:02 crc kubenswrapper[4821]: I0309 19:09:02.597297 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b64e7401-cb9d-41b0-bdd2-59b43c383583-logs\") pod \"b64e7401-cb9d-41b0-bdd2-59b43c383583\" (UID: \"b64e7401-cb9d-41b0-bdd2-59b43c383583\") " Mar 09 19:09:02 crc kubenswrapper[4821]: I0309 19:09:02.597369 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftfl9\" (UniqueName: \"kubernetes.io/projected/b64e7401-cb9d-41b0-bdd2-59b43c383583-kube-api-access-ftfl9\") pod \"b64e7401-cb9d-41b0-bdd2-59b43c383583\" (UID: \"b64e7401-cb9d-41b0-bdd2-59b43c383583\") " Mar 09 19:09:02 crc kubenswrapper[4821]: I0309 19:09:02.597433 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b64e7401-cb9d-41b0-bdd2-59b43c383583-combined-ca-bundle\") pod \"b64e7401-cb9d-41b0-bdd2-59b43c383583\" (UID: \"b64e7401-cb9d-41b0-bdd2-59b43c383583\") " Mar 09 19:09:02 crc kubenswrapper[4821]: I0309 19:09:02.597533 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b64e7401-cb9d-41b0-bdd2-59b43c383583-custom-prometheus-ca\") pod \"b64e7401-cb9d-41b0-bdd2-59b43c383583\" (UID: \"b64e7401-cb9d-41b0-bdd2-59b43c383583\") " Mar 09 19:09:02 crc kubenswrapper[4821]: I0309 19:09:02.598045 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b64e7401-cb9d-41b0-bdd2-59b43c383583-logs" (OuterVolumeSpecName: "logs") pod "b64e7401-cb9d-41b0-bdd2-59b43c383583" (UID: "b64e7401-cb9d-41b0-bdd2-59b43c383583"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:09:02 crc kubenswrapper[4821]: I0309 19:09:02.599863 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b64e7401-cb9d-41b0-bdd2-59b43c383583-logs\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:02 crc kubenswrapper[4821]: I0309 19:09:02.608562 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b64e7401-cb9d-41b0-bdd2-59b43c383583-kube-api-access-ftfl9" (OuterVolumeSpecName: "kube-api-access-ftfl9") pod "b64e7401-cb9d-41b0-bdd2-59b43c383583" (UID: "b64e7401-cb9d-41b0-bdd2-59b43c383583"). InnerVolumeSpecName "kube-api-access-ftfl9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:09:02 crc kubenswrapper[4821]: I0309 19:09:02.627843 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b64e7401-cb9d-41b0-bdd2-59b43c383583-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b64e7401-cb9d-41b0-bdd2-59b43c383583" (UID: "b64e7401-cb9d-41b0-bdd2-59b43c383583"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:09:02 crc kubenswrapper[4821]: I0309 19:09:02.631660 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b64e7401-cb9d-41b0-bdd2-59b43c383583-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "b64e7401-cb9d-41b0-bdd2-59b43c383583" (UID: "b64e7401-cb9d-41b0-bdd2-59b43c383583"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:09:02 crc kubenswrapper[4821]: I0309 19:09:02.641252 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b64e7401-cb9d-41b0-bdd2-59b43c383583-config-data" (OuterVolumeSpecName: "config-data") pod "b64e7401-cb9d-41b0-bdd2-59b43c383583" (UID: "b64e7401-cb9d-41b0-bdd2-59b43c383583"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:09:02 crc kubenswrapper[4821]: I0309 19:09:02.673644 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b64e7401-cb9d-41b0-bdd2-59b43c383583-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "b64e7401-cb9d-41b0-bdd2-59b43c383583" (UID: "b64e7401-cb9d-41b0-bdd2-59b43c383583"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:09:02 crc kubenswrapper[4821]: I0309 19:09:02.701658 4821 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b64e7401-cb9d-41b0-bdd2-59b43c383583-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:02 crc kubenswrapper[4821]: I0309 19:09:02.701697 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b64e7401-cb9d-41b0-bdd2-59b43c383583-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:02 crc kubenswrapper[4821]: I0309 19:09:02.701709 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftfl9\" (UniqueName: \"kubernetes.io/projected/b64e7401-cb9d-41b0-bdd2-59b43c383583-kube-api-access-ftfl9\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:02 crc kubenswrapper[4821]: I0309 19:09:02.701718 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b64e7401-cb9d-41b0-bdd2-59b43c383583-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:02 crc kubenswrapper[4821]: I0309 19:09:02.701727 4821 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b64e7401-cb9d-41b0-bdd2-59b43c383583-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:02 crc kubenswrapper[4821]: E0309 19:09:02.803403 4821 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Mar 09 19:09:02 crc kubenswrapper[4821]: E0309 19:09:02.803500 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52b89947-d017-4d12-9c39-9b86f2e38097-config-data podName:52b89947-d017-4d12-9c39-9b86f2e38097 nodeName:}" failed. 
No retries permitted until 2026-03-09 19:09:04.803478639 +0000 UTC m=+2681.964854505 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/52b89947-d017-4d12-9c39-9b86f2e38097-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "52b89947-d017-4d12-9c39-9b86f2e38097") : secret "watcher-kuttl-decision-engine-config-data" not found Mar 09 19:09:03 crc kubenswrapper[4821]: I0309 19:09:03.103612 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:09:03 crc kubenswrapper[4821]: I0309 19:09:03.103960 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="7840ceab-38ab-461f-b00b-8e136c5a4c23" containerName="ceilometer-central-agent" containerID="cri-o://e15c96b64bb48f882c41730d00e8aa3af3544bfd00325746c62151d9885abf2b" gracePeriod=30 Mar 09 19:09:03 crc kubenswrapper[4821]: I0309 19:09:03.104046 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="7840ceab-38ab-461f-b00b-8e136c5a4c23" containerName="ceilometer-notification-agent" containerID="cri-o://3f37668c08e2eff16bda56baddf0494fae19149a324519ef1aa5e0f62912d9b0" gracePeriod=30 Mar 09 19:09:03 crc kubenswrapper[4821]: I0309 19:09:03.104075 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="7840ceab-38ab-461f-b00b-8e136c5a4c23" containerName="sg-core" containerID="cri-o://780d07299ff35ded95d38056830d5b3f1938a652593eb4a2d766026e9d3f031b" gracePeriod=30 Mar 09 19:09:03 crc kubenswrapper[4821]: I0309 19:09:03.104111 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="7840ceab-38ab-461f-b00b-8e136c5a4c23" containerName="proxy-httpd" containerID="cri-o://1e08f3aeeda5492de08ae299d12cc3cb14b33d0c9f79cf31f77c8956ff6e8f50" 
gracePeriod=30 Mar 09 19:09:03 crc kubenswrapper[4821]: I0309 19:09:03.117609 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="7840ceab-38ab-461f-b00b-8e136c5a4c23" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.1.5:3000/\": read tcp 10.217.0.2:48110->10.217.1.5:3000: read: connection reset by peer" Mar 09 19:09:03 crc kubenswrapper[4821]: I0309 19:09:03.258419 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"b64e7401-cb9d-41b0-bdd2-59b43c383583","Type":"ContainerDied","Data":"59e1ec0a4461b34c61fcc205139129ae79ae905bfeb7ceaa738b0aab37bba175"} Mar 09 19:09:03 crc kubenswrapper[4821]: I0309 19:09:03.258472 4821 scope.go:117] "RemoveContainer" containerID="196a17baca99aecdfc51292276e233d789379c5e57f8fa343f555ad91a51164e" Mar 09 19:09:03 crc kubenswrapper[4821]: I0309 19:09:03.258521 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:09:03 crc kubenswrapper[4821]: I0309 19:09:03.263739 4821 generic.go:334] "Generic (PLEG): container finished" podID="7840ceab-38ab-461f-b00b-8e136c5a4c23" containerID="780d07299ff35ded95d38056830d5b3f1938a652593eb4a2d766026e9d3f031b" exitCode=2 Mar 09 19:09:03 crc kubenswrapper[4821]: I0309 19:09:03.263805 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"7840ceab-38ab-461f-b00b-8e136c5a4c23","Type":"ContainerDied","Data":"780d07299ff35ded95d38056830d5b3f1938a652593eb4a2d766026e9d3f031b"} Mar 09 19:09:03 crc kubenswrapper[4821]: I0309 19:09:03.279481 4821 scope.go:117] "RemoveContainer" containerID="5e12805ad9e8d22c553a868ab73584badea31e5e9ed98bc2df0dcd6d9b962297" Mar 09 19:09:03 crc kubenswrapper[4821]: I0309 19:09:03.303919 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:09:03 crc kubenswrapper[4821]: I0309 19:09:03.316031 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:09:03 crc kubenswrapper[4821]: I0309 19:09:03.570899 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b64e7401-cb9d-41b0-bdd2-59b43c383583" path="/var/lib/kubelet/pods/b64e7401-cb9d-41b0-bdd2-59b43c383583/volumes" Mar 09 19:09:03 crc kubenswrapper[4821]: I0309 19:09:03.654841 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher3f6e-account-delete-ktqzl" Mar 09 19:09:03 crc kubenswrapper[4821]: E0309 19:09:03.791259 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d768dc1227dbe38216059ab13417e797a313f263c97db1d18fb6a2e79455f23" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 09 19:09:03 crc kubenswrapper[4821]: E0309 19:09:03.792874 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d768dc1227dbe38216059ab13417e797a313f263c97db1d18fb6a2e79455f23" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 09 19:09:03 crc kubenswrapper[4821]: E0309 19:09:03.794038 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d768dc1227dbe38216059ab13417e797a313f263c97db1d18fb6a2e79455f23" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Mar 09 19:09:03 crc kubenswrapper[4821]: E0309 19:09:03.794129 4821 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="1bf3043c-0996-4743-9d7c-059b18df0896" containerName="watcher-applier" Mar 09 19:09:03 crc kubenswrapper[4821]: I0309 19:09:03.824605 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp9jh\" (UniqueName: \"kubernetes.io/projected/5681c639-9924-4f5f-9a5d-eb460597b7e0-kube-api-access-tp9jh\") pod \"5681c639-9924-4f5f-9a5d-eb460597b7e0\" (UID: 
\"5681c639-9924-4f5f-9a5d-eb460597b7e0\") " Mar 09 19:09:03 crc kubenswrapper[4821]: I0309 19:09:03.824864 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5681c639-9924-4f5f-9a5d-eb460597b7e0-operator-scripts\") pod \"5681c639-9924-4f5f-9a5d-eb460597b7e0\" (UID: \"5681c639-9924-4f5f-9a5d-eb460597b7e0\") " Mar 09 19:09:03 crc kubenswrapper[4821]: I0309 19:09:03.825474 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5681c639-9924-4f5f-9a5d-eb460597b7e0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5681c639-9924-4f5f-9a5d-eb460597b7e0" (UID: "5681c639-9924-4f5f-9a5d-eb460597b7e0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 19:09:03 crc kubenswrapper[4821]: I0309 19:09:03.831232 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5681c639-9924-4f5f-9a5d-eb460597b7e0-kube-api-access-tp9jh" (OuterVolumeSpecName: "kube-api-access-tp9jh") pod "5681c639-9924-4f5f-9a5d-eb460597b7e0" (UID: "5681c639-9924-4f5f-9a5d-eb460597b7e0"). InnerVolumeSpecName "kube-api-access-tp9jh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:09:03 crc kubenswrapper[4821]: I0309 19:09:03.926675 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5681c639-9924-4f5f-9a5d-eb460597b7e0-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:03 crc kubenswrapper[4821]: I0309 19:09:03.926710 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tp9jh\" (UniqueName: \"kubernetes.io/projected/5681c639-9924-4f5f-9a5d-eb460597b7e0-kube-api-access-tp9jh\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:03 crc kubenswrapper[4821]: I0309 19:09:03.959637 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.129393 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7840ceab-38ab-461f-b00b-8e136c5a4c23-scripts\") pod \"7840ceab-38ab-461f-b00b-8e136c5a4c23\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.129443 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7840ceab-38ab-461f-b00b-8e136c5a4c23-combined-ca-bundle\") pod \"7840ceab-38ab-461f-b00b-8e136c5a4c23\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.129481 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5np98\" (UniqueName: \"kubernetes.io/projected/7840ceab-38ab-461f-b00b-8e136c5a4c23-kube-api-access-5np98\") pod \"7840ceab-38ab-461f-b00b-8e136c5a4c23\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.129522 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7840ceab-38ab-461f-b00b-8e136c5a4c23-log-httpd\") pod \"7840ceab-38ab-461f-b00b-8e136c5a4c23\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.129612 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7840ceab-38ab-461f-b00b-8e136c5a4c23-run-httpd\") pod \"7840ceab-38ab-461f-b00b-8e136c5a4c23\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.129689 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7840ceab-38ab-461f-b00b-8e136c5a4c23-config-data\") pod \"7840ceab-38ab-461f-b00b-8e136c5a4c23\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.129766 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7840ceab-38ab-461f-b00b-8e136c5a4c23-sg-core-conf-yaml\") pod \"7840ceab-38ab-461f-b00b-8e136c5a4c23\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.129798 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7840ceab-38ab-461f-b00b-8e136c5a4c23-ceilometer-tls-certs\") pod \"7840ceab-38ab-461f-b00b-8e136c5a4c23\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.129942 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7840ceab-38ab-461f-b00b-8e136c5a4c23-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7840ceab-38ab-461f-b00b-8e136c5a4c23" (UID: "7840ceab-38ab-461f-b00b-8e136c5a4c23"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.130241 4821 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7840ceab-38ab-461f-b00b-8e136c5a4c23-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.130331 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7840ceab-38ab-461f-b00b-8e136c5a4c23-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7840ceab-38ab-461f-b00b-8e136c5a4c23" (UID: "7840ceab-38ab-461f-b00b-8e136c5a4c23"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.133041 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7840ceab-38ab-461f-b00b-8e136c5a4c23-scripts" (OuterVolumeSpecName: "scripts") pod "7840ceab-38ab-461f-b00b-8e136c5a4c23" (UID: "7840ceab-38ab-461f-b00b-8e136c5a4c23"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.134167 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7840ceab-38ab-461f-b00b-8e136c5a4c23-kube-api-access-5np98" (OuterVolumeSpecName: "kube-api-access-5np98") pod "7840ceab-38ab-461f-b00b-8e136c5a4c23" (UID: "7840ceab-38ab-461f-b00b-8e136c5a4c23"). InnerVolumeSpecName "kube-api-access-5np98". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.152143 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7840ceab-38ab-461f-b00b-8e136c5a4c23-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7840ceab-38ab-461f-b00b-8e136c5a4c23" (UID: "7840ceab-38ab-461f-b00b-8e136c5a4c23"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.175974 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7840ceab-38ab-461f-b00b-8e136c5a4c23-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "7840ceab-38ab-461f-b00b-8e136c5a4c23" (UID: "7840ceab-38ab-461f-b00b-8e136c5a4c23"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.191102 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7840ceab-38ab-461f-b00b-8e136c5a4c23-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7840ceab-38ab-461f-b00b-8e136c5a4c23" (UID: "7840ceab-38ab-461f-b00b-8e136c5a4c23"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.231358 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7840ceab-38ab-461f-b00b-8e136c5a4c23-config-data" (OuterVolumeSpecName: "config-data") pod "7840ceab-38ab-461f-b00b-8e136c5a4c23" (UID: "7840ceab-38ab-461f-b00b-8e136c5a4c23"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.231440 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7840ceab-38ab-461f-b00b-8e136c5a4c23-config-data\") pod \"7840ceab-38ab-461f-b00b-8e136c5a4c23\" (UID: \"7840ceab-38ab-461f-b00b-8e136c5a4c23\") " Mar 09 19:09:04 crc kubenswrapper[4821]: W0309 19:09:04.231782 4821 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/7840ceab-38ab-461f-b00b-8e136c5a4c23/volumes/kubernetes.io~secret/config-data Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.231799 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7840ceab-38ab-461f-b00b-8e136c5a4c23-config-data" (OuterVolumeSpecName: "config-data") pod "7840ceab-38ab-461f-b00b-8e136c5a4c23" (UID: "7840ceab-38ab-461f-b00b-8e136c5a4c23"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.232054 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7840ceab-38ab-461f-b00b-8e136c5a4c23-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.232091 4821 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7840ceab-38ab-461f-b00b-8e136c5a4c23-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.232111 4821 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7840ceab-38ab-461f-b00b-8e136c5a4c23-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.232131 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7840ceab-38ab-461f-b00b-8e136c5a4c23-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.232146 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7840ceab-38ab-461f-b00b-8e136c5a4c23-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.232164 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5np98\" (UniqueName: \"kubernetes.io/projected/7840ceab-38ab-461f-b00b-8e136c5a4c23-kube-api-access-5np98\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.232181 4821 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7840ceab-38ab-461f-b00b-8e136c5a4c23-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.276708 4821 
generic.go:334] "Generic (PLEG): container finished" podID="7840ceab-38ab-461f-b00b-8e136c5a4c23" containerID="1e08f3aeeda5492de08ae299d12cc3cb14b33d0c9f79cf31f77c8956ff6e8f50" exitCode=0 Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.276785 4821 generic.go:334] "Generic (PLEG): container finished" podID="7840ceab-38ab-461f-b00b-8e136c5a4c23" containerID="3f37668c08e2eff16bda56baddf0494fae19149a324519ef1aa5e0f62912d9b0" exitCode=0 Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.276799 4821 generic.go:334] "Generic (PLEG): container finished" podID="7840ceab-38ab-461f-b00b-8e136c5a4c23" containerID="e15c96b64bb48f882c41730d00e8aa3af3544bfd00325746c62151d9885abf2b" exitCode=0 Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.276760 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.277065 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"7840ceab-38ab-461f-b00b-8e136c5a4c23","Type":"ContainerDied","Data":"1e08f3aeeda5492de08ae299d12cc3cb14b33d0c9f79cf31f77c8956ff6e8f50"} Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.277194 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"7840ceab-38ab-461f-b00b-8e136c5a4c23","Type":"ContainerDied","Data":"3f37668c08e2eff16bda56baddf0494fae19149a324519ef1aa5e0f62912d9b0"} Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.277288 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"7840ceab-38ab-461f-b00b-8e136c5a4c23","Type":"ContainerDied","Data":"e15c96b64bb48f882c41730d00e8aa3af3544bfd00325746c62151d9885abf2b"} Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.277392 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"7840ceab-38ab-461f-b00b-8e136c5a4c23","Type":"ContainerDied","Data":"849c6ac0cdb36fa68f24fa26502e30b06000d6bc2b1452ad5872c7f9c6b91baa"} Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.277457 4821 scope.go:117] "RemoveContainer" containerID="1e08f3aeeda5492de08ae299d12cc3cb14b33d0c9f79cf31f77c8956ff6e8f50" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.278998 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher3f6e-account-delete-ktqzl" event={"ID":"5681c639-9924-4f5f-9a5d-eb460597b7e0","Type":"ContainerDied","Data":"d2a583437e2eaecfae9845e5ff59eec5bfd1c962be1aa20fdb1b8f9845922db6"} Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.279094 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2a583437e2eaecfae9845e5ff59eec5bfd1c962be1aa20fdb1b8f9845922db6" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.279358 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher3f6e-account-delete-ktqzl" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.301084 4821 scope.go:117] "RemoveContainer" containerID="780d07299ff35ded95d38056830d5b3f1938a652593eb4a2d766026e9d3f031b" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.324049 4821 scope.go:117] "RemoveContainer" containerID="3f37668c08e2eff16bda56baddf0494fae19149a324519ef1aa5e0f62912d9b0" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.326198 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.335394 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.345925 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:09:04 crc kubenswrapper[4821]: E0309 19:09:04.346400 4821 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7840ceab-38ab-461f-b00b-8e136c5a4c23" containerName="proxy-httpd" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.346421 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="7840ceab-38ab-461f-b00b-8e136c5a4c23" containerName="proxy-httpd" Mar 09 19:09:04 crc kubenswrapper[4821]: E0309 19:09:04.346447 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b64e7401-cb9d-41b0-bdd2-59b43c383583" containerName="watcher-kuttl-api-log" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.346455 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="b64e7401-cb9d-41b0-bdd2-59b43c383583" containerName="watcher-kuttl-api-log" Mar 09 19:09:04 crc kubenswrapper[4821]: E0309 19:09:04.346469 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b64e7401-cb9d-41b0-bdd2-59b43c383583" containerName="watcher-api" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.346476 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="b64e7401-cb9d-41b0-bdd2-59b43c383583" containerName="watcher-api" Mar 09 19:09:04 crc kubenswrapper[4821]: E0309 19:09:04.346490 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7840ceab-38ab-461f-b00b-8e136c5a4c23" containerName="sg-core" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.346497 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="7840ceab-38ab-461f-b00b-8e136c5a4c23" containerName="sg-core" Mar 09 19:09:04 crc kubenswrapper[4821]: E0309 19:09:04.346509 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5681c639-9924-4f5f-9a5d-eb460597b7e0" containerName="mariadb-account-delete" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.346516 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="5681c639-9924-4f5f-9a5d-eb460597b7e0" containerName="mariadb-account-delete" Mar 09 19:09:04 crc kubenswrapper[4821]: E0309 19:09:04.346528 4821 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="7840ceab-38ab-461f-b00b-8e136c5a4c23" containerName="ceilometer-notification-agent" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.346536 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="7840ceab-38ab-461f-b00b-8e136c5a4c23" containerName="ceilometer-notification-agent" Mar 09 19:09:04 crc kubenswrapper[4821]: E0309 19:09:04.346548 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7840ceab-38ab-461f-b00b-8e136c5a4c23" containerName="ceilometer-central-agent" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.346555 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="7840ceab-38ab-461f-b00b-8e136c5a4c23" containerName="ceilometer-central-agent" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.346759 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="7840ceab-38ab-461f-b00b-8e136c5a4c23" containerName="proxy-httpd" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.346778 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="7840ceab-38ab-461f-b00b-8e136c5a4c23" containerName="ceilometer-notification-agent" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.346788 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="b64e7401-cb9d-41b0-bdd2-59b43c383583" containerName="watcher-kuttl-api-log" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.346800 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="5681c639-9924-4f5f-9a5d-eb460597b7e0" containerName="mariadb-account-delete" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.346812 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="7840ceab-38ab-461f-b00b-8e136c5a4c23" containerName="ceilometer-central-agent" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.346819 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="7840ceab-38ab-461f-b00b-8e136c5a4c23" containerName="sg-core" Mar 
09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.346832 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="b64e7401-cb9d-41b0-bdd2-59b43c383583" containerName="watcher-api" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.346816 4821 scope.go:117] "RemoveContainer" containerID="e15c96b64bb48f882c41730d00e8aa3af3544bfd00325746c62151d9885abf2b" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.349659 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.356754 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.357132 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.357438 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.360953 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.396572 4821 scope.go:117] "RemoveContainer" containerID="1e08f3aeeda5492de08ae299d12cc3cb14b33d0c9f79cf31f77c8956ff6e8f50" Mar 09 19:09:04 crc kubenswrapper[4821]: E0309 19:09:04.399256 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e08f3aeeda5492de08ae299d12cc3cb14b33d0c9f79cf31f77c8956ff6e8f50\": container with ID starting with 1e08f3aeeda5492de08ae299d12cc3cb14b33d0c9f79cf31f77c8956ff6e8f50 not found: ID does not exist" containerID="1e08f3aeeda5492de08ae299d12cc3cb14b33d0c9f79cf31f77c8956ff6e8f50" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.399309 4821 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e08f3aeeda5492de08ae299d12cc3cb14b33d0c9f79cf31f77c8956ff6e8f50"} err="failed to get container status \"1e08f3aeeda5492de08ae299d12cc3cb14b33d0c9f79cf31f77c8956ff6e8f50\": rpc error: code = NotFound desc = could not find container \"1e08f3aeeda5492de08ae299d12cc3cb14b33d0c9f79cf31f77c8956ff6e8f50\": container with ID starting with 1e08f3aeeda5492de08ae299d12cc3cb14b33d0c9f79cf31f77c8956ff6e8f50 not found: ID does not exist" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.399356 4821 scope.go:117] "RemoveContainer" containerID="780d07299ff35ded95d38056830d5b3f1938a652593eb4a2d766026e9d3f031b" Mar 09 19:09:04 crc kubenswrapper[4821]: E0309 19:09:04.403367 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"780d07299ff35ded95d38056830d5b3f1938a652593eb4a2d766026e9d3f031b\": container with ID starting with 780d07299ff35ded95d38056830d5b3f1938a652593eb4a2d766026e9d3f031b not found: ID does not exist" containerID="780d07299ff35ded95d38056830d5b3f1938a652593eb4a2d766026e9d3f031b" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.403416 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"780d07299ff35ded95d38056830d5b3f1938a652593eb4a2d766026e9d3f031b"} err="failed to get container status \"780d07299ff35ded95d38056830d5b3f1938a652593eb4a2d766026e9d3f031b\": rpc error: code = NotFound desc = could not find container \"780d07299ff35ded95d38056830d5b3f1938a652593eb4a2d766026e9d3f031b\": container with ID starting with 780d07299ff35ded95d38056830d5b3f1938a652593eb4a2d766026e9d3f031b not found: ID does not exist" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.403446 4821 scope.go:117] "RemoveContainer" containerID="3f37668c08e2eff16bda56baddf0494fae19149a324519ef1aa5e0f62912d9b0" Mar 09 19:09:04 crc kubenswrapper[4821]: E0309 19:09:04.403852 4821 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f37668c08e2eff16bda56baddf0494fae19149a324519ef1aa5e0f62912d9b0\": container with ID starting with 3f37668c08e2eff16bda56baddf0494fae19149a324519ef1aa5e0f62912d9b0 not found: ID does not exist" containerID="3f37668c08e2eff16bda56baddf0494fae19149a324519ef1aa5e0f62912d9b0" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.403881 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f37668c08e2eff16bda56baddf0494fae19149a324519ef1aa5e0f62912d9b0"} err="failed to get container status \"3f37668c08e2eff16bda56baddf0494fae19149a324519ef1aa5e0f62912d9b0\": rpc error: code = NotFound desc = could not find container \"3f37668c08e2eff16bda56baddf0494fae19149a324519ef1aa5e0f62912d9b0\": container with ID starting with 3f37668c08e2eff16bda56baddf0494fae19149a324519ef1aa5e0f62912d9b0 not found: ID does not exist" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.403900 4821 scope.go:117] "RemoveContainer" containerID="e15c96b64bb48f882c41730d00e8aa3af3544bfd00325746c62151d9885abf2b" Mar 09 19:09:04 crc kubenswrapper[4821]: E0309 19:09:04.404114 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e15c96b64bb48f882c41730d00e8aa3af3544bfd00325746c62151d9885abf2b\": container with ID starting with e15c96b64bb48f882c41730d00e8aa3af3544bfd00325746c62151d9885abf2b not found: ID does not exist" containerID="e15c96b64bb48f882c41730d00e8aa3af3544bfd00325746c62151d9885abf2b" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.404137 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e15c96b64bb48f882c41730d00e8aa3af3544bfd00325746c62151d9885abf2b"} err="failed to get container status \"e15c96b64bb48f882c41730d00e8aa3af3544bfd00325746c62151d9885abf2b\": rpc error: code = NotFound desc = could 
not find container \"e15c96b64bb48f882c41730d00e8aa3af3544bfd00325746c62151d9885abf2b\": container with ID starting with e15c96b64bb48f882c41730d00e8aa3af3544bfd00325746c62151d9885abf2b not found: ID does not exist" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.404155 4821 scope.go:117] "RemoveContainer" containerID="1e08f3aeeda5492de08ae299d12cc3cb14b33d0c9f79cf31f77c8956ff6e8f50" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.404391 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e08f3aeeda5492de08ae299d12cc3cb14b33d0c9f79cf31f77c8956ff6e8f50"} err="failed to get container status \"1e08f3aeeda5492de08ae299d12cc3cb14b33d0c9f79cf31f77c8956ff6e8f50\": rpc error: code = NotFound desc = could not find container \"1e08f3aeeda5492de08ae299d12cc3cb14b33d0c9f79cf31f77c8956ff6e8f50\": container with ID starting with 1e08f3aeeda5492de08ae299d12cc3cb14b33d0c9f79cf31f77c8956ff6e8f50 not found: ID does not exist" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.404414 4821 scope.go:117] "RemoveContainer" containerID="780d07299ff35ded95d38056830d5b3f1938a652593eb4a2d766026e9d3f031b" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.404622 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"780d07299ff35ded95d38056830d5b3f1938a652593eb4a2d766026e9d3f031b"} err="failed to get container status \"780d07299ff35ded95d38056830d5b3f1938a652593eb4a2d766026e9d3f031b\": rpc error: code = NotFound desc = could not find container \"780d07299ff35ded95d38056830d5b3f1938a652593eb4a2d766026e9d3f031b\": container with ID starting with 780d07299ff35ded95d38056830d5b3f1938a652593eb4a2d766026e9d3f031b not found: ID does not exist" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.404642 4821 scope.go:117] "RemoveContainer" containerID="3f37668c08e2eff16bda56baddf0494fae19149a324519ef1aa5e0f62912d9b0" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 
19:09:04.404951 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f37668c08e2eff16bda56baddf0494fae19149a324519ef1aa5e0f62912d9b0"} err="failed to get container status \"3f37668c08e2eff16bda56baddf0494fae19149a324519ef1aa5e0f62912d9b0\": rpc error: code = NotFound desc = could not find container \"3f37668c08e2eff16bda56baddf0494fae19149a324519ef1aa5e0f62912d9b0\": container with ID starting with 3f37668c08e2eff16bda56baddf0494fae19149a324519ef1aa5e0f62912d9b0 not found: ID does not exist" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.404972 4821 scope.go:117] "RemoveContainer" containerID="e15c96b64bb48f882c41730d00e8aa3af3544bfd00325746c62151d9885abf2b" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.405159 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e15c96b64bb48f882c41730d00e8aa3af3544bfd00325746c62151d9885abf2b"} err="failed to get container status \"e15c96b64bb48f882c41730d00e8aa3af3544bfd00325746c62151d9885abf2b\": rpc error: code = NotFound desc = could not find container \"e15c96b64bb48f882c41730d00e8aa3af3544bfd00325746c62151d9885abf2b\": container with ID starting with e15c96b64bb48f882c41730d00e8aa3af3544bfd00325746c62151d9885abf2b not found: ID does not exist" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.405183 4821 scope.go:117] "RemoveContainer" containerID="1e08f3aeeda5492de08ae299d12cc3cb14b33d0c9f79cf31f77c8956ff6e8f50" Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.405409 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e08f3aeeda5492de08ae299d12cc3cb14b33d0c9f79cf31f77c8956ff6e8f50"} err="failed to get container status \"1e08f3aeeda5492de08ae299d12cc3cb14b33d0c9f79cf31f77c8956ff6e8f50\": rpc error: code = NotFound desc = could not find container \"1e08f3aeeda5492de08ae299d12cc3cb14b33d0c9f79cf31f77c8956ff6e8f50\": container with ID starting with 
1e08f3aeeda5492de08ae299d12cc3cb14b33d0c9f79cf31f77c8956ff6e8f50 not found: ID does not exist"
Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.405435 4821 scope.go:117] "RemoveContainer" containerID="780d07299ff35ded95d38056830d5b3f1938a652593eb4a2d766026e9d3f031b"
Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.405807 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"780d07299ff35ded95d38056830d5b3f1938a652593eb4a2d766026e9d3f031b"} err="failed to get container status \"780d07299ff35ded95d38056830d5b3f1938a652593eb4a2d766026e9d3f031b\": rpc error: code = NotFound desc = could not find container \"780d07299ff35ded95d38056830d5b3f1938a652593eb4a2d766026e9d3f031b\": container with ID starting with 780d07299ff35ded95d38056830d5b3f1938a652593eb4a2d766026e9d3f031b not found: ID does not exist"
Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.405831 4821 scope.go:117] "RemoveContainer" containerID="3f37668c08e2eff16bda56baddf0494fae19149a324519ef1aa5e0f62912d9b0"
Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.406190 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f37668c08e2eff16bda56baddf0494fae19149a324519ef1aa5e0f62912d9b0"} err="failed to get container status \"3f37668c08e2eff16bda56baddf0494fae19149a324519ef1aa5e0f62912d9b0\": rpc error: code = NotFound desc = could not find container \"3f37668c08e2eff16bda56baddf0494fae19149a324519ef1aa5e0f62912d9b0\": container with ID starting with 3f37668c08e2eff16bda56baddf0494fae19149a324519ef1aa5e0f62912d9b0 not found: ID does not exist"
Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.406221 4821 scope.go:117] "RemoveContainer" containerID="e15c96b64bb48f882c41730d00e8aa3af3544bfd00325746c62151d9885abf2b"
Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.407044 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e15c96b64bb48f882c41730d00e8aa3af3544bfd00325746c62151d9885abf2b"} err="failed to get container status \"e15c96b64bb48f882c41730d00e8aa3af3544bfd00325746c62151d9885abf2b\": rpc error: code = NotFound desc = could not find container \"e15c96b64bb48f882c41730d00e8aa3af3544bfd00325746c62151d9885abf2b\": container with ID starting with e15c96b64bb48f882c41730d00e8aa3af3544bfd00325746c62151d9885abf2b not found: ID does not exist"
Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.536872 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6f5de90-1d37-46e6-9092-89b35c6dce9c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.536930 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6f5de90-1d37-46e6-9092-89b35c6dce9c-config-data\") pod \"ceilometer-0\" (UID: \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.536981 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6f5de90-1d37-46e6-9092-89b35c6dce9c-log-httpd\") pod \"ceilometer-0\" (UID: \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.537038 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6f5de90-1d37-46e6-9092-89b35c6dce9c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.537073 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6f5de90-1d37-46e6-9092-89b35c6dce9c-run-httpd\") pod \"ceilometer-0\" (UID: \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.537100 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvwcx\" (UniqueName: \"kubernetes.io/projected/c6f5de90-1d37-46e6-9092-89b35c6dce9c-kube-api-access-lvwcx\") pod \"ceilometer-0\" (UID: \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.537121 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6f5de90-1d37-46e6-9092-89b35c6dce9c-scripts\") pod \"ceilometer-0\" (UID: \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.537178 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6f5de90-1d37-46e6-9092-89b35c6dce9c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.639253 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6f5de90-1d37-46e6-9092-89b35c6dce9c-config-data\") pod \"ceilometer-0\" (UID: \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.639337 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6f5de90-1d37-46e6-9092-89b35c6dce9c-log-httpd\") pod \"ceilometer-0\" (UID: \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.639400 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6f5de90-1d37-46e6-9092-89b35c6dce9c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.639445 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6f5de90-1d37-46e6-9092-89b35c6dce9c-run-httpd\") pod \"ceilometer-0\" (UID: \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.639472 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvwcx\" (UniqueName: \"kubernetes.io/projected/c6f5de90-1d37-46e6-9092-89b35c6dce9c-kube-api-access-lvwcx\") pod \"ceilometer-0\" (UID: \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.639492 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6f5de90-1d37-46e6-9092-89b35c6dce9c-scripts\") pod \"ceilometer-0\" (UID: \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.639538 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6f5de90-1d37-46e6-9092-89b35c6dce9c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.639556 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6f5de90-1d37-46e6-9092-89b35c6dce9c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.640260 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6f5de90-1d37-46e6-9092-89b35c6dce9c-run-httpd\") pod \"ceilometer-0\" (UID: \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.641004 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6f5de90-1d37-46e6-9092-89b35c6dce9c-log-httpd\") pod \"ceilometer-0\" (UID: \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.646926 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6f5de90-1d37-46e6-9092-89b35c6dce9c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.648485 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6f5de90-1d37-46e6-9092-89b35c6dce9c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.648673 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6f5de90-1d37-46e6-9092-89b35c6dce9c-scripts\") pod \"ceilometer-0\" (UID: \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.648859 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6f5de90-1d37-46e6-9092-89b35c6dce9c-config-data\") pod \"ceilometer-0\" (UID: \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.649596 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6f5de90-1d37-46e6-9092-89b35c6dce9c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.665225 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvwcx\" (UniqueName: \"kubernetes.io/projected/c6f5de90-1d37-46e6-9092-89b35c6dce9c-kube-api-access-lvwcx\") pod \"ceilometer-0\" (UID: \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:04 crc kubenswrapper[4821]: I0309 19:09:04.670389 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:04 crc kubenswrapper[4821]: E0309 19:09:04.842423 4821 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found
Mar 09 19:09:04 crc kubenswrapper[4821]: E0309 19:09:04.842710 4821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52b89947-d017-4d12-9c39-9b86f2e38097-config-data podName:52b89947-d017-4d12-9c39-9b86f2e38097 nodeName:}" failed. No retries permitted until 2026-03-09 19:09:08.84269486 +0000 UTC m=+2686.004070716 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/52b89947-d017-4d12-9c39-9b86f2e38097-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "52b89947-d017-4d12-9c39-9b86f2e38097") : secret "watcher-kuttl-decision-engine-config-data" not found
Mar 09 19:09:05 crc kubenswrapper[4821]: I0309 19:09:05.114434 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:09:05 crc kubenswrapper[4821]: W0309 19:09:05.125943 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6f5de90_1d37_46e6_9092_89b35c6dce9c.slice/crio-3bb1a391165383681887e2abfae6e04ede563ea252ba975289a3467504277e1d WatchSource:0}: Error finding container 3bb1a391165383681887e2abfae6e04ede563ea252ba975289a3467504277e1d: Status 404 returned error can't find the container with id 3bb1a391165383681887e2abfae6e04ede563ea252ba975289a3467504277e1d
Mar 09 19:09:05 crc kubenswrapper[4821]: I0309 19:09:05.290741 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c6f5de90-1d37-46e6-9092-89b35c6dce9c","Type":"ContainerStarted","Data":"3bb1a391165383681887e2abfae6e04ede563ea252ba975289a3467504277e1d"}
Mar 09 19:09:05 crc kubenswrapper[4821]: I0309 19:09:05.578950 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7840ceab-38ab-461f-b00b-8e136c5a4c23" path="/var/lib/kubelet/pods/7840ceab-38ab-461f-b00b-8e136c5a4c23/volumes"
Mar 09 19:09:05 crc kubenswrapper[4821]: I0309 19:09:05.686356 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher3f6e-account-delete-ktqzl"]
Mar 09 19:09:05 crc kubenswrapper[4821]: I0309 19:09:05.693614 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-nsd9f"]
Mar 09 19:09:05 crc kubenswrapper[4821]: I0309 19:09:05.700631 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-3f6e-account-create-update-fbhkz"]
Mar 09 19:09:05 crc kubenswrapper[4821]: I0309 19:09:05.706663 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-nsd9f"]
Mar 09 19:09:05 crc kubenswrapper[4821]: I0309 19:09:05.716825 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher3f6e-account-delete-ktqzl"]
Mar 09 19:09:05 crc kubenswrapper[4821]: I0309 19:09:05.725988 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-3f6e-account-create-update-fbhkz"]
Mar 09 19:09:06 crc kubenswrapper[4821]: I0309 19:09:06.300481 4821 generic.go:334] "Generic (PLEG): container finished" podID="52b89947-d017-4d12-9c39-9b86f2e38097" containerID="a5fd93d11590d5914a5f9189901db21b2b71b18687b29750a4eba8245492cffd" exitCode=0
Mar 09 19:09:06 crc kubenswrapper[4821]: I0309 19:09:06.300559 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"52b89947-d017-4d12-9c39-9b86f2e38097","Type":"ContainerDied","Data":"a5fd93d11590d5914a5f9189901db21b2b71b18687b29750a4eba8245492cffd"}
Mar 09 19:09:06 crc kubenswrapper[4821]: I0309 19:09:06.302258 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c6f5de90-1d37-46e6-9092-89b35c6dce9c","Type":"ContainerStarted","Data":"974366854c8821bd17d233956e156092a187419448d3a66b88f2c7191a3baac3"}
Mar 09 19:09:06 crc kubenswrapper[4821]: I0309 19:09:06.653108 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:09:06 crc kubenswrapper[4821]: I0309 19:09:06.778920 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/52b89947-d017-4d12-9c39-9b86f2e38097-custom-prometheus-ca\") pod \"52b89947-d017-4d12-9c39-9b86f2e38097\" (UID: \"52b89947-d017-4d12-9c39-9b86f2e38097\") "
Mar 09 19:09:06 crc kubenswrapper[4821]: I0309 19:09:06.779312 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52b89947-d017-4d12-9c39-9b86f2e38097-config-data\") pod \"52b89947-d017-4d12-9c39-9b86f2e38097\" (UID: \"52b89947-d017-4d12-9c39-9b86f2e38097\") "
Mar 09 19:09:06 crc kubenswrapper[4821]: I0309 19:09:06.779380 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/52b89947-d017-4d12-9c39-9b86f2e38097-cert-memcached-mtls\") pod \"52b89947-d017-4d12-9c39-9b86f2e38097\" (UID: \"52b89947-d017-4d12-9c39-9b86f2e38097\") "
Mar 09 19:09:06 crc kubenswrapper[4821]: I0309 19:09:06.779467 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52b89947-d017-4d12-9c39-9b86f2e38097-logs\") pod \"52b89947-d017-4d12-9c39-9b86f2e38097\" (UID: \"52b89947-d017-4d12-9c39-9b86f2e38097\") "
Mar 09 19:09:06 crc kubenswrapper[4821]: I0309 19:09:06.779531 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b89947-d017-4d12-9c39-9b86f2e38097-combined-ca-bundle\") pod \"52b89947-d017-4d12-9c39-9b86f2e38097\" (UID: \"52b89947-d017-4d12-9c39-9b86f2e38097\") "
Mar 09 19:09:06 crc kubenswrapper[4821]: I0309 19:09:06.779586 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crbqq\" (UniqueName: \"kubernetes.io/projected/52b89947-d017-4d12-9c39-9b86f2e38097-kube-api-access-crbqq\") pod \"52b89947-d017-4d12-9c39-9b86f2e38097\" (UID: \"52b89947-d017-4d12-9c39-9b86f2e38097\") "
Mar 09 19:09:06 crc kubenswrapper[4821]: I0309 19:09:06.779919 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52b89947-d017-4d12-9c39-9b86f2e38097-logs" (OuterVolumeSpecName: "logs") pod "52b89947-d017-4d12-9c39-9b86f2e38097" (UID: "52b89947-d017-4d12-9c39-9b86f2e38097"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:09:06 crc kubenswrapper[4821]: I0309 19:09:06.784567 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52b89947-d017-4d12-9c39-9b86f2e38097-kube-api-access-crbqq" (OuterVolumeSpecName: "kube-api-access-crbqq") pod "52b89947-d017-4d12-9c39-9b86f2e38097" (UID: "52b89947-d017-4d12-9c39-9b86f2e38097"). InnerVolumeSpecName "kube-api-access-crbqq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:09:06 crc kubenswrapper[4821]: I0309 19:09:06.805537 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52b89947-d017-4d12-9c39-9b86f2e38097-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "52b89947-d017-4d12-9c39-9b86f2e38097" (UID: "52b89947-d017-4d12-9c39-9b86f2e38097"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:09:06 crc kubenswrapper[4821]: I0309 19:09:06.828598 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52b89947-d017-4d12-9c39-9b86f2e38097-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52b89947-d017-4d12-9c39-9b86f2e38097" (UID: "52b89947-d017-4d12-9c39-9b86f2e38097"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:09:06 crc kubenswrapper[4821]: I0309 19:09:06.832083 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52b89947-d017-4d12-9c39-9b86f2e38097-config-data" (OuterVolumeSpecName: "config-data") pod "52b89947-d017-4d12-9c39-9b86f2e38097" (UID: "52b89947-d017-4d12-9c39-9b86f2e38097"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:09:06 crc kubenswrapper[4821]: I0309 19:09:06.845883 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52b89947-d017-4d12-9c39-9b86f2e38097-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "52b89947-d017-4d12-9c39-9b86f2e38097" (UID: "52b89947-d017-4d12-9c39-9b86f2e38097"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:09:06 crc kubenswrapper[4821]: I0309 19:09:06.881106 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52b89947-d017-4d12-9c39-9b86f2e38097-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 19:09:06 crc kubenswrapper[4821]: I0309 19:09:06.881157 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crbqq\" (UniqueName: \"kubernetes.io/projected/52b89947-d017-4d12-9c39-9b86f2e38097-kube-api-access-crbqq\") on node \"crc\" DevicePath \"\""
Mar 09 19:09:06 crc kubenswrapper[4821]: I0309 19:09:06.881170 4821 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/52b89947-d017-4d12-9c39-9b86f2e38097-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Mar 09 19:09:06 crc kubenswrapper[4821]: I0309 19:09:06.881180 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52b89947-d017-4d12-9c39-9b86f2e38097-config-data\") on node \"crc\" DevicePath \"\""
Mar 09 19:09:06 crc kubenswrapper[4821]: I0309 19:09:06.881189 4821 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/52b89947-d017-4d12-9c39-9b86f2e38097-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Mar 09 19:09:06 crc kubenswrapper[4821]: I0309 19:09:06.881197 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52b89947-d017-4d12-9c39-9b86f2e38097-logs\") on node \"crc\" DevicePath \"\""
Mar 09 19:09:07 crc kubenswrapper[4821]: I0309 19:09:07.334260 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c6f5de90-1d37-46e6-9092-89b35c6dce9c","Type":"ContainerStarted","Data":"3ec1f1f452fc850609ca2598615aead194489a6da0bed980239c36031f0aef18"}
Mar 09 19:09:07 crc kubenswrapper[4821]: I0309 19:09:07.336494 4821 generic.go:334] "Generic (PLEG): container finished" podID="1bf3043c-0996-4743-9d7c-059b18df0896" containerID="6d768dc1227dbe38216059ab13417e797a313f263c97db1d18fb6a2e79455f23" exitCode=0
Mar 09 19:09:07 crc kubenswrapper[4821]: I0309 19:09:07.336558 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"1bf3043c-0996-4743-9d7c-059b18df0896","Type":"ContainerDied","Data":"6d768dc1227dbe38216059ab13417e797a313f263c97db1d18fb6a2e79455f23"}
Mar 09 19:09:07 crc kubenswrapper[4821]: I0309 19:09:07.338365 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"52b89947-d017-4d12-9c39-9b86f2e38097","Type":"ContainerDied","Data":"769643bbf5f2912cd03f4d3d424c5d248074686ff03fcb0de700c7f7fe89c8e4"}
Mar 09 19:09:07 crc kubenswrapper[4821]: I0309 19:09:07.338408 4821 scope.go:117] "RemoveContainer" containerID="a5fd93d11590d5914a5f9189901db21b2b71b18687b29750a4eba8245492cffd"
Mar 09 19:09:07 crc kubenswrapper[4821]: I0309 19:09:07.338544 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Mar 09 19:09:07 crc kubenswrapper[4821]: I0309 19:09:07.394244 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Mar 09 19:09:07 crc kubenswrapper[4821]: I0309 19:09:07.415157 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Mar 09 19:09:07 crc kubenswrapper[4821]: I0309 19:09:07.592219 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52b89947-d017-4d12-9c39-9b86f2e38097" path="/var/lib/kubelet/pods/52b89947-d017-4d12-9c39-9b86f2e38097/volumes"
Mar 09 19:09:07 crc kubenswrapper[4821]: I0309 19:09:07.592748 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5681c639-9924-4f5f-9a5d-eb460597b7e0" path="/var/lib/kubelet/pods/5681c639-9924-4f5f-9a5d-eb460597b7e0/volumes"
Mar 09 19:09:07 crc kubenswrapper[4821]: I0309 19:09:07.593244 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c126569-ec50-4fa4-b063-1ddad5932f62" path="/var/lib/kubelet/pods/5c126569-ec50-4fa4-b063-1ddad5932f62/volumes"
Mar 09 19:09:07 crc kubenswrapper[4821]: I0309 19:09:07.600064 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90e1deb8-31e8-436b-a590-e4befb1e61da" path="/var/lib/kubelet/pods/90e1deb8-31e8-436b-a590-e4befb1e61da/volumes"
Mar 09 19:09:07 crc kubenswrapper[4821]: I0309 19:09:07.608258 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:09:07 crc kubenswrapper[4821]: I0309 19:09:07.700959 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1bf3043c-0996-4743-9d7c-059b18df0896-cert-memcached-mtls\") pod \"1bf3043c-0996-4743-9d7c-059b18df0896\" (UID: \"1bf3043c-0996-4743-9d7c-059b18df0896\") "
Mar 09 19:09:07 crc kubenswrapper[4821]: I0309 19:09:07.701101 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bf3043c-0996-4743-9d7c-059b18df0896-logs\") pod \"1bf3043c-0996-4743-9d7c-059b18df0896\" (UID: \"1bf3043c-0996-4743-9d7c-059b18df0896\") "
Mar 09 19:09:07 crc kubenswrapper[4821]: I0309 19:09:07.701143 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf3043c-0996-4743-9d7c-059b18df0896-config-data\") pod \"1bf3043c-0996-4743-9d7c-059b18df0896\" (UID: \"1bf3043c-0996-4743-9d7c-059b18df0896\") "
Mar 09 19:09:07 crc kubenswrapper[4821]: I0309 19:09:07.701199 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf3043c-0996-4743-9d7c-059b18df0896-combined-ca-bundle\") pod \"1bf3043c-0996-4743-9d7c-059b18df0896\" (UID: \"1bf3043c-0996-4743-9d7c-059b18df0896\") "
Mar 09 19:09:07 crc kubenswrapper[4821]: I0309 19:09:07.701277 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmkrx\" (UniqueName: \"kubernetes.io/projected/1bf3043c-0996-4743-9d7c-059b18df0896-kube-api-access-vmkrx\") pod \"1bf3043c-0996-4743-9d7c-059b18df0896\" (UID: \"1bf3043c-0996-4743-9d7c-059b18df0896\") "
Mar 09 19:09:07 crc kubenswrapper[4821]: I0309 19:09:07.707436 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf3043c-0996-4743-9d7c-059b18df0896-kube-api-access-vmkrx" (OuterVolumeSpecName: "kube-api-access-vmkrx") pod "1bf3043c-0996-4743-9d7c-059b18df0896" (UID: "1bf3043c-0996-4743-9d7c-059b18df0896"). InnerVolumeSpecName "kube-api-access-vmkrx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:09:07 crc kubenswrapper[4821]: I0309 19:09:07.712020 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bf3043c-0996-4743-9d7c-059b18df0896-logs" (OuterVolumeSpecName: "logs") pod "1bf3043c-0996-4743-9d7c-059b18df0896" (UID: "1bf3043c-0996-4743-9d7c-059b18df0896"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:09:07 crc kubenswrapper[4821]: I0309 19:09:07.745406 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf3043c-0996-4743-9d7c-059b18df0896-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1bf3043c-0996-4743-9d7c-059b18df0896" (UID: "1bf3043c-0996-4743-9d7c-059b18df0896"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:09:07 crc kubenswrapper[4821]: I0309 19:09:07.755275 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf3043c-0996-4743-9d7c-059b18df0896-config-data" (OuterVolumeSpecName: "config-data") pod "1bf3043c-0996-4743-9d7c-059b18df0896" (UID: "1bf3043c-0996-4743-9d7c-059b18df0896"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:09:07 crc kubenswrapper[4821]: I0309 19:09:07.779605 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf3043c-0996-4743-9d7c-059b18df0896-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "1bf3043c-0996-4743-9d7c-059b18df0896" (UID: "1bf3043c-0996-4743-9d7c-059b18df0896"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:09:07 crc kubenswrapper[4821]: I0309 19:09:07.804357 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bf3043c-0996-4743-9d7c-059b18df0896-logs\") on node \"crc\" DevicePath \"\""
Mar 09 19:09:07 crc kubenswrapper[4821]: I0309 19:09:07.804396 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf3043c-0996-4743-9d7c-059b18df0896-config-data\") on node \"crc\" DevicePath \"\""
Mar 09 19:09:07 crc kubenswrapper[4821]: I0309 19:09:07.804409 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf3043c-0996-4743-9d7c-059b18df0896-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 19:09:07 crc kubenswrapper[4821]: I0309 19:09:07.804421 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmkrx\" (UniqueName: \"kubernetes.io/projected/1bf3043c-0996-4743-9d7c-059b18df0896-kube-api-access-vmkrx\") on node \"crc\" DevicePath \"\""
Mar 09 19:09:07 crc kubenswrapper[4821]: I0309 19:09:07.804432 4821 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1bf3043c-0996-4743-9d7c-059b18df0896-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Mar 09 19:09:08 crc kubenswrapper[4821]: I0309 19:09:08.351132 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c6f5de90-1d37-46e6-9092-89b35c6dce9c","Type":"ContainerStarted","Data":"e1f905b108ca545f4199903d6c7592c89e0454d2f9d302ddfd0a777cdb3ddfea"}
Mar 09 19:09:08 crc kubenswrapper[4821]: I0309 19:09:08.352872 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"1bf3043c-0996-4743-9d7c-059b18df0896","Type":"ContainerDied","Data":"931953a041b49e2cff6c0324dbca22d4ec18a8980d6ee2b3ff16a9a589c1b205"}
Mar 09 19:09:08 crc kubenswrapper[4821]: I0309 19:09:08.352934 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Mar 09 19:09:08 crc kubenswrapper[4821]: I0309 19:09:08.352944 4821 scope.go:117] "RemoveContainer" containerID="6d768dc1227dbe38216059ab13417e797a313f263c97db1d18fb6a2e79455f23"
Mar 09 19:09:08 crc kubenswrapper[4821]: I0309 19:09:08.380974 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Mar 09 19:09:08 crc kubenswrapper[4821]: I0309 19:09:08.390682 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Mar 09 19:09:08 crc kubenswrapper[4821]: I0309 19:09:08.799583 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-d6s7z"]
Mar 09 19:09:08 crc kubenswrapper[4821]: E0309 19:09:08.799967 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52b89947-d017-4d12-9c39-9b86f2e38097" containerName="watcher-decision-engine"
Mar 09 19:09:08 crc kubenswrapper[4821]: I0309 19:09:08.799983 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="52b89947-d017-4d12-9c39-9b86f2e38097" containerName="watcher-decision-engine"
Mar 09 19:09:08 crc kubenswrapper[4821]: E0309 19:09:08.800011 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf3043c-0996-4743-9d7c-059b18df0896" containerName="watcher-applier"
Mar 09 19:09:08 crc kubenswrapper[4821]: I0309 19:09:08.800018 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf3043c-0996-4743-9d7c-059b18df0896" containerName="watcher-applier"
Mar 09 19:09:08 crc kubenswrapper[4821]: I0309 19:09:08.800207 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="52b89947-d017-4d12-9c39-9b86f2e38097" containerName="watcher-decision-engine"
Mar 09 19:09:08 crc kubenswrapper[4821]: I0309 19:09:08.800228 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bf3043c-0996-4743-9d7c-059b18df0896" containerName="watcher-applier"
Mar 09 19:09:08 crc kubenswrapper[4821]: I0309 19:09:08.800915 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-d6s7z"
Mar 09 19:09:08 crc kubenswrapper[4821]: I0309 19:09:08.811529 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-d6s7z"]
Mar 09 19:09:08 crc kubenswrapper[4821]: I0309 19:09:08.825361 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-0f81-account-create-update-rtxq5"]
Mar 09 19:09:08 crc kubenswrapper[4821]: I0309 19:09:08.826696 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-0f81-account-create-update-rtxq5"
Mar 09 19:09:08 crc kubenswrapper[4821]: I0309 19:09:08.832206 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret"
Mar 09 19:09:08 crc kubenswrapper[4821]: I0309 19:09:08.906160 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-0f81-account-create-update-rtxq5"]
Mar 09 19:09:08 crc kubenswrapper[4821]: I0309 19:09:08.920665 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4716ce55-8666-4ad9-866b-e2f3f88cd5e7-operator-scripts\") pod \"watcher-db-create-d6s7z\" (UID: \"4716ce55-8666-4ad9-866b-e2f3f88cd5e7\") " pod="watcher-kuttl-default/watcher-db-create-d6s7z"
Mar 09 19:09:08 crc kubenswrapper[4821]: I0309 19:09:08.920696 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpfdr\" (UniqueName: \"kubernetes.io/projected/4716ce55-8666-4ad9-866b-e2f3f88cd5e7-kube-api-access-mpfdr\") pod \"watcher-db-create-d6s7z\" (UID: \"4716ce55-8666-4ad9-866b-e2f3f88cd5e7\") " pod="watcher-kuttl-default/watcher-db-create-d6s7z"
Mar 09 19:09:08 crc kubenswrapper[4821]: I0309 19:09:08.920837 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fsl8\" (UniqueName: \"kubernetes.io/projected/55a3b77b-71a0-4f39-8356-f7caa43d72a4-kube-api-access-4fsl8\") pod \"watcher-0f81-account-create-update-rtxq5\" (UID: \"55a3b77b-71a0-4f39-8356-f7caa43d72a4\") " pod="watcher-kuttl-default/watcher-0f81-account-create-update-rtxq5"
Mar 09 19:09:08 crc kubenswrapper[4821]: I0309 19:09:08.920946 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55a3b77b-71a0-4f39-8356-f7caa43d72a4-operator-scripts\") pod \"watcher-0f81-account-create-update-rtxq5\" (UID: \"55a3b77b-71a0-4f39-8356-f7caa43d72a4\") " pod="watcher-kuttl-default/watcher-0f81-account-create-update-rtxq5"
Mar 09 19:09:09 crc kubenswrapper[4821]: I0309 19:09:09.023574 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4716ce55-8666-4ad9-866b-e2f3f88cd5e7-operator-scripts\") pod \"watcher-db-create-d6s7z\" (UID: \"4716ce55-8666-4ad9-866b-e2f3f88cd5e7\") " pod="watcher-kuttl-default/watcher-db-create-d6s7z"
Mar 09 19:09:09 crc kubenswrapper[4821]: I0309 19:09:09.023613 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpfdr\" (UniqueName: \"kubernetes.io/projected/4716ce55-8666-4ad9-866b-e2f3f88cd5e7-kube-api-access-mpfdr\") pod \"watcher-db-create-d6s7z\" (UID: \"4716ce55-8666-4ad9-866b-e2f3f88cd5e7\") " pod="watcher-kuttl-default/watcher-db-create-d6s7z"
Mar 09 19:09:09 crc kubenswrapper[4821]: I0309 19:09:09.023656 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fsl8\" (UniqueName: \"kubernetes.io/projected/55a3b77b-71a0-4f39-8356-f7caa43d72a4-kube-api-access-4fsl8\") pod \"watcher-0f81-account-create-update-rtxq5\" (UID: \"55a3b77b-71a0-4f39-8356-f7caa43d72a4\") " pod="watcher-kuttl-default/watcher-0f81-account-create-update-rtxq5"
Mar 09 19:09:09 crc kubenswrapper[4821]: I0309 19:09:09.023706 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55a3b77b-71a0-4f39-8356-f7caa43d72a4-operator-scripts\") pod \"watcher-0f81-account-create-update-rtxq5\" (UID: \"55a3b77b-71a0-4f39-8356-f7caa43d72a4\") " pod="watcher-kuttl-default/watcher-0f81-account-create-update-rtxq5"
Mar 09 19:09:09 crc kubenswrapper[4821]: I0309 19:09:09.024531 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55a3b77b-71a0-4f39-8356-f7caa43d72a4-operator-scripts\") pod \"watcher-0f81-account-create-update-rtxq5\" (UID: \"55a3b77b-71a0-4f39-8356-f7caa43d72a4\") " pod="watcher-kuttl-default/watcher-0f81-account-create-update-rtxq5"
Mar 09 19:09:09 crc kubenswrapper[4821]: I0309 19:09:09.024747 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4716ce55-8666-4ad9-866b-e2f3f88cd5e7-operator-scripts\") pod \"watcher-db-create-d6s7z\" (UID: \"4716ce55-8666-4ad9-866b-e2f3f88cd5e7\") " pod="watcher-kuttl-default/watcher-db-create-d6s7z"
Mar 09 19:09:09 crc kubenswrapper[4821]: I0309 19:09:09.044039 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpfdr\" (UniqueName: \"kubernetes.io/projected/4716ce55-8666-4ad9-866b-e2f3f88cd5e7-kube-api-access-mpfdr\") pod \"watcher-db-create-d6s7z\" (UID: \"4716ce55-8666-4ad9-866b-e2f3f88cd5e7\")
" pod="watcher-kuttl-default/watcher-db-create-d6s7z" Mar 09 19:09:09 crc kubenswrapper[4821]: I0309 19:09:09.045661 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fsl8\" (UniqueName: \"kubernetes.io/projected/55a3b77b-71a0-4f39-8356-f7caa43d72a4-kube-api-access-4fsl8\") pod \"watcher-0f81-account-create-update-rtxq5\" (UID: \"55a3b77b-71a0-4f39-8356-f7caa43d72a4\") " pod="watcher-kuttl-default/watcher-0f81-account-create-update-rtxq5" Mar 09 19:09:09 crc kubenswrapper[4821]: I0309 19:09:09.130005 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-d6s7z" Mar 09 19:09:09 crc kubenswrapper[4821]: I0309 19:09:09.163252 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-0f81-account-create-update-rtxq5" Mar 09 19:09:09 crc kubenswrapper[4821]: I0309 19:09:09.553697 4821 scope.go:117] "RemoveContainer" containerID="2e97656a0c5c8ccdf249fb34ee3c3f4cd8501de8fced67c6294f9be90f6444d8" Mar 09 19:09:09 crc kubenswrapper[4821]: E0309 19:09:09.554168 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 19:09:09 crc kubenswrapper[4821]: I0309 19:09:09.567246 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf3043c-0996-4743-9d7c-059b18df0896" path="/var/lib/kubelet/pods/1bf3043c-0996-4743-9d7c-059b18df0896/volumes" Mar 09 19:09:09 crc kubenswrapper[4821]: I0309 19:09:09.719060 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-d6s7z"] Mar 09 19:09:09 crc 
kubenswrapper[4821]: I0309 19:09:09.813157 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-0f81-account-create-update-rtxq5"] Mar 09 19:09:10 crc kubenswrapper[4821]: I0309 19:09:10.383607 4821 generic.go:334] "Generic (PLEG): container finished" podID="55a3b77b-71a0-4f39-8356-f7caa43d72a4" containerID="a4836d82ed6198a6ff42eeebdf325602696b7d790cfb876fb23cf281737e671f" exitCode=0 Mar 09 19:09:10 crc kubenswrapper[4821]: I0309 19:09:10.383694 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-0f81-account-create-update-rtxq5" event={"ID":"55a3b77b-71a0-4f39-8356-f7caa43d72a4","Type":"ContainerDied","Data":"a4836d82ed6198a6ff42eeebdf325602696b7d790cfb876fb23cf281737e671f"} Mar 09 19:09:10 crc kubenswrapper[4821]: I0309 19:09:10.383738 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-0f81-account-create-update-rtxq5" event={"ID":"55a3b77b-71a0-4f39-8356-f7caa43d72a4","Type":"ContainerStarted","Data":"b77f789d3b6464657cb7e9adcf5a9469b1141df3d9d7a123bbcf5af755efe7f6"} Mar 09 19:09:10 crc kubenswrapper[4821]: I0309 19:09:10.385510 4821 generic.go:334] "Generic (PLEG): container finished" podID="4716ce55-8666-4ad9-866b-e2f3f88cd5e7" containerID="9ad8f8cb25a5e57320f9803f8aea0e8eb977b4fd42e5561645b66fa71c87249a" exitCode=0 Mar 09 19:09:10 crc kubenswrapper[4821]: I0309 19:09:10.385653 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-d6s7z" event={"ID":"4716ce55-8666-4ad9-866b-e2f3f88cd5e7","Type":"ContainerDied","Data":"9ad8f8cb25a5e57320f9803f8aea0e8eb977b4fd42e5561645b66fa71c87249a"} Mar 09 19:09:10 crc kubenswrapper[4821]: I0309 19:09:10.385688 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-d6s7z" event={"ID":"4716ce55-8666-4ad9-866b-e2f3f88cd5e7","Type":"ContainerStarted","Data":"a03754fbdfd14fe157ce3c4574d7053e21c33d8d8993642e92b21821bc55ecd2"} Mar 
09 19:09:10 crc kubenswrapper[4821]: I0309 19:09:10.390246 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c6f5de90-1d37-46e6-9092-89b35c6dce9c","Type":"ContainerStarted","Data":"f97e4bf6575fc4f665b81de8ce8623d441931b3f3621ff33c7f62e93cf5ab791"} Mar 09 19:09:10 crc kubenswrapper[4821]: I0309 19:09:10.390624 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:10 crc kubenswrapper[4821]: I0309 19:09:10.437695 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.037314048 podStartE2EDuration="6.437674686s" podCreationTimestamp="2026-03-09 19:09:04 +0000 UTC" firstStartedPulling="2026-03-09 19:09:05.128903997 +0000 UTC m=+2682.290279853" lastFinishedPulling="2026-03-09 19:09:09.529264635 +0000 UTC m=+2686.690640491" observedRunningTime="2026-03-09 19:09:10.425861276 +0000 UTC m=+2687.587237172" watchObservedRunningTime="2026-03-09 19:09:10.437674686 +0000 UTC m=+2687.599050552" Mar 09 19:09:11 crc kubenswrapper[4821]: I0309 19:09:11.922703 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-d6s7z" Mar 09 19:09:11 crc kubenswrapper[4821]: I0309 19:09:11.928827 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-0f81-account-create-update-rtxq5" Mar 09 19:09:12 crc kubenswrapper[4821]: I0309 19:09:12.080480 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fsl8\" (UniqueName: \"kubernetes.io/projected/55a3b77b-71a0-4f39-8356-f7caa43d72a4-kube-api-access-4fsl8\") pod \"55a3b77b-71a0-4f39-8356-f7caa43d72a4\" (UID: \"55a3b77b-71a0-4f39-8356-f7caa43d72a4\") " Mar 09 19:09:12 crc kubenswrapper[4821]: I0309 19:09:12.080536 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55a3b77b-71a0-4f39-8356-f7caa43d72a4-operator-scripts\") pod \"55a3b77b-71a0-4f39-8356-f7caa43d72a4\" (UID: \"55a3b77b-71a0-4f39-8356-f7caa43d72a4\") " Mar 09 19:09:12 crc kubenswrapper[4821]: I0309 19:09:12.080561 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpfdr\" (UniqueName: \"kubernetes.io/projected/4716ce55-8666-4ad9-866b-e2f3f88cd5e7-kube-api-access-mpfdr\") pod \"4716ce55-8666-4ad9-866b-e2f3f88cd5e7\" (UID: \"4716ce55-8666-4ad9-866b-e2f3f88cd5e7\") " Mar 09 19:09:12 crc kubenswrapper[4821]: I0309 19:09:12.080656 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4716ce55-8666-4ad9-866b-e2f3f88cd5e7-operator-scripts\") pod \"4716ce55-8666-4ad9-866b-e2f3f88cd5e7\" (UID: \"4716ce55-8666-4ad9-866b-e2f3f88cd5e7\") " Mar 09 19:09:12 crc kubenswrapper[4821]: I0309 19:09:12.081903 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4716ce55-8666-4ad9-866b-e2f3f88cd5e7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4716ce55-8666-4ad9-866b-e2f3f88cd5e7" (UID: "4716ce55-8666-4ad9-866b-e2f3f88cd5e7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 19:09:12 crc kubenswrapper[4821]: I0309 19:09:12.083072 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55a3b77b-71a0-4f39-8356-f7caa43d72a4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "55a3b77b-71a0-4f39-8356-f7caa43d72a4" (UID: "55a3b77b-71a0-4f39-8356-f7caa43d72a4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 19:09:12 crc kubenswrapper[4821]: I0309 19:09:12.087237 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4716ce55-8666-4ad9-866b-e2f3f88cd5e7-kube-api-access-mpfdr" (OuterVolumeSpecName: "kube-api-access-mpfdr") pod "4716ce55-8666-4ad9-866b-e2f3f88cd5e7" (UID: "4716ce55-8666-4ad9-866b-e2f3f88cd5e7"). InnerVolumeSpecName "kube-api-access-mpfdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:09:12 crc kubenswrapper[4821]: I0309 19:09:12.090626 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55a3b77b-71a0-4f39-8356-f7caa43d72a4-kube-api-access-4fsl8" (OuterVolumeSpecName: "kube-api-access-4fsl8") pod "55a3b77b-71a0-4f39-8356-f7caa43d72a4" (UID: "55a3b77b-71a0-4f39-8356-f7caa43d72a4"). InnerVolumeSpecName "kube-api-access-4fsl8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:09:12 crc kubenswrapper[4821]: I0309 19:09:12.183515 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4716ce55-8666-4ad9-866b-e2f3f88cd5e7-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:12 crc kubenswrapper[4821]: I0309 19:09:12.183565 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fsl8\" (UniqueName: \"kubernetes.io/projected/55a3b77b-71a0-4f39-8356-f7caa43d72a4-kube-api-access-4fsl8\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:12 crc kubenswrapper[4821]: I0309 19:09:12.183589 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55a3b77b-71a0-4f39-8356-f7caa43d72a4-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:12 crc kubenswrapper[4821]: I0309 19:09:12.183610 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpfdr\" (UniqueName: \"kubernetes.io/projected/4716ce55-8666-4ad9-866b-e2f3f88cd5e7-kube-api-access-mpfdr\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:12 crc kubenswrapper[4821]: I0309 19:09:12.411593 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-d6s7z" event={"ID":"4716ce55-8666-4ad9-866b-e2f3f88cd5e7","Type":"ContainerDied","Data":"a03754fbdfd14fe157ce3c4574d7053e21c33d8d8993642e92b21821bc55ecd2"} Mar 09 19:09:12 crc kubenswrapper[4821]: I0309 19:09:12.411657 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a03754fbdfd14fe157ce3c4574d7053e21c33d8d8993642e92b21821bc55ecd2" Mar 09 19:09:12 crc kubenswrapper[4821]: I0309 19:09:12.411805 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-d6s7z" Mar 09 19:09:12 crc kubenswrapper[4821]: I0309 19:09:12.413936 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-0f81-account-create-update-rtxq5" event={"ID":"55a3b77b-71a0-4f39-8356-f7caa43d72a4","Type":"ContainerDied","Data":"b77f789d3b6464657cb7e9adcf5a9469b1141df3d9d7a123bbcf5af755efe7f6"} Mar 09 19:09:12 crc kubenswrapper[4821]: I0309 19:09:12.413981 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b77f789d3b6464657cb7e9adcf5a9469b1141df3d9d7a123bbcf5af755efe7f6" Mar 09 19:09:12 crc kubenswrapper[4821]: I0309 19:09:12.414011 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-0f81-account-create-update-rtxq5" Mar 09 19:09:14 crc kubenswrapper[4821]: I0309 19:09:14.134184 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-s4tfg"] Mar 09 19:09:14 crc kubenswrapper[4821]: E0309 19:09:14.134769 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a3b77b-71a0-4f39-8356-f7caa43d72a4" containerName="mariadb-account-create-update" Mar 09 19:09:14 crc kubenswrapper[4821]: I0309 19:09:14.134782 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a3b77b-71a0-4f39-8356-f7caa43d72a4" containerName="mariadb-account-create-update" Mar 09 19:09:14 crc kubenswrapper[4821]: E0309 19:09:14.134799 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4716ce55-8666-4ad9-866b-e2f3f88cd5e7" containerName="mariadb-database-create" Mar 09 19:09:14 crc kubenswrapper[4821]: I0309 19:09:14.134805 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="4716ce55-8666-4ad9-866b-e2f3f88cd5e7" containerName="mariadb-database-create" Mar 09 19:09:14 crc kubenswrapper[4821]: I0309 19:09:14.134966 4821 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="55a3b77b-71a0-4f39-8356-f7caa43d72a4" containerName="mariadb-account-create-update" Mar 09 19:09:14 crc kubenswrapper[4821]: I0309 19:09:14.134981 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="4716ce55-8666-4ad9-866b-e2f3f88cd5e7" containerName="mariadb-database-create" Mar 09 19:09:14 crc kubenswrapper[4821]: I0309 19:09:14.135591 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-s4tfg" Mar 09 19:09:14 crc kubenswrapper[4821]: I0309 19:09:14.138957 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Mar 09 19:09:14 crc kubenswrapper[4821]: I0309 19:09:14.139175 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-7jqp9" Mar 09 19:09:14 crc kubenswrapper[4821]: I0309 19:09:14.151446 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-s4tfg"] Mar 09 19:09:14 crc kubenswrapper[4821]: I0309 19:09:14.235886 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c0be9d48-e1cb-4f54-9efc-4adf16d4c997-db-sync-config-data\") pod \"watcher-kuttl-db-sync-s4tfg\" (UID: \"c0be9d48-e1cb-4f54-9efc-4adf16d4c997\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-s4tfg" Mar 09 19:09:14 crc kubenswrapper[4821]: I0309 19:09:14.236000 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5kdt\" (UniqueName: \"kubernetes.io/projected/c0be9d48-e1cb-4f54-9efc-4adf16d4c997-kube-api-access-l5kdt\") pod \"watcher-kuttl-db-sync-s4tfg\" (UID: \"c0be9d48-e1cb-4f54-9efc-4adf16d4c997\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-s4tfg" Mar 09 19:09:14 crc kubenswrapper[4821]: I0309 19:09:14.236028 4821 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0be9d48-e1cb-4f54-9efc-4adf16d4c997-config-data\") pod \"watcher-kuttl-db-sync-s4tfg\" (UID: \"c0be9d48-e1cb-4f54-9efc-4adf16d4c997\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-s4tfg" Mar 09 19:09:14 crc kubenswrapper[4821]: I0309 19:09:14.236060 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0be9d48-e1cb-4f54-9efc-4adf16d4c997-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-s4tfg\" (UID: \"c0be9d48-e1cb-4f54-9efc-4adf16d4c997\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-s4tfg" Mar 09 19:09:14 crc kubenswrapper[4821]: I0309 19:09:14.337213 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5kdt\" (UniqueName: \"kubernetes.io/projected/c0be9d48-e1cb-4f54-9efc-4adf16d4c997-kube-api-access-l5kdt\") pod \"watcher-kuttl-db-sync-s4tfg\" (UID: \"c0be9d48-e1cb-4f54-9efc-4adf16d4c997\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-s4tfg" Mar 09 19:09:14 crc kubenswrapper[4821]: I0309 19:09:14.337277 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0be9d48-e1cb-4f54-9efc-4adf16d4c997-config-data\") pod \"watcher-kuttl-db-sync-s4tfg\" (UID: \"c0be9d48-e1cb-4f54-9efc-4adf16d4c997\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-s4tfg" Mar 09 19:09:14 crc kubenswrapper[4821]: I0309 19:09:14.337345 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0be9d48-e1cb-4f54-9efc-4adf16d4c997-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-s4tfg\" (UID: \"c0be9d48-e1cb-4f54-9efc-4adf16d4c997\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-s4tfg" Mar 09 19:09:14 crc kubenswrapper[4821]: I0309 
19:09:14.337395 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c0be9d48-e1cb-4f54-9efc-4adf16d4c997-db-sync-config-data\") pod \"watcher-kuttl-db-sync-s4tfg\" (UID: \"c0be9d48-e1cb-4f54-9efc-4adf16d4c997\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-s4tfg" Mar 09 19:09:14 crc kubenswrapper[4821]: I0309 19:09:14.342778 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0be9d48-e1cb-4f54-9efc-4adf16d4c997-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-s4tfg\" (UID: \"c0be9d48-e1cb-4f54-9efc-4adf16d4c997\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-s4tfg" Mar 09 19:09:14 crc kubenswrapper[4821]: I0309 19:09:14.343467 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0be9d48-e1cb-4f54-9efc-4adf16d4c997-config-data\") pod \"watcher-kuttl-db-sync-s4tfg\" (UID: \"c0be9d48-e1cb-4f54-9efc-4adf16d4c997\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-s4tfg" Mar 09 19:09:14 crc kubenswrapper[4821]: I0309 19:09:14.360089 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c0be9d48-e1cb-4f54-9efc-4adf16d4c997-db-sync-config-data\") pod \"watcher-kuttl-db-sync-s4tfg\" (UID: \"c0be9d48-e1cb-4f54-9efc-4adf16d4c997\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-s4tfg" Mar 09 19:09:14 crc kubenswrapper[4821]: I0309 19:09:14.366182 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5kdt\" (UniqueName: \"kubernetes.io/projected/c0be9d48-e1cb-4f54-9efc-4adf16d4c997-kube-api-access-l5kdt\") pod \"watcher-kuttl-db-sync-s4tfg\" (UID: \"c0be9d48-e1cb-4f54-9efc-4adf16d4c997\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-s4tfg" Mar 09 19:09:14 crc kubenswrapper[4821]: I0309 
19:09:14.450142 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-s4tfg" Mar 09 19:09:14 crc kubenswrapper[4821]: I0309 19:09:14.901820 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-s4tfg"] Mar 09 19:09:14 crc kubenswrapper[4821]: W0309 19:09:14.903751 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0be9d48_e1cb_4f54_9efc_4adf16d4c997.slice/crio-fa278ae0e886bf6d8f28de401e16807668642621ecb16c20b9a63ff7a313b845 WatchSource:0}: Error finding container fa278ae0e886bf6d8f28de401e16807668642621ecb16c20b9a63ff7a313b845: Status 404 returned error can't find the container with id fa278ae0e886bf6d8f28de401e16807668642621ecb16c20b9a63ff7a313b845 Mar 09 19:09:15 crc kubenswrapper[4821]: I0309 19:09:15.440936 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-s4tfg" event={"ID":"c0be9d48-e1cb-4f54-9efc-4adf16d4c997","Type":"ContainerStarted","Data":"4691d310cbf12fc1c20da06742d904a91e5960476d6b8dccc42642d62e077073"} Mar 09 19:09:15 crc kubenswrapper[4821]: I0309 19:09:15.440983 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-s4tfg" event={"ID":"c0be9d48-e1cb-4f54-9efc-4adf16d4c997","Type":"ContainerStarted","Data":"fa278ae0e886bf6d8f28de401e16807668642621ecb16c20b9a63ff7a313b845"} Mar 09 19:09:15 crc kubenswrapper[4821]: I0309 19:09:15.457719 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-s4tfg" podStartSLOduration=1.457697908 podStartE2EDuration="1.457697908s" podCreationTimestamp="2026-03-09 19:09:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:09:15.453445653 +0000 UTC 
m=+2692.614821539" watchObservedRunningTime="2026-03-09 19:09:15.457697908 +0000 UTC m=+2692.619073794" Mar 09 19:09:17 crc kubenswrapper[4821]: I0309 19:09:17.456291 4821 generic.go:334] "Generic (PLEG): container finished" podID="c0be9d48-e1cb-4f54-9efc-4adf16d4c997" containerID="4691d310cbf12fc1c20da06742d904a91e5960476d6b8dccc42642d62e077073" exitCode=0 Mar 09 19:09:17 crc kubenswrapper[4821]: I0309 19:09:17.456400 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-s4tfg" event={"ID":"c0be9d48-e1cb-4f54-9efc-4adf16d4c997","Type":"ContainerDied","Data":"4691d310cbf12fc1c20da06742d904a91e5960476d6b8dccc42642d62e077073"} Mar 09 19:09:17 crc kubenswrapper[4821]: I0309 19:09:17.756676 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p82dx"] Mar 09 19:09:17 crc kubenswrapper[4821]: I0309 19:09:17.759288 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p82dx" Mar 09 19:09:17 crc kubenswrapper[4821]: I0309 19:09:17.768048 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p82dx"] Mar 09 19:09:17 crc kubenswrapper[4821]: I0309 19:09:17.894432 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cmn9\" (UniqueName: \"kubernetes.io/projected/9dac56dc-1f53-44b2-b4ab-1d95102d6a03-kube-api-access-9cmn9\") pod \"redhat-operators-p82dx\" (UID: \"9dac56dc-1f53-44b2-b4ab-1d95102d6a03\") " pod="openshift-marketplace/redhat-operators-p82dx" Mar 09 19:09:17 crc kubenswrapper[4821]: I0309 19:09:17.894502 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dac56dc-1f53-44b2-b4ab-1d95102d6a03-catalog-content\") pod \"redhat-operators-p82dx\" (UID: \"9dac56dc-1f53-44b2-b4ab-1d95102d6a03\") " 
pod="openshift-marketplace/redhat-operators-p82dx" Mar 09 19:09:17 crc kubenswrapper[4821]: I0309 19:09:17.894913 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dac56dc-1f53-44b2-b4ab-1d95102d6a03-utilities\") pod \"redhat-operators-p82dx\" (UID: \"9dac56dc-1f53-44b2-b4ab-1d95102d6a03\") " pod="openshift-marketplace/redhat-operators-p82dx" Mar 09 19:09:17 crc kubenswrapper[4821]: I0309 19:09:17.997008 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cmn9\" (UniqueName: \"kubernetes.io/projected/9dac56dc-1f53-44b2-b4ab-1d95102d6a03-kube-api-access-9cmn9\") pod \"redhat-operators-p82dx\" (UID: \"9dac56dc-1f53-44b2-b4ab-1d95102d6a03\") " pod="openshift-marketplace/redhat-operators-p82dx" Mar 09 19:09:17 crc kubenswrapper[4821]: I0309 19:09:17.997078 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dac56dc-1f53-44b2-b4ab-1d95102d6a03-catalog-content\") pod \"redhat-operators-p82dx\" (UID: \"9dac56dc-1f53-44b2-b4ab-1d95102d6a03\") " pod="openshift-marketplace/redhat-operators-p82dx" Mar 09 19:09:17 crc kubenswrapper[4821]: I0309 19:09:17.997214 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dac56dc-1f53-44b2-b4ab-1d95102d6a03-utilities\") pod \"redhat-operators-p82dx\" (UID: \"9dac56dc-1f53-44b2-b4ab-1d95102d6a03\") " pod="openshift-marketplace/redhat-operators-p82dx" Mar 09 19:09:17 crc kubenswrapper[4821]: I0309 19:09:17.997778 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dac56dc-1f53-44b2-b4ab-1d95102d6a03-catalog-content\") pod \"redhat-operators-p82dx\" (UID: \"9dac56dc-1f53-44b2-b4ab-1d95102d6a03\") " 
pod="openshift-marketplace/redhat-operators-p82dx" Mar 09 19:09:17 crc kubenswrapper[4821]: I0309 19:09:17.997826 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dac56dc-1f53-44b2-b4ab-1d95102d6a03-utilities\") pod \"redhat-operators-p82dx\" (UID: \"9dac56dc-1f53-44b2-b4ab-1d95102d6a03\") " pod="openshift-marketplace/redhat-operators-p82dx" Mar 09 19:09:18 crc kubenswrapper[4821]: I0309 19:09:18.015086 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cmn9\" (UniqueName: \"kubernetes.io/projected/9dac56dc-1f53-44b2-b4ab-1d95102d6a03-kube-api-access-9cmn9\") pod \"redhat-operators-p82dx\" (UID: \"9dac56dc-1f53-44b2-b4ab-1d95102d6a03\") " pod="openshift-marketplace/redhat-operators-p82dx" Mar 09 19:09:18 crc kubenswrapper[4821]: I0309 19:09:18.082031 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p82dx" Mar 09 19:09:18 crc kubenswrapper[4821]: I0309 19:09:18.572152 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p82dx"] Mar 09 19:09:18 crc kubenswrapper[4821]: I0309 19:09:18.770553 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-s4tfg" Mar 09 19:09:18 crc kubenswrapper[4821]: I0309 19:09:18.917857 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0be9d48-e1cb-4f54-9efc-4adf16d4c997-combined-ca-bundle\") pod \"c0be9d48-e1cb-4f54-9efc-4adf16d4c997\" (UID: \"c0be9d48-e1cb-4f54-9efc-4adf16d4c997\") " Mar 09 19:09:18 crc kubenswrapper[4821]: I0309 19:09:18.917993 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c0be9d48-e1cb-4f54-9efc-4adf16d4c997-db-sync-config-data\") pod \"c0be9d48-e1cb-4f54-9efc-4adf16d4c997\" (UID: \"c0be9d48-e1cb-4f54-9efc-4adf16d4c997\") " Mar 09 19:09:18 crc kubenswrapper[4821]: I0309 19:09:18.918091 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5kdt\" (UniqueName: \"kubernetes.io/projected/c0be9d48-e1cb-4f54-9efc-4adf16d4c997-kube-api-access-l5kdt\") pod \"c0be9d48-e1cb-4f54-9efc-4adf16d4c997\" (UID: \"c0be9d48-e1cb-4f54-9efc-4adf16d4c997\") " Mar 09 19:09:18 crc kubenswrapper[4821]: I0309 19:09:18.918113 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0be9d48-e1cb-4f54-9efc-4adf16d4c997-config-data\") pod \"c0be9d48-e1cb-4f54-9efc-4adf16d4c997\" (UID: \"c0be9d48-e1cb-4f54-9efc-4adf16d4c997\") " Mar 09 19:09:18 crc kubenswrapper[4821]: I0309 19:09:18.924539 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0be9d48-e1cb-4f54-9efc-4adf16d4c997-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c0be9d48-e1cb-4f54-9efc-4adf16d4c997" (UID: "c0be9d48-e1cb-4f54-9efc-4adf16d4c997"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:09:18 crc kubenswrapper[4821]: I0309 19:09:18.924603 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0be9d48-e1cb-4f54-9efc-4adf16d4c997-kube-api-access-l5kdt" (OuterVolumeSpecName: "kube-api-access-l5kdt") pod "c0be9d48-e1cb-4f54-9efc-4adf16d4c997" (UID: "c0be9d48-e1cb-4f54-9efc-4adf16d4c997"). InnerVolumeSpecName "kube-api-access-l5kdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:09:18 crc kubenswrapper[4821]: I0309 19:09:18.949264 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0be9d48-e1cb-4f54-9efc-4adf16d4c997-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0be9d48-e1cb-4f54-9efc-4adf16d4c997" (UID: "c0be9d48-e1cb-4f54-9efc-4adf16d4c997"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:09:18 crc kubenswrapper[4821]: I0309 19:09:18.974391 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0be9d48-e1cb-4f54-9efc-4adf16d4c997-config-data" (OuterVolumeSpecName: "config-data") pod "c0be9d48-e1cb-4f54-9efc-4adf16d4c997" (UID: "c0be9d48-e1cb-4f54-9efc-4adf16d4c997"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.019685 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0be9d48-e1cb-4f54-9efc-4adf16d4c997-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.019730 4821 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c0be9d48-e1cb-4f54-9efc-4adf16d4c997-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.019744 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5kdt\" (UniqueName: \"kubernetes.io/projected/c0be9d48-e1cb-4f54-9efc-4adf16d4c997-kube-api-access-l5kdt\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.019760 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0be9d48-e1cb-4f54-9efc-4adf16d4c997-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.474870 4821 generic.go:334] "Generic (PLEG): container finished" podID="9dac56dc-1f53-44b2-b4ab-1d95102d6a03" containerID="0490a1d8fd776e784a66bc1e32fe75954f939caa0bf775d290fc2220b5f8fce9" exitCode=0 Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.475015 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p82dx" event={"ID":"9dac56dc-1f53-44b2-b4ab-1d95102d6a03","Type":"ContainerDied","Data":"0490a1d8fd776e784a66bc1e32fe75954f939caa0bf775d290fc2220b5f8fce9"} Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.475211 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p82dx" 
event={"ID":"9dac56dc-1f53-44b2-b4ab-1d95102d6a03","Type":"ContainerStarted","Data":"32ee4bbb71c99b60732ddc0f46eff9d7773b56f50d735092f096e38054e6bbe0"} Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.477431 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-s4tfg" event={"ID":"c0be9d48-e1cb-4f54-9efc-4adf16d4c997","Type":"ContainerDied","Data":"fa278ae0e886bf6d8f28de401e16807668642621ecb16c20b9a63ff7a313b845"} Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.477450 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa278ae0e886bf6d8f28de401e16807668642621ecb16c20b9a63ff7a313b845" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.477513 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-s4tfg" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.717966 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:09:19 crc kubenswrapper[4821]: E0309 19:09:19.718386 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0be9d48-e1cb-4f54-9efc-4adf16d4c997" containerName="watcher-kuttl-db-sync" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.718405 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0be9d48-e1cb-4f54-9efc-4adf16d4c997" containerName="watcher-kuttl-db-sync" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.718576 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0be9d48-e1cb-4f54-9efc-4adf16d4c997" containerName="watcher-kuttl-db-sync" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.719469 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.721539 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-7jqp9" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.729997 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.743508 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.745004 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.750047 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.751552 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.754188 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.762981 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.772599 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.781600 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.830668 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.831936 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.839185 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"5bfa8bd2-ec0d-4052-aca5-ddf91815e698\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.839230 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-logs\") pod \"watcher-kuttl-api-1\" (UID: \"5bfa8bd2-ec0d-4052-aca5-ddf91815e698\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.839262 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.839281 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.839307 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwvnl\" (UniqueName: \"kubernetes.io/projected/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-kube-api-access-xwvnl\") pod \"watcher-kuttl-api-1\" (UID: 
\"5bfa8bd2-ec0d-4052-aca5-ddf91815e698\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.839365 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.839387 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"5bfa8bd2-ec0d-4052-aca5-ddf91815e698\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.839403 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnhwl\" (UniqueName: \"kubernetes.io/projected/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-kube-api-access-pnhwl\") pod \"watcher-kuttl-api-0\" (UID: \"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.839422 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.839442 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-combined-ca-bundle\") 
pod \"watcher-kuttl-api-1\" (UID: \"5bfa8bd2-ec0d-4052-aca5-ddf91815e698\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.839465 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.839481 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"5bfa8bd2-ec0d-4052-aca5-ddf91815e698\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.841987 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.852510 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.940969 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"5bfa8bd2-ec0d-4052-aca5-ddf91815e698\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.941021 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: 
\"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.941041 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"5bfa8bd2-ec0d-4052-aca5-ddf91815e698\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.941062 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/70c7e606-d70f-4bdd-8bc5-6456f4c0a253-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"70c7e606-d70f-4bdd-8bc5-6456f4c0a253\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.941106 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7f465a4a-1555-4736-a32b-08bd0456ac89-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7f465a4a-1555-4736-a32b-08bd0456ac89\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.941125 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f465a4a-1555-4736-a32b-08bd0456ac89-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7f465a4a-1555-4736-a32b-08bd0456ac89\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.941150 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/7f465a4a-1555-4736-a32b-08bd0456ac89-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7f465a4a-1555-4736-a32b-08bd0456ac89\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.941165 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70c7e606-d70f-4bdd-8bc5-6456f4c0a253-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"70c7e606-d70f-4bdd-8bc5-6456f4c0a253\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.941182 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70c7e606-d70f-4bdd-8bc5-6456f4c0a253-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"70c7e606-d70f-4bdd-8bc5-6456f4c0a253\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.941199 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"5bfa8bd2-ec0d-4052-aca5-ddf91815e698\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.941215 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f465a4a-1555-4736-a32b-08bd0456ac89-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7f465a4a-1555-4736-a32b-08bd0456ac89\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.941232 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-logs\") pod \"watcher-kuttl-api-1\" (UID: \"5bfa8bd2-ec0d-4052-aca5-ddf91815e698\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.941252 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70c7e606-d70f-4bdd-8bc5-6456f4c0a253-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"70c7e606-d70f-4bdd-8bc5-6456f4c0a253\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.941275 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.941289 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7f465a4a-1555-4736-a32b-08bd0456ac89-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7f465a4a-1555-4736-a32b-08bd0456ac89\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.941312 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.941349 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwvnl\" (UniqueName: 
\"kubernetes.io/projected/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-kube-api-access-xwvnl\") pod \"watcher-kuttl-api-1\" (UID: \"5bfa8bd2-ec0d-4052-aca5-ddf91815e698\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.941366 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm8q6\" (UniqueName: \"kubernetes.io/projected/7f465a4a-1555-4736-a32b-08bd0456ac89-kube-api-access-lm8q6\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7f465a4a-1555-4736-a32b-08bd0456ac89\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.941396 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.941410 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"5bfa8bd2-ec0d-4052-aca5-ddf91815e698\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.941430 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnhwl\" (UniqueName: \"kubernetes.io/projected/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-kube-api-access-pnhwl\") pod \"watcher-kuttl-api-0\" (UID: \"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.941444 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-zb5ms\" (UniqueName: \"kubernetes.io/projected/70c7e606-d70f-4bdd-8bc5-6456f4c0a253-kube-api-access-zb5ms\") pod \"watcher-kuttl-applier-0\" (UID: \"70c7e606-d70f-4bdd-8bc5-6456f4c0a253\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.941464 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.942226 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-logs\") pod \"watcher-kuttl-api-1\" (UID: \"5bfa8bd2-ec0d-4052-aca5-ddf91815e698\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.942833 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.948063 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"5bfa8bd2-ec0d-4052-aca5-ddf91815e698\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.948201 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-custom-prometheus-ca\") pod 
\"watcher-kuttl-api-0\" (UID: \"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.948542 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"5bfa8bd2-ec0d-4052-aca5-ddf91815e698\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.948844 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"5bfa8bd2-ec0d-4052-aca5-ddf91815e698\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.949057 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.950353 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.951809 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.958958 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"5bfa8bd2-ec0d-4052-aca5-ddf91815e698\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.977962 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnhwl\" (UniqueName: \"kubernetes.io/projected/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-kube-api-access-pnhwl\") pod \"watcher-kuttl-api-0\" (UID: \"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:09:19 crc kubenswrapper[4821]: I0309 19:09:19.981853 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwvnl\" (UniqueName: \"kubernetes.io/projected/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-kube-api-access-xwvnl\") pod \"watcher-kuttl-api-1\" (UID: \"5bfa8bd2-ec0d-4052-aca5-ddf91815e698\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:09:20 crc kubenswrapper[4821]: I0309 19:09:20.040868 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:09:20 crc kubenswrapper[4821]: I0309 19:09:20.042391 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/70c7e606-d70f-4bdd-8bc5-6456f4c0a253-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"70c7e606-d70f-4bdd-8bc5-6456f4c0a253\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:09:20 crc kubenswrapper[4821]: I0309 19:09:20.042462 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7f465a4a-1555-4736-a32b-08bd0456ac89-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7f465a4a-1555-4736-a32b-08bd0456ac89\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:09:20 crc kubenswrapper[4821]: I0309 19:09:20.042485 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f465a4a-1555-4736-a32b-08bd0456ac89-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7f465a4a-1555-4736-a32b-08bd0456ac89\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:09:20 crc kubenswrapper[4821]: I0309 19:09:20.042517 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f465a4a-1555-4736-a32b-08bd0456ac89-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7f465a4a-1555-4736-a32b-08bd0456ac89\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:09:20 crc kubenswrapper[4821]: I0309 19:09:20.042535 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70c7e606-d70f-4bdd-8bc5-6456f4c0a253-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"70c7e606-d70f-4bdd-8bc5-6456f4c0a253\") " 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:09:20 crc kubenswrapper[4821]: I0309 19:09:20.042551 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70c7e606-d70f-4bdd-8bc5-6456f4c0a253-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"70c7e606-d70f-4bdd-8bc5-6456f4c0a253\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:09:20 crc kubenswrapper[4821]: I0309 19:09:20.042569 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f465a4a-1555-4736-a32b-08bd0456ac89-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7f465a4a-1555-4736-a32b-08bd0456ac89\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:09:20 crc kubenswrapper[4821]: I0309 19:09:20.042597 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70c7e606-d70f-4bdd-8bc5-6456f4c0a253-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"70c7e606-d70f-4bdd-8bc5-6456f4c0a253\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:09:20 crc kubenswrapper[4821]: I0309 19:09:20.042621 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7f465a4a-1555-4736-a32b-08bd0456ac89-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7f465a4a-1555-4736-a32b-08bd0456ac89\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:09:20 crc kubenswrapper[4821]: I0309 19:09:20.042645 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm8q6\" (UniqueName: \"kubernetes.io/projected/7f465a4a-1555-4736-a32b-08bd0456ac89-kube-api-access-lm8q6\") pod \"watcher-kuttl-decision-engine-0\" (UID: 
\"7f465a4a-1555-4736-a32b-08bd0456ac89\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:09:20 crc kubenswrapper[4821]: I0309 19:09:20.042681 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb5ms\" (UniqueName: \"kubernetes.io/projected/70c7e606-d70f-4bdd-8bc5-6456f4c0a253-kube-api-access-zb5ms\") pod \"watcher-kuttl-applier-0\" (UID: \"70c7e606-d70f-4bdd-8bc5-6456f4c0a253\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:09:20 crc kubenswrapper[4821]: I0309 19:09:20.043601 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f465a4a-1555-4736-a32b-08bd0456ac89-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7f465a4a-1555-4736-a32b-08bd0456ac89\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:09:20 crc kubenswrapper[4821]: I0309 19:09:20.043783 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70c7e606-d70f-4bdd-8bc5-6456f4c0a253-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"70c7e606-d70f-4bdd-8bc5-6456f4c0a253\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:09:20 crc kubenswrapper[4821]: I0309 19:09:20.046222 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/70c7e606-d70f-4bdd-8bc5-6456f4c0a253-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"70c7e606-d70f-4bdd-8bc5-6456f4c0a253\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:09:20 crc kubenswrapper[4821]: I0309 19:09:20.046278 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f465a4a-1555-4736-a32b-08bd0456ac89-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7f465a4a-1555-4736-a32b-08bd0456ac89\") " 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:09:20 crc kubenswrapper[4821]: I0309 19:09:20.046516 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7f465a4a-1555-4736-a32b-08bd0456ac89-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7f465a4a-1555-4736-a32b-08bd0456ac89\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:09:20 crc kubenswrapper[4821]: I0309 19:09:20.047383 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f465a4a-1555-4736-a32b-08bd0456ac89-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7f465a4a-1555-4736-a32b-08bd0456ac89\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:09:20 crc kubenswrapper[4821]: I0309 19:09:20.048032 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70c7e606-d70f-4bdd-8bc5-6456f4c0a253-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"70c7e606-d70f-4bdd-8bc5-6456f4c0a253\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:09:20 crc kubenswrapper[4821]: I0309 19:09:20.049009 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7f465a4a-1555-4736-a32b-08bd0456ac89-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7f465a4a-1555-4736-a32b-08bd0456ac89\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:09:20 crc kubenswrapper[4821]: I0309 19:09:20.050495 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70c7e606-d70f-4bdd-8bc5-6456f4c0a253-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"70c7e606-d70f-4bdd-8bc5-6456f4c0a253\") " 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:09:20 crc kubenswrapper[4821]: I0309 19:09:20.065900 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm8q6\" (UniqueName: \"kubernetes.io/projected/7f465a4a-1555-4736-a32b-08bd0456ac89-kube-api-access-lm8q6\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7f465a4a-1555-4736-a32b-08bd0456ac89\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:09:20 crc kubenswrapper[4821]: I0309 19:09:20.072482 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb5ms\" (UniqueName: \"kubernetes.io/projected/70c7e606-d70f-4bdd-8bc5-6456f4c0a253-kube-api-access-zb5ms\") pod \"watcher-kuttl-applier-0\" (UID: \"70c7e606-d70f-4bdd-8bc5-6456f4c0a253\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:09:20 crc kubenswrapper[4821]: I0309 19:09:20.077905 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:09:20 crc kubenswrapper[4821]: I0309 19:09:20.086339 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:09:20 crc kubenswrapper[4821]: I0309 19:09:20.164818 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:09:20 crc kubenswrapper[4821]: I0309 19:09:20.581879 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:09:20 crc kubenswrapper[4821]: I0309 19:09:20.665258 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:09:20 crc kubenswrapper[4821]: W0309 19:09:20.676853 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f465a4a_1555_4736_a32b_08bd0456ac89.slice/crio-4a2427c0c60a701c29e5c0927e0bb2013e05a8a12f185748898d2c75d4c6b025 WatchSource:0}: Error finding container 4a2427c0c60a701c29e5c0927e0bb2013e05a8a12f185748898d2c75d4c6b025: Status 404 returned error can't find the container with id 4a2427c0c60a701c29e5c0927e0bb2013e05a8a12f185748898d2c75d4c6b025 Mar 09 19:09:20 crc kubenswrapper[4821]: I0309 19:09:20.721968 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Mar 09 19:09:20 crc kubenswrapper[4821]: W0309 19:09:20.747418 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bfa8bd2_ec0d_4052_aca5_ddf91815e698.slice/crio-5e8fe3022c1ad3e635d9fbc0219752bedd8f4991526f62e4efc07959fdea3bdb WatchSource:0}: Error finding container 5e8fe3022c1ad3e635d9fbc0219752bedd8f4991526f62e4efc07959fdea3bdb: Status 404 returned error can't find the container with id 5e8fe3022c1ad3e635d9fbc0219752bedd8f4991526f62e4efc07959fdea3bdb Mar 09 19:09:20 crc kubenswrapper[4821]: I0309 19:09:20.752038 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:09:21 crc kubenswrapper[4821]: I0309 19:09:21.494352 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"5bfa8bd2-ec0d-4052-aca5-ddf91815e698","Type":"ContainerStarted","Data":"359584e4ddd88b116702047c4d2e4548c8a852643a1681d3e7fdf9a68d770619"} Mar 09 19:09:21 crc kubenswrapper[4821]: I0309 19:09:21.494701 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"5bfa8bd2-ec0d-4052-aca5-ddf91815e698","Type":"ContainerStarted","Data":"c40052dd1230a115af913511fefb28a6cbafb3d71c677adf5156e45eb4c7f18d"} Mar 09 19:09:21 crc kubenswrapper[4821]: I0309 19:09:21.494721 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"5bfa8bd2-ec0d-4052-aca5-ddf91815e698","Type":"ContainerStarted","Data":"5e8fe3022c1ad3e635d9fbc0219752bedd8f4991526f62e4efc07959fdea3bdb"} Mar 09 19:09:21 crc kubenswrapper[4821]: I0309 19:09:21.495294 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:09:21 crc kubenswrapper[4821]: I0309 19:09:21.497059 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"7f465a4a-1555-4736-a32b-08bd0456ac89","Type":"ContainerStarted","Data":"ff74a419b1b83d97f1109db3f4c2bfc1b92c861285733e18477bf594ebfc899d"} Mar 09 19:09:21 crc kubenswrapper[4821]: I0309 19:09:21.497097 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"7f465a4a-1555-4736-a32b-08bd0456ac89","Type":"ContainerStarted","Data":"4a2427c0c60a701c29e5c0927e0bb2013e05a8a12f185748898d2c75d4c6b025"} Mar 09 19:09:21 crc kubenswrapper[4821]: I0309 19:09:21.500490 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2","Type":"ContainerStarted","Data":"32b83f6926622f1f4e2c55d0218e8a5b9bd17831a21d978b820ceb52e5810616"} Mar 
09 19:09:21 crc kubenswrapper[4821]: I0309 19:09:21.500539 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2","Type":"ContainerStarted","Data":"75bb2679bb4f443ec4c998d1d752ab2f681b5b291755f852c4d2d25b3705fa17"} Mar 09 19:09:21 crc kubenswrapper[4821]: I0309 19:09:21.500549 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2","Type":"ContainerStarted","Data":"006622a4f8d4cb222ef48e982a0391477c462fb9da36de0f643cf7bf929a5c7c"} Mar 09 19:09:21 crc kubenswrapper[4821]: I0309 19:09:21.500776 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:09:21 crc kubenswrapper[4821]: I0309 19:09:21.502661 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"70c7e606-d70f-4bdd-8bc5-6456f4c0a253","Type":"ContainerStarted","Data":"45b1af2fa71ceaf2c7f7f6fa8ed653211464cb41116737228426a154ceb21ab5"} Mar 09 19:09:21 crc kubenswrapper[4821]: I0309 19:09:21.502709 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"70c7e606-d70f-4bdd-8bc5-6456f4c0a253","Type":"ContainerStarted","Data":"a90750c84ff520e38078446fd13b3d1f557041f7636ca8eb82eaa2c4fdc206d0"} Mar 09 19:09:21 crc kubenswrapper[4821]: I0309 19:09:21.505782 4821 generic.go:334] "Generic (PLEG): container finished" podID="9dac56dc-1f53-44b2-b4ab-1d95102d6a03" containerID="5896c3be44ef82b837a5d11b5161366395a369193a95602899320978e72491a7" exitCode=0 Mar 09 19:09:21 crc kubenswrapper[4821]: I0309 19:09:21.505840 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p82dx" 
event={"ID":"9dac56dc-1f53-44b2-b4ab-1d95102d6a03","Type":"ContainerDied","Data":"5896c3be44ef82b837a5d11b5161366395a369193a95602899320978e72491a7"} Mar 09 19:09:21 crc kubenswrapper[4821]: I0309 19:09:21.519263 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-1" podStartSLOduration=2.51924352 podStartE2EDuration="2.51924352s" podCreationTimestamp="2026-03-09 19:09:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:09:21.515264982 +0000 UTC m=+2698.676640838" watchObservedRunningTime="2026-03-09 19:09:21.51924352 +0000 UTC m=+2698.680619376" Mar 09 19:09:21 crc kubenswrapper[4821]: I0309 19:09:21.539249 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.539225742 podStartE2EDuration="2.539225742s" podCreationTimestamp="2026-03-09 19:09:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:09:21.535278755 +0000 UTC m=+2698.696654611" watchObservedRunningTime="2026-03-09 19:09:21.539225742 +0000 UTC m=+2698.700601608" Mar 09 19:09:21 crc kubenswrapper[4821]: I0309 19:09:21.575368 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.5753503909999997 podStartE2EDuration="2.575350391s" podCreationTimestamp="2026-03-09 19:09:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:09:21.56940444 +0000 UTC m=+2698.730780296" watchObservedRunningTime="2026-03-09 19:09:21.575350391 +0000 UTC m=+2698.736726247" Mar 09 19:09:22 crc kubenswrapper[4821]: I0309 19:09:22.529894 4821 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-p82dx" event={"ID":"9dac56dc-1f53-44b2-b4ab-1d95102d6a03","Type":"ContainerStarted","Data":"6789daf80dc3451bbbb1f060de39e1d31044b2a2906e67a601afa494f2595cba"} Mar 09 19:09:22 crc kubenswrapper[4821]: I0309 19:09:22.547928 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=3.547907611 podStartE2EDuration="3.547907611s" podCreationTimestamp="2026-03-09 19:09:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:09:21.61368434 +0000 UTC m=+2698.775060196" watchObservedRunningTime="2026-03-09 19:09:22.547907611 +0000 UTC m=+2699.709283467" Mar 09 19:09:22 crc kubenswrapper[4821]: I0309 19:09:22.551077 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p82dx" podStartSLOduration=3.065495618 podStartE2EDuration="5.551066286s" podCreationTimestamp="2026-03-09 19:09:17 +0000 UTC" firstStartedPulling="2026-03-09 19:09:19.477427129 +0000 UTC m=+2696.638802975" lastFinishedPulling="2026-03-09 19:09:21.962997787 +0000 UTC m=+2699.124373643" observedRunningTime="2026-03-09 19:09:22.545790353 +0000 UTC m=+2699.707166209" watchObservedRunningTime="2026-03-09 19:09:22.551066286 +0000 UTC m=+2699.712442142" Mar 09 19:09:23 crc kubenswrapper[4821]: I0309 19:09:23.538182 4821 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 09 19:09:23 crc kubenswrapper[4821]: I0309 19:09:23.620892 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:09:23 crc kubenswrapper[4821]: I0309 19:09:23.835258 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:09:24 crc kubenswrapper[4821]: I0309 19:09:24.559729 4821 
scope.go:117] "RemoveContainer" containerID="2e97656a0c5c8ccdf249fb34ee3c3f4cd8501de8fced67c6294f9be90f6444d8" Mar 09 19:09:24 crc kubenswrapper[4821]: E0309 19:09:24.560187 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 19:09:25 crc kubenswrapper[4821]: I0309 19:09:25.042635 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:09:25 crc kubenswrapper[4821]: I0309 19:09:25.079159 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:09:25 crc kubenswrapper[4821]: I0309 19:09:25.165031 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:09:28 crc kubenswrapper[4821]: I0309 19:09:28.082950 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p82dx" Mar 09 19:09:28 crc kubenswrapper[4821]: I0309 19:09:28.084656 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p82dx" Mar 09 19:09:29 crc kubenswrapper[4821]: I0309 19:09:29.159743 4821 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p82dx" podUID="9dac56dc-1f53-44b2-b4ab-1d95102d6a03" containerName="registry-server" probeResult="failure" output=< Mar 09 19:09:29 crc kubenswrapper[4821]: timeout: failed to connect service ":50051" within 1s Mar 09 19:09:29 crc kubenswrapper[4821]: > Mar 09 19:09:30 crc kubenswrapper[4821]: I0309 19:09:30.041971 
4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:09:30 crc kubenswrapper[4821]: I0309 19:09:30.047216 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:09:30 crc kubenswrapper[4821]: I0309 19:09:30.079015 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:09:30 crc kubenswrapper[4821]: I0309 19:09:30.087447 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:09:30 crc kubenswrapper[4821]: I0309 19:09:30.090836 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:09:30 crc kubenswrapper[4821]: I0309 19:09:30.121407 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:09:30 crc kubenswrapper[4821]: I0309 19:09:30.165807 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:09:30 crc kubenswrapper[4821]: I0309 19:09:30.198707 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:09:30 crc kubenswrapper[4821]: I0309 19:09:30.615243 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:09:30 crc kubenswrapper[4821]: I0309 19:09:30.625243 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:09:30 crc kubenswrapper[4821]: I0309 19:09:30.625387 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:09:30 crc kubenswrapper[4821]: I0309 19:09:30.668746 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:09:30 crc kubenswrapper[4821]: I0309 19:09:30.668907 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:09:33 crc kubenswrapper[4821]: I0309 19:09:33.981105 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:09:33 crc kubenswrapper[4821]: I0309 19:09:33.981798 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c6f5de90-1d37-46e6-9092-89b35c6dce9c" containerName="ceilometer-central-agent" containerID="cri-o://974366854c8821bd17d233956e156092a187419448d3a66b88f2c7191a3baac3" gracePeriod=30 Mar 09 19:09:33 crc kubenswrapper[4821]: I0309 19:09:33.981860 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c6f5de90-1d37-46e6-9092-89b35c6dce9c" containerName="sg-core" containerID="cri-o://e1f905b108ca545f4199903d6c7592c89e0454d2f9d302ddfd0a777cdb3ddfea" gracePeriod=30 Mar 09 19:09:33 crc kubenswrapper[4821]: I0309 19:09:33.981947 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c6f5de90-1d37-46e6-9092-89b35c6dce9c" containerName="ceilometer-notification-agent" containerID="cri-o://3ec1f1f452fc850609ca2598615aead194489a6da0bed980239c36031f0aef18" gracePeriod=30 Mar 09 19:09:33 crc kubenswrapper[4821]: I0309 19:09:33.981969 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c6f5de90-1d37-46e6-9092-89b35c6dce9c" containerName="proxy-httpd" 
containerID="cri-o://f97e4bf6575fc4f665b81de8ce8623d441931b3f3621ff33c7f62e93cf5ab791" gracePeriod=30 Mar 09 19:09:33 crc kubenswrapper[4821]: I0309 19:09:33.999001 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="c6f5de90-1d37-46e6-9092-89b35c6dce9c" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.1.8:3000/\": read tcp 10.217.0.2:59312->10.217.1.8:3000: read: connection reset by peer" Mar 09 19:09:34 crc kubenswrapper[4821]: I0309 19:09:34.669109 4821 generic.go:334] "Generic (PLEG): container finished" podID="c6f5de90-1d37-46e6-9092-89b35c6dce9c" containerID="f97e4bf6575fc4f665b81de8ce8623d441931b3f3621ff33c7f62e93cf5ab791" exitCode=0 Mar 09 19:09:34 crc kubenswrapper[4821]: I0309 19:09:34.669143 4821 generic.go:334] "Generic (PLEG): container finished" podID="c6f5de90-1d37-46e6-9092-89b35c6dce9c" containerID="e1f905b108ca545f4199903d6c7592c89e0454d2f9d302ddfd0a777cdb3ddfea" exitCode=2 Mar 09 19:09:34 crc kubenswrapper[4821]: I0309 19:09:34.669151 4821 generic.go:334] "Generic (PLEG): container finished" podID="c6f5de90-1d37-46e6-9092-89b35c6dce9c" containerID="974366854c8821bd17d233956e156092a187419448d3a66b88f2c7191a3baac3" exitCode=0 Mar 09 19:09:34 crc kubenswrapper[4821]: I0309 19:09:34.669170 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c6f5de90-1d37-46e6-9092-89b35c6dce9c","Type":"ContainerDied","Data":"f97e4bf6575fc4f665b81de8ce8623d441931b3f3621ff33c7f62e93cf5ab791"} Mar 09 19:09:34 crc kubenswrapper[4821]: I0309 19:09:34.669192 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c6f5de90-1d37-46e6-9092-89b35c6dce9c","Type":"ContainerDied","Data":"e1f905b108ca545f4199903d6c7592c89e0454d2f9d302ddfd0a777cdb3ddfea"} Mar 09 19:09:34 crc kubenswrapper[4821]: I0309 19:09:34.669201 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c6f5de90-1d37-46e6-9092-89b35c6dce9c","Type":"ContainerDied","Data":"974366854c8821bd17d233956e156092a187419448d3a66b88f2c7191a3baac3"} Mar 09 19:09:34 crc kubenswrapper[4821]: I0309 19:09:34.671890 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="c6f5de90-1d37-46e6-9092-89b35c6dce9c" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.1.8:3000/\": dial tcp 10.217.1.8:3000: connect: connection refused" Mar 09 19:09:35 crc kubenswrapper[4821]: I0309 19:09:35.681776 4821 generic.go:334] "Generic (PLEG): container finished" podID="c6f5de90-1d37-46e6-9092-89b35c6dce9c" containerID="3ec1f1f452fc850609ca2598615aead194489a6da0bed980239c36031f0aef18" exitCode=0 Mar 09 19:09:35 crc kubenswrapper[4821]: I0309 19:09:35.682138 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c6f5de90-1d37-46e6-9092-89b35c6dce9c","Type":"ContainerDied","Data":"3ec1f1f452fc850609ca2598615aead194489a6da0bed980239c36031f0aef18"} Mar 09 19:09:35 crc kubenswrapper[4821]: I0309 19:09:35.682207 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c6f5de90-1d37-46e6-9092-89b35c6dce9c","Type":"ContainerDied","Data":"3bb1a391165383681887e2abfae6e04ede563ea252ba975289a3467504277e1d"} Mar 09 19:09:35 crc kubenswrapper[4821]: I0309 19:09:35.682224 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bb1a391165383681887e2abfae6e04ede563ea252ba975289a3467504277e1d" Mar 09 19:09:35 crc kubenswrapper[4821]: I0309 19:09:35.729294 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:35 crc kubenswrapper[4821]: I0309 19:09:35.774225 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvwcx\" (UniqueName: \"kubernetes.io/projected/c6f5de90-1d37-46e6-9092-89b35c6dce9c-kube-api-access-lvwcx\") pod \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\" (UID: \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\") " Mar 09 19:09:35 crc kubenswrapper[4821]: I0309 19:09:35.774509 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6f5de90-1d37-46e6-9092-89b35c6dce9c-run-httpd\") pod \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\" (UID: \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\") " Mar 09 19:09:35 crc kubenswrapper[4821]: I0309 19:09:35.774554 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6f5de90-1d37-46e6-9092-89b35c6dce9c-sg-core-conf-yaml\") pod \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\" (UID: \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\") " Mar 09 19:09:35 crc kubenswrapper[4821]: I0309 19:09:35.774702 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6f5de90-1d37-46e6-9092-89b35c6dce9c-scripts\") pod \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\" (UID: \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\") " Mar 09 19:09:35 crc kubenswrapper[4821]: I0309 19:09:35.774823 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6f5de90-1d37-46e6-9092-89b35c6dce9c-config-data\") pod \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\" (UID: \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\") " Mar 09 19:09:35 crc kubenswrapper[4821]: I0309 19:09:35.774979 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c6f5de90-1d37-46e6-9092-89b35c6dce9c-log-httpd\") pod \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\" (UID: \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\") " Mar 09 19:09:35 crc kubenswrapper[4821]: I0309 19:09:35.775012 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6f5de90-1d37-46e6-9092-89b35c6dce9c-combined-ca-bundle\") pod \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\" (UID: \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\") " Mar 09 19:09:35 crc kubenswrapper[4821]: I0309 19:09:35.775034 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6f5de90-1d37-46e6-9092-89b35c6dce9c-ceilometer-tls-certs\") pod \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\" (UID: \"c6f5de90-1d37-46e6-9092-89b35c6dce9c\") " Mar 09 19:09:35 crc kubenswrapper[4821]: I0309 19:09:35.776802 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6f5de90-1d37-46e6-9092-89b35c6dce9c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c6f5de90-1d37-46e6-9092-89b35c6dce9c" (UID: "c6f5de90-1d37-46e6-9092-89b35c6dce9c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:09:35 crc kubenswrapper[4821]: I0309 19:09:35.777369 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6f5de90-1d37-46e6-9092-89b35c6dce9c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c6f5de90-1d37-46e6-9092-89b35c6dce9c" (UID: "c6f5de90-1d37-46e6-9092-89b35c6dce9c"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:09:35 crc kubenswrapper[4821]: I0309 19:09:35.781140 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6f5de90-1d37-46e6-9092-89b35c6dce9c-kube-api-access-lvwcx" (OuterVolumeSpecName: "kube-api-access-lvwcx") pod "c6f5de90-1d37-46e6-9092-89b35c6dce9c" (UID: "c6f5de90-1d37-46e6-9092-89b35c6dce9c"). InnerVolumeSpecName "kube-api-access-lvwcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:09:35 crc kubenswrapper[4821]: I0309 19:09:35.783156 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6f5de90-1d37-46e6-9092-89b35c6dce9c-scripts" (OuterVolumeSpecName: "scripts") pod "c6f5de90-1d37-46e6-9092-89b35c6dce9c" (UID: "c6f5de90-1d37-46e6-9092-89b35c6dce9c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:09:35 crc kubenswrapper[4821]: I0309 19:09:35.799505 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6f5de90-1d37-46e6-9092-89b35c6dce9c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c6f5de90-1d37-46e6-9092-89b35c6dce9c" (UID: "c6f5de90-1d37-46e6-9092-89b35c6dce9c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:09:35 crc kubenswrapper[4821]: I0309 19:09:35.816409 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6f5de90-1d37-46e6-9092-89b35c6dce9c-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c6f5de90-1d37-46e6-9092-89b35c6dce9c" (UID: "c6f5de90-1d37-46e6-9092-89b35c6dce9c"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:09:35 crc kubenswrapper[4821]: I0309 19:09:35.836125 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6f5de90-1d37-46e6-9092-89b35c6dce9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6f5de90-1d37-46e6-9092-89b35c6dce9c" (UID: "c6f5de90-1d37-46e6-9092-89b35c6dce9c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:09:35 crc kubenswrapper[4821]: I0309 19:09:35.855482 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6f5de90-1d37-46e6-9092-89b35c6dce9c-config-data" (OuterVolumeSpecName: "config-data") pod "c6f5de90-1d37-46e6-9092-89b35c6dce9c" (UID: "c6f5de90-1d37-46e6-9092-89b35c6dce9c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:09:35 crc kubenswrapper[4821]: I0309 19:09:35.877016 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6f5de90-1d37-46e6-9092-89b35c6dce9c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:35 crc kubenswrapper[4821]: I0309 19:09:35.877051 4821 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6f5de90-1d37-46e6-9092-89b35c6dce9c-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:35 crc kubenswrapper[4821]: I0309 19:09:35.877065 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvwcx\" (UniqueName: \"kubernetes.io/projected/c6f5de90-1d37-46e6-9092-89b35c6dce9c-kube-api-access-lvwcx\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:35 crc kubenswrapper[4821]: I0309 19:09:35.877081 4821 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6f5de90-1d37-46e6-9092-89b35c6dce9c-run-httpd\") on node 
\"crc\" DevicePath \"\"" Mar 09 19:09:35 crc kubenswrapper[4821]: I0309 19:09:35.877093 4821 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6f5de90-1d37-46e6-9092-89b35c6dce9c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:35 crc kubenswrapper[4821]: I0309 19:09:35.877104 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6f5de90-1d37-46e6-9092-89b35c6dce9c-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:35 crc kubenswrapper[4821]: I0309 19:09:35.877115 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6f5de90-1d37-46e6-9092-89b35c6dce9c-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:35 crc kubenswrapper[4821]: I0309 19:09:35.877126 4821 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6f5de90-1d37-46e6-9092-89b35c6dce9c-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.689744 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.732873 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.742530 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.781584 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:09:36 crc kubenswrapper[4821]: E0309 19:09:36.782534 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6f5de90-1d37-46e6-9092-89b35c6dce9c" containerName="sg-core" Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.782561 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6f5de90-1d37-46e6-9092-89b35c6dce9c" containerName="sg-core" Mar 09 19:09:36 crc kubenswrapper[4821]: E0309 19:09:36.782585 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6f5de90-1d37-46e6-9092-89b35c6dce9c" containerName="ceilometer-notification-agent" Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.782596 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6f5de90-1d37-46e6-9092-89b35c6dce9c" containerName="ceilometer-notification-agent" Mar 09 19:09:36 crc kubenswrapper[4821]: E0309 19:09:36.782620 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6f5de90-1d37-46e6-9092-89b35c6dce9c" containerName="ceilometer-central-agent" Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.782633 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6f5de90-1d37-46e6-9092-89b35c6dce9c" containerName="ceilometer-central-agent" Mar 09 19:09:36 crc kubenswrapper[4821]: E0309 19:09:36.782658 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6f5de90-1d37-46e6-9092-89b35c6dce9c" containerName="proxy-httpd" Mar 09 19:09:36 crc 
kubenswrapper[4821]: I0309 19:09:36.782668 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6f5de90-1d37-46e6-9092-89b35c6dce9c" containerName="proxy-httpd"
Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.782920 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6f5de90-1d37-46e6-9092-89b35c6dce9c" containerName="sg-core"
Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.782947 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6f5de90-1d37-46e6-9092-89b35c6dce9c" containerName="ceilometer-notification-agent"
Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.782973 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6f5de90-1d37-46e6-9092-89b35c6dce9c" containerName="ceilometer-central-agent"
Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.782987 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6f5de90-1d37-46e6-9092-89b35c6dce9c" containerName="proxy-httpd"
Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.785159 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.789673 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.793437 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc"
Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.793437 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.835419 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.896135 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.896187 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjpq4\" (UniqueName: \"kubernetes.io/projected/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-kube-api-access-xjpq4\") pod \"ceilometer-0\" (UID: \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.896284 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-scripts\") pod \"ceilometer-0\" (UID: \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.896333 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-run-httpd\") pod \"ceilometer-0\" (UID: \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.896355 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-config-data\") pod \"ceilometer-0\" (UID: \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.896387 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.896411 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.896454 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-log-httpd\") pod \"ceilometer-0\" (UID: \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.997956 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-scripts\") pod \"ceilometer-0\" (UID: \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.998013 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-run-httpd\") pod \"ceilometer-0\" (UID: \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.998039 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-config-data\") pod \"ceilometer-0\" (UID: \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.998077 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.998103 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.998148 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-log-httpd\") pod \"ceilometer-0\" (UID: \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.998187 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.998214 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjpq4\" (UniqueName: \"kubernetes.io/projected/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-kube-api-access-xjpq4\") pod \"ceilometer-0\" (UID: \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:36 crc kubenswrapper[4821]: I0309 19:09:36.999734 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-log-httpd\") pod \"ceilometer-0\" (UID: \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:37 crc kubenswrapper[4821]: I0309 19:09:37.000449 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-run-httpd\") pod \"ceilometer-0\" (UID: \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:37 crc kubenswrapper[4821]: I0309 19:09:37.003849 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:37 crc kubenswrapper[4821]: I0309 19:09:37.003890 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-scripts\") pod \"ceilometer-0\" (UID: \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:37 crc kubenswrapper[4821]: I0309 19:09:37.004752 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:37 crc kubenswrapper[4821]: I0309 19:09:37.005382 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:37 crc kubenswrapper[4821]: I0309 19:09:37.013634 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-config-data\") pod \"ceilometer-0\" (UID: \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:37 crc kubenswrapper[4821]: I0309 19:09:37.020157 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjpq4\" (UniqueName: \"kubernetes.io/projected/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-kube-api-access-xjpq4\") pod \"ceilometer-0\" (UID: \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\") " pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:37 crc kubenswrapper[4821]: I0309 19:09:37.106342 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:37 crc kubenswrapper[4821]: I0309 19:09:37.560940 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6f5de90-1d37-46e6-9092-89b35c6dce9c" path="/var/lib/kubelet/pods/c6f5de90-1d37-46e6-9092-89b35c6dce9c/volumes"
Mar 09 19:09:37 crc kubenswrapper[4821]: I0309 19:09:37.611095 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:09:37 crc kubenswrapper[4821]: W0309 19:09:37.614419 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c79e2d2_49cf_452a_8d6e_7cceaad4c479.slice/crio-f87510f9e0c6c0f3509bbacd227e1a4734c4ca26c7e4d9a86eed43a963269198 WatchSource:0}: Error finding container f87510f9e0c6c0f3509bbacd227e1a4734c4ca26c7e4d9a86eed43a963269198: Status 404 returned error can't find the container with id f87510f9e0c6c0f3509bbacd227e1a4734c4ca26c7e4d9a86eed43a963269198
Mar 09 19:09:37 crc kubenswrapper[4821]: I0309 19:09:37.702502 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4c79e2d2-49cf-452a-8d6e-7cceaad4c479","Type":"ContainerStarted","Data":"f87510f9e0c6c0f3509bbacd227e1a4734c4ca26c7e4d9a86eed43a963269198"}
Mar 09 19:09:38 crc kubenswrapper[4821]: I0309 19:09:38.135198 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p82dx"
Mar 09 19:09:38 crc kubenswrapper[4821]: I0309 19:09:38.185808 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p82dx"
Mar 09 19:09:38 crc kubenswrapper[4821]: I0309 19:09:38.409681 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"]
Mar 09 19:09:38 crc kubenswrapper[4821]: I0309 19:09:38.411566 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2"
Mar 09 19:09:38 crc kubenswrapper[4821]: I0309 19:09:38.440042 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"]
Mar 09 19:09:38 crc kubenswrapper[4821]: I0309 19:09:38.521569 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/bc0e5c25-e99b-42f0-95e2-737b34d083df-cert-memcached-mtls\") pod \"watcher-kuttl-api-2\" (UID: \"bc0e5c25-e99b-42f0-95e2-737b34d083df\") " pod="watcher-kuttl-default/watcher-kuttl-api-2"
Mar 09 19:09:38 crc kubenswrapper[4821]: I0309 19:09:38.521658 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc0e5c25-e99b-42f0-95e2-737b34d083df-combined-ca-bundle\") pod \"watcher-kuttl-api-2\" (UID: \"bc0e5c25-e99b-42f0-95e2-737b34d083df\") " pod="watcher-kuttl-default/watcher-kuttl-api-2"
Mar 09 19:09:38 crc kubenswrapper[4821]: I0309 19:09:38.521702 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc0e5c25-e99b-42f0-95e2-737b34d083df-config-data\") pod \"watcher-kuttl-api-2\" (UID: \"bc0e5c25-e99b-42f0-95e2-737b34d083df\") " pod="watcher-kuttl-default/watcher-kuttl-api-2"
Mar 09 19:09:38 crc kubenswrapper[4821]: I0309 19:09:38.521822 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjzqp\" (UniqueName: \"kubernetes.io/projected/bc0e5c25-e99b-42f0-95e2-737b34d083df-kube-api-access-tjzqp\") pod \"watcher-kuttl-api-2\" (UID: \"bc0e5c25-e99b-42f0-95e2-737b34d083df\") " pod="watcher-kuttl-default/watcher-kuttl-api-2"
Mar 09 19:09:38 crc kubenswrapper[4821]: I0309 19:09:38.521960 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc0e5c25-e99b-42f0-95e2-737b34d083df-logs\") pod \"watcher-kuttl-api-2\" (UID: \"bc0e5c25-e99b-42f0-95e2-737b34d083df\") " pod="watcher-kuttl-default/watcher-kuttl-api-2"
Mar 09 19:09:38 crc kubenswrapper[4821]: I0309 19:09:38.522003 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bc0e5c25-e99b-42f0-95e2-737b34d083df-custom-prometheus-ca\") pod \"watcher-kuttl-api-2\" (UID: \"bc0e5c25-e99b-42f0-95e2-737b34d083df\") " pod="watcher-kuttl-default/watcher-kuttl-api-2"
Mar 09 19:09:38 crc kubenswrapper[4821]: I0309 19:09:38.551936 4821 scope.go:117] "RemoveContainer" containerID="2e97656a0c5c8ccdf249fb34ee3c3f4cd8501de8fced67c6294f9be90f6444d8"
Mar 09 19:09:38 crc kubenswrapper[4821]: E0309 19:09:38.552260 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add"
Mar 09 19:09:38 crc kubenswrapper[4821]: I0309 19:09:38.623904 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc0e5c25-e99b-42f0-95e2-737b34d083df-logs\") pod \"watcher-kuttl-api-2\" (UID: \"bc0e5c25-e99b-42f0-95e2-737b34d083df\") " pod="watcher-kuttl-default/watcher-kuttl-api-2"
Mar 09 19:09:38 crc kubenswrapper[4821]: I0309 19:09:38.623960 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bc0e5c25-e99b-42f0-95e2-737b34d083df-custom-prometheus-ca\") pod \"watcher-kuttl-api-2\" (UID: \"bc0e5c25-e99b-42f0-95e2-737b34d083df\") " pod="watcher-kuttl-default/watcher-kuttl-api-2"
Mar 09 19:09:38 crc kubenswrapper[4821]: I0309 19:09:38.624027 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/bc0e5c25-e99b-42f0-95e2-737b34d083df-cert-memcached-mtls\") pod \"watcher-kuttl-api-2\" (UID: \"bc0e5c25-e99b-42f0-95e2-737b34d083df\") " pod="watcher-kuttl-default/watcher-kuttl-api-2"
Mar 09 19:09:38 crc kubenswrapper[4821]: I0309 19:09:38.624058 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc0e5c25-e99b-42f0-95e2-737b34d083df-combined-ca-bundle\") pod \"watcher-kuttl-api-2\" (UID: \"bc0e5c25-e99b-42f0-95e2-737b34d083df\") " pod="watcher-kuttl-default/watcher-kuttl-api-2"
Mar 09 19:09:38 crc kubenswrapper[4821]: I0309 19:09:38.624093 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc0e5c25-e99b-42f0-95e2-737b34d083df-config-data\") pod \"watcher-kuttl-api-2\" (UID: \"bc0e5c25-e99b-42f0-95e2-737b34d083df\") " pod="watcher-kuttl-default/watcher-kuttl-api-2"
Mar 09 19:09:38 crc kubenswrapper[4821]: I0309 19:09:38.624248 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjzqp\" (UniqueName: \"kubernetes.io/projected/bc0e5c25-e99b-42f0-95e2-737b34d083df-kube-api-access-tjzqp\") pod \"watcher-kuttl-api-2\" (UID: \"bc0e5c25-e99b-42f0-95e2-737b34d083df\") " pod="watcher-kuttl-default/watcher-kuttl-api-2"
Mar 09 19:09:38 crc kubenswrapper[4821]: I0309 19:09:38.624606 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc0e5c25-e99b-42f0-95e2-737b34d083df-logs\") pod \"watcher-kuttl-api-2\" (UID: \"bc0e5c25-e99b-42f0-95e2-737b34d083df\") " pod="watcher-kuttl-default/watcher-kuttl-api-2"
Mar 09 19:09:38 crc kubenswrapper[4821]: I0309 19:09:38.633505 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc0e5c25-e99b-42f0-95e2-737b34d083df-config-data\") pod \"watcher-kuttl-api-2\" (UID: \"bc0e5c25-e99b-42f0-95e2-737b34d083df\") " pod="watcher-kuttl-default/watcher-kuttl-api-2"
Mar 09 19:09:38 crc kubenswrapper[4821]: I0309 19:09:38.634847 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bc0e5c25-e99b-42f0-95e2-737b34d083df-custom-prometheus-ca\") pod \"watcher-kuttl-api-2\" (UID: \"bc0e5c25-e99b-42f0-95e2-737b34d083df\") " pod="watcher-kuttl-default/watcher-kuttl-api-2"
Mar 09 19:09:38 crc kubenswrapper[4821]: I0309 19:09:38.653663 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/bc0e5c25-e99b-42f0-95e2-737b34d083df-cert-memcached-mtls\") pod \"watcher-kuttl-api-2\" (UID: \"bc0e5c25-e99b-42f0-95e2-737b34d083df\") " pod="watcher-kuttl-default/watcher-kuttl-api-2"
Mar 09 19:09:38 crc kubenswrapper[4821]: I0309 19:09:38.654266 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc0e5c25-e99b-42f0-95e2-737b34d083df-combined-ca-bundle\") pod \"watcher-kuttl-api-2\" (UID: \"bc0e5c25-e99b-42f0-95e2-737b34d083df\") " pod="watcher-kuttl-default/watcher-kuttl-api-2"
Mar 09 19:09:38 crc kubenswrapper[4821]: I0309 19:09:38.664837 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjzqp\" (UniqueName: \"kubernetes.io/projected/bc0e5c25-e99b-42f0-95e2-737b34d083df-kube-api-access-tjzqp\") pod \"watcher-kuttl-api-2\" (UID: \"bc0e5c25-e99b-42f0-95e2-737b34d083df\") " pod="watcher-kuttl-default/watcher-kuttl-api-2"
Mar 09 19:09:38 crc kubenswrapper[4821]: I0309 19:09:38.711335 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4c79e2d2-49cf-452a-8d6e-7cceaad4c479","Type":"ContainerStarted","Data":"fbd622702f2b37b12de23e2f340aec7c1afcc02e8cf0bdb1713cde5748cf62e2"}
Mar 09 19:09:38 crc kubenswrapper[4821]: I0309 19:09:38.765293 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2"
Mar 09 19:09:39 crc kubenswrapper[4821]: I0309 19:09:39.192590 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"]
Mar 09 19:09:39 crc kubenswrapper[4821]: W0309 19:09:39.201759 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc0e5c25_e99b_42f0_95e2_737b34d083df.slice/crio-324c4ec7be9963a3135c550824cbf9afa0bdf9c1b40275756ff4d1ea8053c923 WatchSource:0}: Error finding container 324c4ec7be9963a3135c550824cbf9afa0bdf9c1b40275756ff4d1ea8053c923: Status 404 returned error can't find the container with id 324c4ec7be9963a3135c550824cbf9afa0bdf9c1b40275756ff4d1ea8053c923
Mar 09 19:09:39 crc kubenswrapper[4821]: I0309 19:09:39.724089 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"bc0e5c25-e99b-42f0-95e2-737b34d083df","Type":"ContainerStarted","Data":"7e400a93410201bab8fc36d473c5d412d6f43bef8a2efd4a91360d61a4e11014"}
Mar 09 19:09:39 crc kubenswrapper[4821]: I0309 19:09:39.724438 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"bc0e5c25-e99b-42f0-95e2-737b34d083df","Type":"ContainerStarted","Data":"cd45f0c159c349c187564661a2e70bfa324fb8ff83b766425a2b776b22a1357e"}
Mar 09 19:09:39 crc kubenswrapper[4821]: I0309 19:09:39.724454 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"bc0e5c25-e99b-42f0-95e2-737b34d083df","Type":"ContainerStarted","Data":"324c4ec7be9963a3135c550824cbf9afa0bdf9c1b40275756ff4d1ea8053c923"}
Mar 09 19:09:39 crc kubenswrapper[4821]: I0309 19:09:39.725882 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-2"
Mar 09 19:09:39 crc kubenswrapper[4821]: I0309 19:09:39.728105 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4c79e2d2-49cf-452a-8d6e-7cceaad4c479","Type":"ContainerStarted","Data":"6928b22c08489b7464fd945951834e8c6aaf3395f8930bda78ed9ce9d9b91224"}
Mar 09 19:09:39 crc kubenswrapper[4821]: I0309 19:09:39.728132 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4c79e2d2-49cf-452a-8d6e-7cceaad4c479","Type":"ContainerStarted","Data":"757e5951a70fef0ff5c616a05348440c175f08e3f0d5a7bdb9d3a9cdbb641a9e"}
Mar 09 19:09:39 crc kubenswrapper[4821]: I0309 19:09:39.740075 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-2" podUID="bc0e5c25-e99b-42f0-95e2-737b34d083df" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.1.18:9322/\": dial tcp 10.217.1.18:9322: connect: connection refused"
Mar 09 19:09:39 crc kubenswrapper[4821]: I0309 19:09:39.745847 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-2" podStartSLOduration=1.745836052 podStartE2EDuration="1.745836052s" podCreationTimestamp="2026-03-09 19:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:09:39.742861862 +0000 UTC m=+2716.904237718" watchObservedRunningTime="2026-03-09 19:09:39.745836052 +0000 UTC m=+2716.907211908"
Mar 09 19:09:41 crc kubenswrapper[4821]: I0309 19:09:41.740453 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p82dx"]
Mar 09 19:09:41 crc kubenswrapper[4821]: I0309 19:09:41.743029 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p82dx" podUID="9dac56dc-1f53-44b2-b4ab-1d95102d6a03" containerName="registry-server" containerID="cri-o://6789daf80dc3451bbbb1f060de39e1d31044b2a2906e67a601afa494f2595cba" gracePeriod=2
Mar 09 19:09:42 crc kubenswrapper[4821]: I0309 19:09:42.175865 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p82dx"
Mar 09 19:09:42 crc kubenswrapper[4821]: I0309 19:09:42.279975 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dac56dc-1f53-44b2-b4ab-1d95102d6a03-catalog-content\") pod \"9dac56dc-1f53-44b2-b4ab-1d95102d6a03\" (UID: \"9dac56dc-1f53-44b2-b4ab-1d95102d6a03\") "
Mar 09 19:09:42 crc kubenswrapper[4821]: I0309 19:09:42.280417 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cmn9\" (UniqueName: \"kubernetes.io/projected/9dac56dc-1f53-44b2-b4ab-1d95102d6a03-kube-api-access-9cmn9\") pod \"9dac56dc-1f53-44b2-b4ab-1d95102d6a03\" (UID: \"9dac56dc-1f53-44b2-b4ab-1d95102d6a03\") "
Mar 09 19:09:42 crc kubenswrapper[4821]: I0309 19:09:42.280482 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dac56dc-1f53-44b2-b4ab-1d95102d6a03-utilities\") pod \"9dac56dc-1f53-44b2-b4ab-1d95102d6a03\" (UID: \"9dac56dc-1f53-44b2-b4ab-1d95102d6a03\") "
Mar 09 19:09:42 crc kubenswrapper[4821]: I0309 19:09:42.281340 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dac56dc-1f53-44b2-b4ab-1d95102d6a03-utilities" (OuterVolumeSpecName: "utilities") pod "9dac56dc-1f53-44b2-b4ab-1d95102d6a03" (UID: "9dac56dc-1f53-44b2-b4ab-1d95102d6a03"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:09:42 crc kubenswrapper[4821]: I0309 19:09:42.286829 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dac56dc-1f53-44b2-b4ab-1d95102d6a03-kube-api-access-9cmn9" (OuterVolumeSpecName: "kube-api-access-9cmn9") pod "9dac56dc-1f53-44b2-b4ab-1d95102d6a03" (UID: "9dac56dc-1f53-44b2-b4ab-1d95102d6a03"). InnerVolumeSpecName "kube-api-access-9cmn9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:09:42 crc kubenswrapper[4821]: I0309 19:09:42.384723 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cmn9\" (UniqueName: \"kubernetes.io/projected/9dac56dc-1f53-44b2-b4ab-1d95102d6a03-kube-api-access-9cmn9\") on node \"crc\" DevicePath \"\""
Mar 09 19:09:42 crc kubenswrapper[4821]: I0309 19:09:42.384756 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dac56dc-1f53-44b2-b4ab-1d95102d6a03-utilities\") on node \"crc\" DevicePath \"\""
Mar 09 19:09:42 crc kubenswrapper[4821]: I0309 19:09:42.425111 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dac56dc-1f53-44b2-b4ab-1d95102d6a03-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9dac56dc-1f53-44b2-b4ab-1d95102d6a03" (UID: "9dac56dc-1f53-44b2-b4ab-1d95102d6a03"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:09:42 crc kubenswrapper[4821]: I0309 19:09:42.486446 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dac56dc-1f53-44b2-b4ab-1d95102d6a03-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 09 19:09:42 crc kubenswrapper[4821]: I0309 19:09:42.757865 4821 generic.go:334] "Generic (PLEG): container finished" podID="9dac56dc-1f53-44b2-b4ab-1d95102d6a03" containerID="6789daf80dc3451bbbb1f060de39e1d31044b2a2906e67a601afa494f2595cba" exitCode=0
Mar 09 19:09:42 crc kubenswrapper[4821]: I0309 19:09:42.757923 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p82dx" event={"ID":"9dac56dc-1f53-44b2-b4ab-1d95102d6a03","Type":"ContainerDied","Data":"6789daf80dc3451bbbb1f060de39e1d31044b2a2906e67a601afa494f2595cba"}
Mar 09 19:09:42 crc kubenswrapper[4821]: I0309 19:09:42.757953 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p82dx" event={"ID":"9dac56dc-1f53-44b2-b4ab-1d95102d6a03","Type":"ContainerDied","Data":"32ee4bbb71c99b60732ddc0f46eff9d7773b56f50d735092f096e38054e6bbe0"}
Mar 09 19:09:42 crc kubenswrapper[4821]: I0309 19:09:42.757973 4821 scope.go:117] "RemoveContainer" containerID="6789daf80dc3451bbbb1f060de39e1d31044b2a2906e67a601afa494f2595cba"
Mar 09 19:09:42 crc kubenswrapper[4821]: I0309 19:09:42.758126 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p82dx"
Mar 09 19:09:42 crc kubenswrapper[4821]: I0309 19:09:42.768234 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4c79e2d2-49cf-452a-8d6e-7cceaad4c479","Type":"ContainerStarted","Data":"51209f9160e659f2f422d7286f76e40ed49b03e4883162fac1cf03be67727a7e"}
Mar 09 19:09:42 crc kubenswrapper[4821]: I0309 19:09:42.768398 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:42 crc kubenswrapper[4821]: I0309 19:09:42.802706 4821 scope.go:117] "RemoveContainer" containerID="5896c3be44ef82b837a5d11b5161366395a369193a95602899320978e72491a7"
Mar 09 19:09:42 crc kubenswrapper[4821]: I0309 19:09:42.812374 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p82dx"]
Mar 09 19:09:42 crc kubenswrapper[4821]: I0309 19:09:42.823162 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p82dx"]
Mar 09 19:09:42 crc kubenswrapper[4821]: I0309 19:09:42.842934 4821 scope.go:117] "RemoveContainer" containerID="0490a1d8fd776e784a66bc1e32fe75954f939caa0bf775d290fc2220b5f8fce9"
Mar 09 19:09:42 crc kubenswrapper[4821]: I0309 19:09:42.843950 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.81560136 podStartE2EDuration="6.843926633s" podCreationTimestamp="2026-03-09 19:09:36 +0000 UTC" firstStartedPulling="2026-03-09 19:09:37.617197928 +0000 UTC m=+2714.778573784" lastFinishedPulling="2026-03-09 19:09:41.645523201 +0000 UTC m=+2718.806899057" observedRunningTime="2026-03-09 19:09:42.835667259 +0000 UTC m=+2719.997043135" watchObservedRunningTime="2026-03-09 19:09:42.843926633 +0000 UTC m=+2720.005302489"
Mar 09 19:09:42 crc kubenswrapper[4821]: I0309 19:09:42.886774 4821 scope.go:117] "RemoveContainer" containerID="6789daf80dc3451bbbb1f060de39e1d31044b2a2906e67a601afa494f2595cba"
Mar 09 19:09:42 crc kubenswrapper[4821]: E0309 19:09:42.887694 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6789daf80dc3451bbbb1f060de39e1d31044b2a2906e67a601afa494f2595cba\": container with ID starting with 6789daf80dc3451bbbb1f060de39e1d31044b2a2906e67a601afa494f2595cba not found: ID does not exist" containerID="6789daf80dc3451bbbb1f060de39e1d31044b2a2906e67a601afa494f2595cba"
Mar 09 19:09:42 crc kubenswrapper[4821]: I0309 19:09:42.887727 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6789daf80dc3451bbbb1f060de39e1d31044b2a2906e67a601afa494f2595cba"} err="failed to get container status \"6789daf80dc3451bbbb1f060de39e1d31044b2a2906e67a601afa494f2595cba\": rpc error: code = NotFound desc = could not find container \"6789daf80dc3451bbbb1f060de39e1d31044b2a2906e67a601afa494f2595cba\": container with ID starting with 6789daf80dc3451bbbb1f060de39e1d31044b2a2906e67a601afa494f2595cba not found: ID does not exist"
Mar 09 19:09:42 crc kubenswrapper[4821]: I0309 19:09:42.887745 4821 scope.go:117] "RemoveContainer" containerID="5896c3be44ef82b837a5d11b5161366395a369193a95602899320978e72491a7"
Mar 09 19:09:42 crc kubenswrapper[4821]: E0309 19:09:42.888135 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5896c3be44ef82b837a5d11b5161366395a369193a95602899320978e72491a7\": container with ID starting with 5896c3be44ef82b837a5d11b5161366395a369193a95602899320978e72491a7 not found: ID does not exist" containerID="5896c3be44ef82b837a5d11b5161366395a369193a95602899320978e72491a7"
Mar 09 19:09:42 crc kubenswrapper[4821]: I0309 19:09:42.888262 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5896c3be44ef82b837a5d11b5161366395a369193a95602899320978e72491a7"} err="failed to get container status \"5896c3be44ef82b837a5d11b5161366395a369193a95602899320978e72491a7\": rpc error: code = NotFound desc = could not find container \"5896c3be44ef82b837a5d11b5161366395a369193a95602899320978e72491a7\": container with ID starting with 5896c3be44ef82b837a5d11b5161366395a369193a95602899320978e72491a7 not found: ID does not exist"
Mar 09 19:09:42 crc kubenswrapper[4821]: I0309 19:09:42.888396 4821 scope.go:117] "RemoveContainer" containerID="0490a1d8fd776e784a66bc1e32fe75954f939caa0bf775d290fc2220b5f8fce9"
Mar 09 19:09:42 crc kubenswrapper[4821]: E0309 19:09:42.888797 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0490a1d8fd776e784a66bc1e32fe75954f939caa0bf775d290fc2220b5f8fce9\": container with ID starting with 0490a1d8fd776e784a66bc1e32fe75954f939caa0bf775d290fc2220b5f8fce9 not found: ID does not exist" containerID="0490a1d8fd776e784a66bc1e32fe75954f939caa0bf775d290fc2220b5f8fce9"
Mar 09 19:09:42 crc kubenswrapper[4821]: I0309 19:09:42.888897 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0490a1d8fd776e784a66bc1e32fe75954f939caa0bf775d290fc2220b5f8fce9"} err="failed to get container status \"0490a1d8fd776e784a66bc1e32fe75954f939caa0bf775d290fc2220b5f8fce9\": rpc error: code = NotFound desc = could not find container \"0490a1d8fd776e784a66bc1e32fe75954f939caa0bf775d290fc2220b5f8fce9\": container with ID starting with 0490a1d8fd776e784a66bc1e32fe75954f939caa0bf775d290fc2220b5f8fce9 not found: ID does not exist"
Mar 09 19:09:43 crc kubenswrapper[4821]: I0309 19:09:43.181648 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-2"
Mar 09 19:09:43 crc kubenswrapper[4821]: I0309 19:09:43.566103 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dac56dc-1f53-44b2-b4ab-1d95102d6a03" path="/var/lib/kubelet/pods/9dac56dc-1f53-44b2-b4ab-1d95102d6a03/volumes"
Mar 09 19:09:43 crc kubenswrapper[4821]: I0309 19:09:43.766314 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-2"
Mar 09 19:09:48 crc kubenswrapper[4821]: I0309 19:09:48.765793 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-2"
Mar 09 19:09:48 crc kubenswrapper[4821]: I0309 19:09:48.776660 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-2"
Mar 09 19:09:48 crc kubenswrapper[4821]: I0309 19:09:48.846784 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-2"
Mar 09 19:09:49 crc kubenswrapper[4821]: I0309 19:09:49.371502 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"]
Mar 09 19:09:49 crc kubenswrapper[4821]: I0309 19:09:49.381259 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"]
Mar 09 19:09:49 crc kubenswrapper[4821]: I0309 19:09:49.381558 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="5bfa8bd2-ec0d-4052-aca5-ddf91815e698" containerName="watcher-kuttl-api-log" containerID="cri-o://c40052dd1230a115af913511fefb28a6cbafb3d71c677adf5156e45eb4c7f18d" gracePeriod=30
Mar 09 19:09:49 crc kubenswrapper[4821]: I0309 19:09:49.381650 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="5bfa8bd2-ec0d-4052-aca5-ddf91815e698" containerName="watcher-api" containerID="cri-o://359584e4ddd88b116702047c4d2e4548c8a852643a1681d3e7fdf9a68d770619" gracePeriod=30
Mar 09 19:09:49 crc kubenswrapper[4821]: I0309 19:09:49.849138 4821 generic.go:334] "Generic (PLEG): container finished" podID="5bfa8bd2-ec0d-4052-aca5-ddf91815e698" containerID="c40052dd1230a115af913511fefb28a6cbafb3d71c677adf5156e45eb4c7f18d" exitCode=143
Mar 09 19:09:49 crc kubenswrapper[4821]: I0309 19:09:49.849210 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"5bfa8bd2-ec0d-4052-aca5-ddf91815e698","Type":"ContainerDied","Data":"c40052dd1230a115af913511fefb28a6cbafb3d71c677adf5156e45eb4c7f18d"}
Mar 09 19:09:49 crc kubenswrapper[4821]: I0309 19:09:49.892796 4821 scope.go:117] "RemoveContainer" containerID="6930d9264f7daf08da5ebe160cbeba30c0e39badcfa9a0070e5975bd1d936f96"
Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.212406 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1"
Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.253608 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-config-data\") pod \"5bfa8bd2-ec0d-4052-aca5-ddf91815e698\" (UID: \"5bfa8bd2-ec0d-4052-aca5-ddf91815e698\") "
Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.253658 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwvnl\" (UniqueName: \"kubernetes.io/projected/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-kube-api-access-xwvnl\") pod \"5bfa8bd2-ec0d-4052-aca5-ddf91815e698\" (UID: \"5bfa8bd2-ec0d-4052-aca5-ddf91815e698\") "
Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.253699 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-custom-prometheus-ca\") pod \"5bfa8bd2-ec0d-4052-aca5-ddf91815e698\" (UID: \"5bfa8bd2-ec0d-4052-aca5-ddf91815e698\") "
Mar 09 19:09:50 crc
kubenswrapper[4821]: I0309 19:09:50.253756 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-combined-ca-bundle\") pod \"5bfa8bd2-ec0d-4052-aca5-ddf91815e698\" (UID: \"5bfa8bd2-ec0d-4052-aca5-ddf91815e698\") " Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.253780 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-logs\") pod \"5bfa8bd2-ec0d-4052-aca5-ddf91815e698\" (UID: \"5bfa8bd2-ec0d-4052-aca5-ddf91815e698\") " Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.253821 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-cert-memcached-mtls\") pod \"5bfa8bd2-ec0d-4052-aca5-ddf91815e698\" (UID: \"5bfa8bd2-ec0d-4052-aca5-ddf91815e698\") " Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.268272 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-logs" (OuterVolumeSpecName: "logs") pod "5bfa8bd2-ec0d-4052-aca5-ddf91815e698" (UID: "5bfa8bd2-ec0d-4052-aca5-ddf91815e698"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.270051 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-logs\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.276593 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-kube-api-access-xwvnl" (OuterVolumeSpecName: "kube-api-access-xwvnl") pod "5bfa8bd2-ec0d-4052-aca5-ddf91815e698" (UID: "5bfa8bd2-ec0d-4052-aca5-ddf91815e698"). InnerVolumeSpecName "kube-api-access-xwvnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.282158 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "5bfa8bd2-ec0d-4052-aca5-ddf91815e698" (UID: "5bfa8bd2-ec0d-4052-aca5-ddf91815e698"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.299283 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5bfa8bd2-ec0d-4052-aca5-ddf91815e698" (UID: "5bfa8bd2-ec0d-4052-aca5-ddf91815e698"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.323615 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-config-data" (OuterVolumeSpecName: "config-data") pod "5bfa8bd2-ec0d-4052-aca5-ddf91815e698" (UID: "5bfa8bd2-ec0d-4052-aca5-ddf91815e698"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.341241 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "5bfa8bd2-ec0d-4052-aca5-ddf91815e698" (UID: "5bfa8bd2-ec0d-4052-aca5-ddf91815e698"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.371294 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.371332 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwvnl\" (UniqueName: \"kubernetes.io/projected/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-kube-api-access-xwvnl\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.371345 4821 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.371354 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-combined-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.371362 4821 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5bfa8bd2-ec0d-4052-aca5-ddf91815e698-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.552548 4821 scope.go:117] "RemoveContainer" containerID="2e97656a0c5c8ccdf249fb34ee3c3f4cd8501de8fced67c6294f9be90f6444d8" Mar 09 19:09:50 crc kubenswrapper[4821]: E0309 19:09:50.553240 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.861971 4821 generic.go:334] "Generic (PLEG): container finished" podID="5bfa8bd2-ec0d-4052-aca5-ddf91815e698" containerID="359584e4ddd88b116702047c4d2e4548c8a852643a1681d3e7fdf9a68d770619" exitCode=0 Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.862033 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"5bfa8bd2-ec0d-4052-aca5-ddf91815e698","Type":"ContainerDied","Data":"359584e4ddd88b116702047c4d2e4548c8a852643a1681d3e7fdf9a68d770619"} Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.862856 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"5bfa8bd2-ec0d-4052-aca5-ddf91815e698","Type":"ContainerDied","Data":"5e8fe3022c1ad3e635d9fbc0219752bedd8f4991526f62e4efc07959fdea3bdb"} Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.862081 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.862904 4821 scope.go:117] "RemoveContainer" containerID="359584e4ddd88b116702047c4d2e4548c8a852643a1681d3e7fdf9a68d770619" Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.863283 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-2" podUID="bc0e5c25-e99b-42f0-95e2-737b34d083df" containerName="watcher-kuttl-api-log" containerID="cri-o://cd45f0c159c349c187564661a2e70bfa324fb8ff83b766425a2b776b22a1357e" gracePeriod=30 Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.863397 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-2" podUID="bc0e5c25-e99b-42f0-95e2-737b34d083df" containerName="watcher-api" containerID="cri-o://7e400a93410201bab8fc36d473c5d412d6f43bef8a2efd4a91360d61a4e11014" gracePeriod=30 Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.891054 4821 scope.go:117] "RemoveContainer" containerID="c40052dd1230a115af913511fefb28a6cbafb3d71c677adf5156e45eb4c7f18d" Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.909009 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.918412 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.920794 4821 scope.go:117] "RemoveContainer" containerID="359584e4ddd88b116702047c4d2e4548c8a852643a1681d3e7fdf9a68d770619" Mar 09 19:09:50 crc kubenswrapper[4821]: E0309 19:09:50.921417 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"359584e4ddd88b116702047c4d2e4548c8a852643a1681d3e7fdf9a68d770619\": container with ID starting with 
359584e4ddd88b116702047c4d2e4548c8a852643a1681d3e7fdf9a68d770619 not found: ID does not exist" containerID="359584e4ddd88b116702047c4d2e4548c8a852643a1681d3e7fdf9a68d770619" Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.921466 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"359584e4ddd88b116702047c4d2e4548c8a852643a1681d3e7fdf9a68d770619"} err="failed to get container status \"359584e4ddd88b116702047c4d2e4548c8a852643a1681d3e7fdf9a68d770619\": rpc error: code = NotFound desc = could not find container \"359584e4ddd88b116702047c4d2e4548c8a852643a1681d3e7fdf9a68d770619\": container with ID starting with 359584e4ddd88b116702047c4d2e4548c8a852643a1681d3e7fdf9a68d770619 not found: ID does not exist" Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.921497 4821 scope.go:117] "RemoveContainer" containerID="c40052dd1230a115af913511fefb28a6cbafb3d71c677adf5156e45eb4c7f18d" Mar 09 19:09:50 crc kubenswrapper[4821]: E0309 19:09:50.921932 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c40052dd1230a115af913511fefb28a6cbafb3d71c677adf5156e45eb4c7f18d\": container with ID starting with c40052dd1230a115af913511fefb28a6cbafb3d71c677adf5156e45eb4c7f18d not found: ID does not exist" containerID="c40052dd1230a115af913511fefb28a6cbafb3d71c677adf5156e45eb4c7f18d" Mar 09 19:09:50 crc kubenswrapper[4821]: I0309 19:09:50.921964 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c40052dd1230a115af913511fefb28a6cbafb3d71c677adf5156e45eb4c7f18d"} err="failed to get container status \"c40052dd1230a115af913511fefb28a6cbafb3d71c677adf5156e45eb4c7f18d\": rpc error: code = NotFound desc = could not find container \"c40052dd1230a115af913511fefb28a6cbafb3d71c677adf5156e45eb4c7f18d\": container with ID starting with c40052dd1230a115af913511fefb28a6cbafb3d71c677adf5156e45eb4c7f18d not found: ID does not 
exist" Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.563843 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bfa8bd2-ec0d-4052-aca5-ddf91815e698" path="/var/lib/kubelet/pods/5bfa8bd2-ec0d-4052-aca5-ddf91815e698/volumes" Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.780482 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2" Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.872471 4821 generic.go:334] "Generic (PLEG): container finished" podID="bc0e5c25-e99b-42f0-95e2-737b34d083df" containerID="7e400a93410201bab8fc36d473c5d412d6f43bef8a2efd4a91360d61a4e11014" exitCode=0 Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.872502 4821 generic.go:334] "Generic (PLEG): container finished" podID="bc0e5c25-e99b-42f0-95e2-737b34d083df" containerID="cd45f0c159c349c187564661a2e70bfa324fb8ff83b766425a2b776b22a1357e" exitCode=143 Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.872520 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"bc0e5c25-e99b-42f0-95e2-737b34d083df","Type":"ContainerDied","Data":"7e400a93410201bab8fc36d473c5d412d6f43bef8a2efd4a91360d61a4e11014"} Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.872524 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2" Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.872541 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"bc0e5c25-e99b-42f0-95e2-737b34d083df","Type":"ContainerDied","Data":"cd45f0c159c349c187564661a2e70bfa324fb8ff83b766425a2b776b22a1357e"} Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.872555 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"bc0e5c25-e99b-42f0-95e2-737b34d083df","Type":"ContainerDied","Data":"324c4ec7be9963a3135c550824cbf9afa0bdf9c1b40275756ff4d1ea8053c923"} Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.872571 4821 scope.go:117] "RemoveContainer" containerID="7e400a93410201bab8fc36d473c5d412d6f43bef8a2efd4a91360d61a4e11014" Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.891552 4821 scope.go:117] "RemoveContainer" containerID="cd45f0c159c349c187564661a2e70bfa324fb8ff83b766425a2b776b22a1357e" Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.895871 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjzqp\" (UniqueName: \"kubernetes.io/projected/bc0e5c25-e99b-42f0-95e2-737b34d083df-kube-api-access-tjzqp\") pod \"bc0e5c25-e99b-42f0-95e2-737b34d083df\" (UID: \"bc0e5c25-e99b-42f0-95e2-737b34d083df\") " Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.895993 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc0e5c25-e99b-42f0-95e2-737b34d083df-config-data\") pod \"bc0e5c25-e99b-42f0-95e2-737b34d083df\" (UID: \"bc0e5c25-e99b-42f0-95e2-737b34d083df\") " Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.896063 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bc0e5c25-e99b-42f0-95e2-737b34d083df-combined-ca-bundle\") pod \"bc0e5c25-e99b-42f0-95e2-737b34d083df\" (UID: \"bc0e5c25-e99b-42f0-95e2-737b34d083df\") " Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.896143 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc0e5c25-e99b-42f0-95e2-737b34d083df-logs\") pod \"bc0e5c25-e99b-42f0-95e2-737b34d083df\" (UID: \"bc0e5c25-e99b-42f0-95e2-737b34d083df\") " Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.896259 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/bc0e5c25-e99b-42f0-95e2-737b34d083df-cert-memcached-mtls\") pod \"bc0e5c25-e99b-42f0-95e2-737b34d083df\" (UID: \"bc0e5c25-e99b-42f0-95e2-737b34d083df\") " Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.896934 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bc0e5c25-e99b-42f0-95e2-737b34d083df-custom-prometheus-ca\") pod \"bc0e5c25-e99b-42f0-95e2-737b34d083df\" (UID: \"bc0e5c25-e99b-42f0-95e2-737b34d083df\") " Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.897888 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc0e5c25-e99b-42f0-95e2-737b34d083df-logs" (OuterVolumeSpecName: "logs") pod "bc0e5c25-e99b-42f0-95e2-737b34d083df" (UID: "bc0e5c25-e99b-42f0-95e2-737b34d083df"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.902584 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc0e5c25-e99b-42f0-95e2-737b34d083df-kube-api-access-tjzqp" (OuterVolumeSpecName: "kube-api-access-tjzqp") pod "bc0e5c25-e99b-42f0-95e2-737b34d083df" (UID: "bc0e5c25-e99b-42f0-95e2-737b34d083df"). InnerVolumeSpecName "kube-api-access-tjzqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.911735 4821 scope.go:117] "RemoveContainer" containerID="7e400a93410201bab8fc36d473c5d412d6f43bef8a2efd4a91360d61a4e11014" Mar 09 19:09:51 crc kubenswrapper[4821]: E0309 19:09:51.912593 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e400a93410201bab8fc36d473c5d412d6f43bef8a2efd4a91360d61a4e11014\": container with ID starting with 7e400a93410201bab8fc36d473c5d412d6f43bef8a2efd4a91360d61a4e11014 not found: ID does not exist" containerID="7e400a93410201bab8fc36d473c5d412d6f43bef8a2efd4a91360d61a4e11014" Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.912622 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e400a93410201bab8fc36d473c5d412d6f43bef8a2efd4a91360d61a4e11014"} err="failed to get container status \"7e400a93410201bab8fc36d473c5d412d6f43bef8a2efd4a91360d61a4e11014\": rpc error: code = NotFound desc = could not find container \"7e400a93410201bab8fc36d473c5d412d6f43bef8a2efd4a91360d61a4e11014\": container with ID starting with 7e400a93410201bab8fc36d473c5d412d6f43bef8a2efd4a91360d61a4e11014 not found: ID does not exist" Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.912642 4821 scope.go:117] "RemoveContainer" containerID="cd45f0c159c349c187564661a2e70bfa324fb8ff83b766425a2b776b22a1357e" Mar 09 19:09:51 crc kubenswrapper[4821]: E0309 19:09:51.912846 
4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd45f0c159c349c187564661a2e70bfa324fb8ff83b766425a2b776b22a1357e\": container with ID starting with cd45f0c159c349c187564661a2e70bfa324fb8ff83b766425a2b776b22a1357e not found: ID does not exist" containerID="cd45f0c159c349c187564661a2e70bfa324fb8ff83b766425a2b776b22a1357e" Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.912866 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd45f0c159c349c187564661a2e70bfa324fb8ff83b766425a2b776b22a1357e"} err="failed to get container status \"cd45f0c159c349c187564661a2e70bfa324fb8ff83b766425a2b776b22a1357e\": rpc error: code = NotFound desc = could not find container \"cd45f0c159c349c187564661a2e70bfa324fb8ff83b766425a2b776b22a1357e\": container with ID starting with cd45f0c159c349c187564661a2e70bfa324fb8ff83b766425a2b776b22a1357e not found: ID does not exist" Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.912878 4821 scope.go:117] "RemoveContainer" containerID="7e400a93410201bab8fc36d473c5d412d6f43bef8a2efd4a91360d61a4e11014" Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.913124 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e400a93410201bab8fc36d473c5d412d6f43bef8a2efd4a91360d61a4e11014"} err="failed to get container status \"7e400a93410201bab8fc36d473c5d412d6f43bef8a2efd4a91360d61a4e11014\": rpc error: code = NotFound desc = could not find container \"7e400a93410201bab8fc36d473c5d412d6f43bef8a2efd4a91360d61a4e11014\": container with ID starting with 7e400a93410201bab8fc36d473c5d412d6f43bef8a2efd4a91360d61a4e11014 not found: ID does not exist" Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.913145 4821 scope.go:117] "RemoveContainer" containerID="cd45f0c159c349c187564661a2e70bfa324fb8ff83b766425a2b776b22a1357e" Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 
19:09:51.913394 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd45f0c159c349c187564661a2e70bfa324fb8ff83b766425a2b776b22a1357e"} err="failed to get container status \"cd45f0c159c349c187564661a2e70bfa324fb8ff83b766425a2b776b22a1357e\": rpc error: code = NotFound desc = could not find container \"cd45f0c159c349c187564661a2e70bfa324fb8ff83b766425a2b776b22a1357e\": container with ID starting with cd45f0c159c349c187564661a2e70bfa324fb8ff83b766425a2b776b22a1357e not found: ID does not exist" Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.933522 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc0e5c25-e99b-42f0-95e2-737b34d083df-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "bc0e5c25-e99b-42f0-95e2-737b34d083df" (UID: "bc0e5c25-e99b-42f0-95e2-737b34d083df"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.947103 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc0e5c25-e99b-42f0-95e2-737b34d083df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc0e5c25-e99b-42f0-95e2-737b34d083df" (UID: "bc0e5c25-e99b-42f0-95e2-737b34d083df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.979816 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc0e5c25-e99b-42f0-95e2-737b34d083df-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "bc0e5c25-e99b-42f0-95e2-737b34d083df" (UID: "bc0e5c25-e99b-42f0-95e2-737b34d083df"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.985558 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc0e5c25-e99b-42f0-95e2-737b34d083df-config-data" (OuterVolumeSpecName: "config-data") pod "bc0e5c25-e99b-42f0-95e2-737b34d083df" (UID: "bc0e5c25-e99b-42f0-95e2-737b34d083df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.998782 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc0e5c25-e99b-42f0-95e2-737b34d083df-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.999059 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc0e5c25-e99b-42f0-95e2-737b34d083df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.999077 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc0e5c25-e99b-42f0-95e2-737b34d083df-logs\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.999092 4821 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/bc0e5c25-e99b-42f0-95e2-737b34d083df-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.999104 4821 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bc0e5c25-e99b-42f0-95e2-737b34d083df-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:51 crc kubenswrapper[4821]: I0309 19:09:51.999115 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjzqp\" (UniqueName: 
\"kubernetes.io/projected/bc0e5c25-e99b-42f0-95e2-737b34d083df-kube-api-access-tjzqp\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:52 crc kubenswrapper[4821]: I0309 19:09:52.204935 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Mar 09 19:09:52 crc kubenswrapper[4821]: I0309 19:09:52.212383 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Mar 09 19:09:52 crc kubenswrapper[4821]: I0309 19:09:52.523103 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Mar 09 19:09:52 crc kubenswrapper[4821]: I0309 19:09:52.523415 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f9ffe20a-abd1-42a6-b924-21ad8b0d77c2" containerName="watcher-kuttl-api-log" containerID="cri-o://75bb2679bb4f443ec4c998d1d752ab2f681b5b291755f852c4d2d25b3705fa17" gracePeriod=30 Mar 09 19:09:52 crc kubenswrapper[4821]: I0309 19:09:52.523568 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f9ffe20a-abd1-42a6-b924-21ad8b0d77c2" containerName="watcher-api" containerID="cri-o://32b83f6926622f1f4e2c55d0218e8a5b9bd17831a21d978b820ceb52e5810616" gracePeriod=30 Mar 09 19:09:52 crc kubenswrapper[4821]: I0309 19:09:52.887601 4821 generic.go:334] "Generic (PLEG): container finished" podID="f9ffe20a-abd1-42a6-b924-21ad8b0d77c2" containerID="75bb2679bb4f443ec4c998d1d752ab2f681b5b291755f852c4d2d25b3705fa17" exitCode=143 Mar 09 19:09:52 crc kubenswrapper[4821]: I0309 19:09:52.887664 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2","Type":"ContainerDied","Data":"75bb2679bb4f443ec4c998d1d752ab2f681b5b291755f852c4d2d25b3705fa17"} Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.439713 4821 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.522576 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-custom-prometheus-ca\") pod \"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2\" (UID: \"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2\") " Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.522643 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-cert-memcached-mtls\") pod \"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2\" (UID: \"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2\") " Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.522723 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnhwl\" (UniqueName: \"kubernetes.io/projected/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-kube-api-access-pnhwl\") pod \"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2\" (UID: \"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2\") " Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.522769 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-config-data\") pod \"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2\" (UID: \"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2\") " Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.522792 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-combined-ca-bundle\") pod \"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2\" (UID: \"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2\") " Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.522851 4821 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-logs\") pod \"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2\" (UID: \"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2\") " Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.523349 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-logs" (OuterVolumeSpecName: "logs") pod "f9ffe20a-abd1-42a6-b924-21ad8b0d77c2" (UID: "f9ffe20a-abd1-42a6-b924-21ad8b0d77c2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.547615 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-kube-api-access-pnhwl" (OuterVolumeSpecName: "kube-api-access-pnhwl") pod "f9ffe20a-abd1-42a6-b924-21ad8b0d77c2" (UID: "f9ffe20a-abd1-42a6-b924-21ad8b0d77c2"). InnerVolumeSpecName "kube-api-access-pnhwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.571434 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc0e5c25-e99b-42f0-95e2-737b34d083df" path="/var/lib/kubelet/pods/bc0e5c25-e99b-42f0-95e2-737b34d083df/volumes" Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.573490 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9ffe20a-abd1-42a6-b924-21ad8b0d77c2" (UID: "f9ffe20a-abd1-42a6-b924-21ad8b0d77c2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.583040 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "f9ffe20a-abd1-42a6-b924-21ad8b0d77c2" (UID: "f9ffe20a-abd1-42a6-b924-21ad8b0d77c2"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.617975 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-config-data" (OuterVolumeSpecName: "config-data") pod "f9ffe20a-abd1-42a6-b924-21ad8b0d77c2" (UID: "f9ffe20a-abd1-42a6-b924-21ad8b0d77c2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.625192 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnhwl\" (UniqueName: \"kubernetes.io/projected/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-kube-api-access-pnhwl\") on node \"crc\" DevicePath \"\""
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.625224 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-config-data\") on node \"crc\" DevicePath \"\""
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.625233 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.625241 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-logs\") on node \"crc\" DevicePath \"\"" Mar 
09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.625248 4821 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.642524 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "f9ffe20a-abd1-42a6-b924-21ad8b0d77c2" (UID: "f9ffe20a-abd1-42a6-b924-21ad8b0d77c2"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.719833 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-s4tfg"]
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.726266 4821 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.726782 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-s4tfg"]
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.801918 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.802126 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="7f465a4a-1555-4736-a32b-08bd0456ac89" containerName="watcher-decision-engine" containerID="cri-o://ff74a419b1b83d97f1109db3f4c2bfc1b92c861285733e18477bf594ebfc899d" gracePeriod=30
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.831208 4821 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.831464 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="70c7e606-d70f-4bdd-8bc5-6456f4c0a253" containerName="watcher-applier" containerID="cri-o://45b1af2fa71ceaf2c7f7f6fa8ed653211464cb41116737228426a154ceb21ab5" gracePeriod=30
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.837014 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher0f81-account-delete-rq89g"]
Mar 09 19:09:53 crc kubenswrapper[4821]: E0309 19:09:53.837306 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dac56dc-1f53-44b2-b4ab-1d95102d6a03" containerName="extract-content"
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.837322 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dac56dc-1f53-44b2-b4ab-1d95102d6a03" containerName="extract-content"
Mar 09 19:09:53 crc kubenswrapper[4821]: E0309 19:09:53.837344 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ffe20a-abd1-42a6-b924-21ad8b0d77c2" containerName="watcher-api"
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.837351 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ffe20a-abd1-42a6-b924-21ad8b0d77c2" containerName="watcher-api"
Mar 09 19:09:53 crc kubenswrapper[4821]: E0309 19:09:53.837360 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bfa8bd2-ec0d-4052-aca5-ddf91815e698" containerName="watcher-api"
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.837366 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bfa8bd2-ec0d-4052-aca5-ddf91815e698" containerName="watcher-api"
Mar 09 19:09:53 crc kubenswrapper[4821]: E0309 19:09:53.837385 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ffe20a-abd1-42a6-b924-21ad8b0d77c2" 
containerName="watcher-kuttl-api-log"
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.837390 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ffe20a-abd1-42a6-b924-21ad8b0d77c2" containerName="watcher-kuttl-api-log"
Mar 09 19:09:53 crc kubenswrapper[4821]: E0309 19:09:53.837401 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dac56dc-1f53-44b2-b4ab-1d95102d6a03" containerName="registry-server"
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.837406 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dac56dc-1f53-44b2-b4ab-1d95102d6a03" containerName="registry-server"
Mar 09 19:09:53 crc kubenswrapper[4821]: E0309 19:09:53.837415 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dac56dc-1f53-44b2-b4ab-1d95102d6a03" containerName="extract-utilities"
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.837421 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dac56dc-1f53-44b2-b4ab-1d95102d6a03" containerName="extract-utilities"
Mar 09 19:09:53 crc kubenswrapper[4821]: E0309 19:09:53.837440 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc0e5c25-e99b-42f0-95e2-737b34d083df" containerName="watcher-kuttl-api-log"
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.837447 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc0e5c25-e99b-42f0-95e2-737b34d083df" containerName="watcher-kuttl-api-log"
Mar 09 19:09:53 crc kubenswrapper[4821]: E0309 19:09:53.837455 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bfa8bd2-ec0d-4052-aca5-ddf91815e698" containerName="watcher-kuttl-api-log"
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.837461 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bfa8bd2-ec0d-4052-aca5-ddf91815e698" containerName="watcher-kuttl-api-log"
Mar 09 19:09:53 crc kubenswrapper[4821]: E0309 19:09:53.837470 4821 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bc0e5c25-e99b-42f0-95e2-737b34d083df" containerName="watcher-api"
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.837476 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc0e5c25-e99b-42f0-95e2-737b34d083df" containerName="watcher-api"
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.837615 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc0e5c25-e99b-42f0-95e2-737b34d083df" containerName="watcher-kuttl-api-log"
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.837624 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dac56dc-1f53-44b2-b4ab-1d95102d6a03" containerName="registry-server"
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.837634 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9ffe20a-abd1-42a6-b924-21ad8b0d77c2" containerName="watcher-kuttl-api-log"
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.837644 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc0e5c25-e99b-42f0-95e2-737b34d083df" containerName="watcher-api"
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.837655 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9ffe20a-abd1-42a6-b924-21ad8b0d77c2" containerName="watcher-api"
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.837663 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bfa8bd2-ec0d-4052-aca5-ddf91815e698" containerName="watcher-api"
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.837673 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bfa8bd2-ec0d-4052-aca5-ddf91815e698" containerName="watcher-kuttl-api-log"
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.838150 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher0f81-account-delete-rq89g"
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.848000 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher0f81-account-delete-rq89g"]
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.922363 4821 generic.go:334] "Generic (PLEG): container finished" podID="f9ffe20a-abd1-42a6-b924-21ad8b0d77c2" containerID="32b83f6926622f1f4e2c55d0218e8a5b9bd17831a21d978b820ceb52e5810616" exitCode=0
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.922662 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2","Type":"ContainerDied","Data":"32b83f6926622f1f4e2c55d0218e8a5b9bd17831a21d978b820ceb52e5810616"}
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.922690 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f9ffe20a-abd1-42a6-b924-21ad8b0d77c2","Type":"ContainerDied","Data":"006622a4f8d4cb222ef48e982a0391477c462fb9da36de0f643cf7bf929a5c7c"}
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.922705 4821 scope.go:117] "RemoveContainer" containerID="32b83f6926622f1f4e2c55d0218e8a5b9bd17831a21d978b820ceb52e5810616"
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.922860 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.930502 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpls7\" (UniqueName: \"kubernetes.io/projected/83188248-cb09-4336-8684-72af238bb6a7-kube-api-access-cpls7\") pod \"watcher0f81-account-delete-rq89g\" (UID: \"83188248-cb09-4336-8684-72af238bb6a7\") " pod="watcher-kuttl-default/watcher0f81-account-delete-rq89g"
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.930588 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83188248-cb09-4336-8684-72af238bb6a7-operator-scripts\") pod \"watcher0f81-account-delete-rq89g\" (UID: \"83188248-cb09-4336-8684-72af238bb6a7\") " pod="watcher-kuttl-default/watcher0f81-account-delete-rq89g"
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.960561 4821 scope.go:117] "RemoveContainer" containerID="75bb2679bb4f443ec4c998d1d752ab2f681b5b291755f852c4d2d25b3705fa17"
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.979739 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.993128 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.995946 4821 scope.go:117] "RemoveContainer" containerID="32b83f6926622f1f4e2c55d0218e8a5b9bd17831a21d978b820ceb52e5810616"
Mar 09 19:09:53 crc kubenswrapper[4821]: E0309 19:09:53.996751 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32b83f6926622f1f4e2c55d0218e8a5b9bd17831a21d978b820ceb52e5810616\": container with ID starting with 32b83f6926622f1f4e2c55d0218e8a5b9bd17831a21d978b820ceb52e5810616 not 
found: ID does not exist" containerID="32b83f6926622f1f4e2c55d0218e8a5b9bd17831a21d978b820ceb52e5810616"
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.996796 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32b83f6926622f1f4e2c55d0218e8a5b9bd17831a21d978b820ceb52e5810616"} err="failed to get container status \"32b83f6926622f1f4e2c55d0218e8a5b9bd17831a21d978b820ceb52e5810616\": rpc error: code = NotFound desc = could not find container \"32b83f6926622f1f4e2c55d0218e8a5b9bd17831a21d978b820ceb52e5810616\": container with ID starting with 32b83f6926622f1f4e2c55d0218e8a5b9bd17831a21d978b820ceb52e5810616 not found: ID does not exist"
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.996821 4821 scope.go:117] "RemoveContainer" containerID="75bb2679bb4f443ec4c998d1d752ab2f681b5b291755f852c4d2d25b3705fa17"
Mar 09 19:09:53 crc kubenswrapper[4821]: E0309 19:09:53.998802 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75bb2679bb4f443ec4c998d1d752ab2f681b5b291755f852c4d2d25b3705fa17\": container with ID starting with 75bb2679bb4f443ec4c998d1d752ab2f681b5b291755f852c4d2d25b3705fa17 not found: ID does not exist" containerID="75bb2679bb4f443ec4c998d1d752ab2f681b5b291755f852c4d2d25b3705fa17"
Mar 09 19:09:53 crc kubenswrapper[4821]: I0309 19:09:53.998829 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75bb2679bb4f443ec4c998d1d752ab2f681b5b291755f852c4d2d25b3705fa17"} err="failed to get container status \"75bb2679bb4f443ec4c998d1d752ab2f681b5b291755f852c4d2d25b3705fa17\": rpc error: code = NotFound desc = could not find container \"75bb2679bb4f443ec4c998d1d752ab2f681b5b291755f852c4d2d25b3705fa17\": container with ID starting with 75bb2679bb4f443ec4c998d1d752ab2f681b5b291755f852c4d2d25b3705fa17 not found: ID does not exist"
Mar 09 19:09:54 crc kubenswrapper[4821]: I0309 19:09:54.033243 
4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83188248-cb09-4336-8684-72af238bb6a7-operator-scripts\") pod \"watcher0f81-account-delete-rq89g\" (UID: \"83188248-cb09-4336-8684-72af238bb6a7\") " pod="watcher-kuttl-default/watcher0f81-account-delete-rq89g"
Mar 09 19:09:54 crc kubenswrapper[4821]: I0309 19:09:54.033381 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpls7\" (UniqueName: \"kubernetes.io/projected/83188248-cb09-4336-8684-72af238bb6a7-kube-api-access-cpls7\") pod \"watcher0f81-account-delete-rq89g\" (UID: \"83188248-cb09-4336-8684-72af238bb6a7\") " pod="watcher-kuttl-default/watcher0f81-account-delete-rq89g"
Mar 09 19:09:54 crc kubenswrapper[4821]: I0309 19:09:54.034057 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83188248-cb09-4336-8684-72af238bb6a7-operator-scripts\") pod \"watcher0f81-account-delete-rq89g\" (UID: \"83188248-cb09-4336-8684-72af238bb6a7\") " pod="watcher-kuttl-default/watcher0f81-account-delete-rq89g"
Mar 09 19:09:54 crc kubenswrapper[4821]: I0309 19:09:54.054936 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpls7\" (UniqueName: \"kubernetes.io/projected/83188248-cb09-4336-8684-72af238bb6a7-kube-api-access-cpls7\") pod \"watcher0f81-account-delete-rq89g\" (UID: \"83188248-cb09-4336-8684-72af238bb6a7\") " pod="watcher-kuttl-default/watcher0f81-account-delete-rq89g"
Mar 09 19:09:54 crc kubenswrapper[4821]: I0309 19:09:54.154563 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher0f81-account-delete-rq89g"
Mar 09 19:09:54 crc kubenswrapper[4821]: I0309 19:09:54.699830 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher0f81-account-delete-rq89g"]
Mar 09 19:09:54 crc kubenswrapper[4821]: W0309 19:09:54.704527 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83188248_cb09_4336_8684_72af238bb6a7.slice/crio-de5569d0998ecc32b919f8ac6bfc5757962acc862e426e81d34a583e201107cd WatchSource:0}: Error finding container de5569d0998ecc32b919f8ac6bfc5757962acc862e426e81d34a583e201107cd: Status 404 returned error can't find the container with id de5569d0998ecc32b919f8ac6bfc5757962acc862e426e81d34a583e201107cd
Mar 09 19:09:54 crc kubenswrapper[4821]: I0309 19:09:54.933853 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher0f81-account-delete-rq89g" event={"ID":"83188248-cb09-4336-8684-72af238bb6a7","Type":"ContainerStarted","Data":"6cac118488a7757461c279c634127f01dd2aee82aec7c309ca9b60cc10f4679f"}
Mar 09 19:09:54 crc kubenswrapper[4821]: I0309 19:09:54.934230 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher0f81-account-delete-rq89g" event={"ID":"83188248-cb09-4336-8684-72af238bb6a7","Type":"ContainerStarted","Data":"de5569d0998ecc32b919f8ac6bfc5757962acc862e426e81d34a583e201107cd"}
Mar 09 19:09:54 crc kubenswrapper[4821]: I0309 19:09:54.953927 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher0f81-account-delete-rq89g" podStartSLOduration=1.953910021 podStartE2EDuration="1.953910021s" podCreationTimestamp="2026-03-09 19:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 19:09:54.950622802 +0000 UTC m=+2732.111998668" 
watchObservedRunningTime="2026-03-09 19:09:54.953910021 +0000 UTC m=+2732.115285877"
Mar 09 19:09:55 crc kubenswrapper[4821]: I0309 19:09:55.079863 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="5bfa8bd2-ec0d-4052-aca5-ddf91815e698" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.1.15:9322/\": dial tcp 10.217.1.15:9322: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Mar 09 19:09:55 crc kubenswrapper[4821]: I0309 19:09:55.080288 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="5bfa8bd2-ec0d-4052-aca5-ddf91815e698" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.1.15:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 09 19:09:55 crc kubenswrapper[4821]: E0309 19:09:55.166834 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="45b1af2fa71ceaf2c7f7f6fa8ed653211464cb41116737228426a154ceb21ab5" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Mar 09 19:09:55 crc kubenswrapper[4821]: E0309 19:09:55.169632 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="45b1af2fa71ceaf2c7f7f6fa8ed653211464cb41116737228426a154ceb21ab5" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Mar 09 19:09:55 crc kubenswrapper[4821]: E0309 19:09:55.170863 4821 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="45b1af2fa71ceaf2c7f7f6fa8ed653211464cb41116737228426a154ceb21ab5" 
cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Mar 09 19:09:55 crc kubenswrapper[4821]: E0309 19:09:55.170891 4821 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="70c7e606-d70f-4bdd-8bc5-6456f4c0a253" containerName="watcher-applier"
Mar 09 19:09:55 crc kubenswrapper[4821]: I0309 19:09:55.565137 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0be9d48-e1cb-4f54-9efc-4adf16d4c997" path="/var/lib/kubelet/pods/c0be9d48-e1cb-4f54-9efc-4adf16d4c997/volumes"
Mar 09 19:09:55 crc kubenswrapper[4821]: I0309 19:09:55.565717 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9ffe20a-abd1-42a6-b924-21ad8b0d77c2" path="/var/lib/kubelet/pods/f9ffe20a-abd1-42a6-b924-21ad8b0d77c2/volumes"
Mar 09 19:09:55 crc kubenswrapper[4821]: I0309 19:09:55.945289 4821 generic.go:334] "Generic (PLEG): container finished" podID="83188248-cb09-4336-8684-72af238bb6a7" containerID="6cac118488a7757461c279c634127f01dd2aee82aec7c309ca9b60cc10f4679f" exitCode=0
Mar 09 19:09:55 crc kubenswrapper[4821]: I0309 19:09:55.945344 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher0f81-account-delete-rq89g" event={"ID":"83188248-cb09-4336-8684-72af238bb6a7","Type":"ContainerDied","Data":"6cac118488a7757461c279c634127f01dd2aee82aec7c309ca9b60cc10f4679f"}
Mar 09 19:09:56 crc kubenswrapper[4821]: I0309 19:09:56.523563 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Mar 09 19:09:56 crc kubenswrapper[4821]: I0309 19:09:56.524028 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="4c79e2d2-49cf-452a-8d6e-7cceaad4c479" containerName="ceilometer-central-agent" 
containerID="cri-o://fbd622702f2b37b12de23e2f340aec7c1afcc02e8cf0bdb1713cde5748cf62e2" gracePeriod=30
Mar 09 19:09:56 crc kubenswrapper[4821]: I0309 19:09:56.524130 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="4c79e2d2-49cf-452a-8d6e-7cceaad4c479" containerName="ceilometer-notification-agent" containerID="cri-o://757e5951a70fef0ff5c616a05348440c175f08e3f0d5a7bdb9d3a9cdbb641a9e" gracePeriod=30
Mar 09 19:09:56 crc kubenswrapper[4821]: I0309 19:09:56.524137 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="4c79e2d2-49cf-452a-8d6e-7cceaad4c479" containerName="proxy-httpd" containerID="cri-o://51209f9160e659f2f422d7286f76e40ed49b03e4883162fac1cf03be67727a7e" gracePeriod=30
Mar 09 19:09:56 crc kubenswrapper[4821]: I0309 19:09:56.524141 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="4c79e2d2-49cf-452a-8d6e-7cceaad4c479" containerName="sg-core" containerID="cri-o://6928b22c08489b7464fd945951834e8c6aaf3395f8930bda78ed9ce9d9b91224" gracePeriod=30
Mar 09 19:09:56 crc kubenswrapper[4821]: I0309 19:09:56.538818 4821 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="4c79e2d2-49cf-452a-8d6e-7cceaad4c479" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Mar 09 19:09:56 crc kubenswrapper[4821]: I0309 19:09:56.957371 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4c79e2d2-49cf-452a-8d6e-7cceaad4c479","Type":"ContainerDied","Data":"51209f9160e659f2f422d7286f76e40ed49b03e4883162fac1cf03be67727a7e"}
Mar 09 19:09:56 crc kubenswrapper[4821]: I0309 19:09:56.958201 4821 generic.go:334] "Generic (PLEG): container finished" podID="4c79e2d2-49cf-452a-8d6e-7cceaad4c479" 
containerID="51209f9160e659f2f422d7286f76e40ed49b03e4883162fac1cf03be67727a7e" exitCode=0
Mar 09 19:09:56 crc kubenswrapper[4821]: I0309 19:09:56.958486 4821 generic.go:334] "Generic (PLEG): container finished" podID="4c79e2d2-49cf-452a-8d6e-7cceaad4c479" containerID="6928b22c08489b7464fd945951834e8c6aaf3395f8930bda78ed9ce9d9b91224" exitCode=2
Mar 09 19:09:56 crc kubenswrapper[4821]: I0309 19:09:56.958583 4821 generic.go:334] "Generic (PLEG): container finished" podID="4c79e2d2-49cf-452a-8d6e-7cceaad4c479" containerID="757e5951a70fef0ff5c616a05348440c175f08e3f0d5a7bdb9d3a9cdbb641a9e" exitCode=0
Mar 09 19:09:56 crc kubenswrapper[4821]: I0309 19:09:56.958665 4821 generic.go:334] "Generic (PLEG): container finished" podID="4c79e2d2-49cf-452a-8d6e-7cceaad4c479" containerID="fbd622702f2b37b12de23e2f340aec7c1afcc02e8cf0bdb1713cde5748cf62e2" exitCode=0
Mar 09 19:09:56 crc kubenswrapper[4821]: I0309 19:09:56.958586 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4c79e2d2-49cf-452a-8d6e-7cceaad4c479","Type":"ContainerDied","Data":"6928b22c08489b7464fd945951834e8c6aaf3395f8930bda78ed9ce9d9b91224"}
Mar 09 19:09:56 crc kubenswrapper[4821]: I0309 19:09:56.959006 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4c79e2d2-49cf-452a-8d6e-7cceaad4c479","Type":"ContainerDied","Data":"757e5951a70fef0ff5c616a05348440c175f08e3f0d5a7bdb9d3a9cdbb641a9e"}
Mar 09 19:09:56 crc kubenswrapper[4821]: I0309 19:09:56.959096 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4c79e2d2-49cf-452a-8d6e-7cceaad4c479","Type":"ContainerDied","Data":"fbd622702f2b37b12de23e2f340aec7c1afcc02e8cf0bdb1713cde5748cf62e2"}
Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.266270 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.308550 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-combined-ca-bundle\") pod \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\" (UID: \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\") "
Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.308801 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjpq4\" (UniqueName: \"kubernetes.io/projected/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-kube-api-access-xjpq4\") pod \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\" (UID: \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\") "
Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.308911 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-config-data\") pod \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\" (UID: \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\") "
Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.309035 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-run-httpd\") pod \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\" (UID: \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\") "
Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.309123 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-log-httpd\") pod \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\" (UID: \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\") "
Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.309240 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-sg-core-conf-yaml\") pod \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\" (UID: \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\") "
Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.309322 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-ceilometer-tls-certs\") pod \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\" (UID: \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\") "
Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.309456 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4c79e2d2-49cf-452a-8d6e-7cceaad4c479" (UID: "4c79e2d2-49cf-452a-8d6e-7cceaad4c479"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.309593 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-scripts\") pod \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\" (UID: \"4c79e2d2-49cf-452a-8d6e-7cceaad4c479\") "
Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.309967 4821 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.310054 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4c79e2d2-49cf-452a-8d6e-7cceaad4c479" (UID: "4c79e2d2-49cf-452a-8d6e-7cceaad4c479"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.315862 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-kube-api-access-xjpq4" (OuterVolumeSpecName: "kube-api-access-xjpq4") pod "4c79e2d2-49cf-452a-8d6e-7cceaad4c479" (UID: "4c79e2d2-49cf-452a-8d6e-7cceaad4c479"). InnerVolumeSpecName "kube-api-access-xjpq4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.317710 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-scripts" (OuterVolumeSpecName: "scripts") pod "4c79e2d2-49cf-452a-8d6e-7cceaad4c479" (UID: "4c79e2d2-49cf-452a-8d6e-7cceaad4c479"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.341280 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4c79e2d2-49cf-452a-8d6e-7cceaad4c479" (UID: "4c79e2d2-49cf-452a-8d6e-7cceaad4c479"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.361310 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher0f81-account-delete-rq89g"
Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.367380 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "4c79e2d2-49cf-452a-8d6e-7cceaad4c479" (UID: "4c79e2d2-49cf-452a-8d6e-7cceaad4c479"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.383488 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c79e2d2-49cf-452a-8d6e-7cceaad4c479" (UID: "4c79e2d2-49cf-452a-8d6e-7cceaad4c479"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.410849 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83188248-cb09-4336-8684-72af238bb6a7-operator-scripts\") pod \"83188248-cb09-4336-8684-72af238bb6a7\" (UID: \"83188248-cb09-4336-8684-72af238bb6a7\") " Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.410956 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpls7\" (UniqueName: \"kubernetes.io/projected/83188248-cb09-4336-8684-72af238bb6a7-kube-api-access-cpls7\") pod \"83188248-cb09-4336-8684-72af238bb6a7\" (UID: \"83188248-cb09-4336-8684-72af238bb6a7\") " Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.411484 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-config-data" (OuterVolumeSpecName: "config-data") pod "4c79e2d2-49cf-452a-8d6e-7cceaad4c479" (UID: "4c79e2d2-49cf-452a-8d6e-7cceaad4c479"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.411587 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.411610 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjpq4\" (UniqueName: \"kubernetes.io/projected/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-kube-api-access-xjpq4\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.411623 4821 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.411638 4821 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.411648 4821 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.411659 4821 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.411613 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83188248-cb09-4336-8684-72af238bb6a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "83188248-cb09-4336-8684-72af238bb6a7" (UID: 
"83188248-cb09-4336-8684-72af238bb6a7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.414160 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83188248-cb09-4336-8684-72af238bb6a7-kube-api-access-cpls7" (OuterVolumeSpecName: "kube-api-access-cpls7") pod "83188248-cb09-4336-8684-72af238bb6a7" (UID: "83188248-cb09-4336-8684-72af238bb6a7"). InnerVolumeSpecName "kube-api-access-cpls7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.513459 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c79e2d2-49cf-452a-8d6e-7cceaad4c479-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.513494 4821 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83188248-cb09-4336-8684-72af238bb6a7-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.513511 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpls7\" (UniqueName: \"kubernetes.io/projected/83188248-cb09-4336-8684-72af238bb6a7-kube-api-access-cpls7\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.970693 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4c79e2d2-49cf-452a-8d6e-7cceaad4c479","Type":"ContainerDied","Data":"f87510f9e0c6c0f3509bbacd227e1a4734c4ca26c7e4d9a86eed43a963269198"} Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.971633 4821 scope.go:117] "RemoveContainer" containerID="51209f9160e659f2f422d7286f76e40ed49b03e4883162fac1cf03be67727a7e" Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.972045 4821 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.975730 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher0f81-account-delete-rq89g" event={"ID":"83188248-cb09-4336-8684-72af238bb6a7","Type":"ContainerDied","Data":"de5569d0998ecc32b919f8ac6bfc5757962acc862e426e81d34a583e201107cd"} Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.975843 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de5569d0998ecc32b919f8ac6bfc5757962acc862e426e81d34a583e201107cd" Mar 09 19:09:57 crc kubenswrapper[4821]: I0309 19:09:57.975804 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher0f81-account-delete-rq89g" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.006069 4821 scope.go:117] "RemoveContainer" containerID="6928b22c08489b7464fd945951834e8c6aaf3395f8930bda78ed9ce9d9b91224" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.007366 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.015209 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.038029 4821 scope.go:117] "RemoveContainer" containerID="757e5951a70fef0ff5c616a05348440c175f08e3f0d5a7bdb9d3a9cdbb641a9e" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.060170 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.060958 4821 scope.go:117] "RemoveContainer" containerID="fbd622702f2b37b12de23e2f340aec7c1afcc02e8cf0bdb1713cde5748cf62e2" Mar 09 19:09:58 crc kubenswrapper[4821]: E0309 19:09:58.061066 4821 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4c79e2d2-49cf-452a-8d6e-7cceaad4c479" containerName="sg-core" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.061198 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c79e2d2-49cf-452a-8d6e-7cceaad4c479" containerName="sg-core" Mar 09 19:09:58 crc kubenswrapper[4821]: E0309 19:09:58.061281 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83188248-cb09-4336-8684-72af238bb6a7" containerName="mariadb-account-delete" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.061367 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="83188248-cb09-4336-8684-72af238bb6a7" containerName="mariadb-account-delete" Mar 09 19:09:58 crc kubenswrapper[4821]: E0309 19:09:58.061728 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c79e2d2-49cf-452a-8d6e-7cceaad4c479" containerName="ceilometer-notification-agent" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.061804 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c79e2d2-49cf-452a-8d6e-7cceaad4c479" containerName="ceilometer-notification-agent" Mar 09 19:09:58 crc kubenswrapper[4821]: E0309 19:09:58.061874 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c79e2d2-49cf-452a-8d6e-7cceaad4c479" containerName="proxy-httpd" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.061939 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c79e2d2-49cf-452a-8d6e-7cceaad4c479" containerName="proxy-httpd" Mar 09 19:09:58 crc kubenswrapper[4821]: E0309 19:09:58.061998 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c79e2d2-49cf-452a-8d6e-7cceaad4c479" containerName="ceilometer-central-agent" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.062068 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c79e2d2-49cf-452a-8d6e-7cceaad4c479" containerName="ceilometer-central-agent" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.062358 4821 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="4c79e2d2-49cf-452a-8d6e-7cceaad4c479" containerName="proxy-httpd" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.062453 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c79e2d2-49cf-452a-8d6e-7cceaad4c479" containerName="sg-core" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.062516 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c79e2d2-49cf-452a-8d6e-7cceaad4c479" containerName="ceilometer-central-agent" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.062578 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c79e2d2-49cf-452a-8d6e-7cceaad4c479" containerName="ceilometer-notification-agent" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.062635 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="83188248-cb09-4336-8684-72af238bb6a7" containerName="mariadb-account-delete" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.065373 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.067874 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.068025 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.068130 4821 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.106011 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.125775 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6926c17a-76e1-49b8-a9ff-079a205d3c6b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6926c17a-76e1-49b8-a9ff-079a205d3c6b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.125825 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6926c17a-76e1-49b8-a9ff-079a205d3c6b-scripts\") pod \"ceilometer-0\" (UID: \"6926c17a-76e1-49b8-a9ff-079a205d3c6b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.125848 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6926c17a-76e1-49b8-a9ff-079a205d3c6b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6926c17a-76e1-49b8-a9ff-079a205d3c6b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.125872 4821 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6926c17a-76e1-49b8-a9ff-079a205d3c6b-config-data\") pod \"ceilometer-0\" (UID: \"6926c17a-76e1-49b8-a9ff-079a205d3c6b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.125890 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6926c17a-76e1-49b8-a9ff-079a205d3c6b-log-httpd\") pod \"ceilometer-0\" (UID: \"6926c17a-76e1-49b8-a9ff-079a205d3c6b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.125916 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6926c17a-76e1-49b8-a9ff-079a205d3c6b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6926c17a-76e1-49b8-a9ff-079a205d3c6b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.125971 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6926c17a-76e1-49b8-a9ff-079a205d3c6b-run-httpd\") pod \"ceilometer-0\" (UID: \"6926c17a-76e1-49b8-a9ff-079a205d3c6b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.126048 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxqqx\" (UniqueName: \"kubernetes.io/projected/6926c17a-76e1-49b8-a9ff-079a205d3c6b-kube-api-access-xxqqx\") pod \"ceilometer-0\" (UID: \"6926c17a-76e1-49b8-a9ff-079a205d3c6b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.227557 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-xxqqx\" (UniqueName: \"kubernetes.io/projected/6926c17a-76e1-49b8-a9ff-079a205d3c6b-kube-api-access-xxqqx\") pod \"ceilometer-0\" (UID: \"6926c17a-76e1-49b8-a9ff-079a205d3c6b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.227701 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6926c17a-76e1-49b8-a9ff-079a205d3c6b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6926c17a-76e1-49b8-a9ff-079a205d3c6b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.227748 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6926c17a-76e1-49b8-a9ff-079a205d3c6b-scripts\") pod \"ceilometer-0\" (UID: \"6926c17a-76e1-49b8-a9ff-079a205d3c6b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.227784 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6926c17a-76e1-49b8-a9ff-079a205d3c6b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6926c17a-76e1-49b8-a9ff-079a205d3c6b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.227831 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6926c17a-76e1-49b8-a9ff-079a205d3c6b-config-data\") pod \"ceilometer-0\" (UID: \"6926c17a-76e1-49b8-a9ff-079a205d3c6b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.227865 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6926c17a-76e1-49b8-a9ff-079a205d3c6b-log-httpd\") pod \"ceilometer-0\" (UID: 
\"6926c17a-76e1-49b8-a9ff-079a205d3c6b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.227912 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6926c17a-76e1-49b8-a9ff-079a205d3c6b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6926c17a-76e1-49b8-a9ff-079a205d3c6b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.227954 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6926c17a-76e1-49b8-a9ff-079a205d3c6b-run-httpd\") pod \"ceilometer-0\" (UID: \"6926c17a-76e1-49b8-a9ff-079a205d3c6b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.228797 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6926c17a-76e1-49b8-a9ff-079a205d3c6b-run-httpd\") pod \"ceilometer-0\" (UID: \"6926c17a-76e1-49b8-a9ff-079a205d3c6b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.228829 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6926c17a-76e1-49b8-a9ff-079a205d3c6b-log-httpd\") pod \"ceilometer-0\" (UID: \"6926c17a-76e1-49b8-a9ff-079a205d3c6b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.233665 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6926c17a-76e1-49b8-a9ff-079a205d3c6b-scripts\") pod \"ceilometer-0\" (UID: \"6926c17a-76e1-49b8-a9ff-079a205d3c6b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.234235 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6926c17a-76e1-49b8-a9ff-079a205d3c6b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6926c17a-76e1-49b8-a9ff-079a205d3c6b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.235525 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6926c17a-76e1-49b8-a9ff-079a205d3c6b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6926c17a-76e1-49b8-a9ff-079a205d3c6b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.236416 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6926c17a-76e1-49b8-a9ff-079a205d3c6b-config-data\") pod \"ceilometer-0\" (UID: \"6926c17a-76e1-49b8-a9ff-079a205d3c6b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.244035 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxqqx\" (UniqueName: \"kubernetes.io/projected/6926c17a-76e1-49b8-a9ff-079a205d3c6b-kube-api-access-xxqqx\") pod \"ceilometer-0\" (UID: \"6926c17a-76e1-49b8-a9ff-079a205d3c6b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.244522 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6926c17a-76e1-49b8-a9ff-079a205d3c6b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6926c17a-76e1-49b8-a9ff-079a205d3c6b\") " pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.383922 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.816962 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.866043 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-d6s7z"] Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.873906 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-d6s7z"] Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.888429 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher0f81-account-delete-rq89g"] Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.894378 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-0f81-account-create-update-rtxq5"] Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.900173 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher0f81-account-delete-rq89g"] Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.909666 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-0f81-account-create-update-rtxq5"] Mar 09 19:09:58 crc kubenswrapper[4821]: I0309 19:09:58.991957 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6926c17a-76e1-49b8-a9ff-079a205d3c6b","Type":"ContainerStarted","Data":"98436c1e3aa9b5a4528b2479970cd3d423aab22cb65a24cf027c9a9a7bbc80f8"} Mar 09 19:09:59 crc kubenswrapper[4821]: I0309 19:09:59.571802 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4716ce55-8666-4ad9-866b-e2f3f88cd5e7" path="/var/lib/kubelet/pods/4716ce55-8666-4ad9-866b-e2f3f88cd5e7/volumes" Mar 09 19:09:59 crc kubenswrapper[4821]: I0309 19:09:59.572876 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4c79e2d2-49cf-452a-8d6e-7cceaad4c479" path="/var/lib/kubelet/pods/4c79e2d2-49cf-452a-8d6e-7cceaad4c479/volumes" Mar 09 19:09:59 crc kubenswrapper[4821]: I0309 19:09:59.574934 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55a3b77b-71a0-4f39-8356-f7caa43d72a4" path="/var/lib/kubelet/pods/55a3b77b-71a0-4f39-8356-f7caa43d72a4/volumes" Mar 09 19:09:59 crc kubenswrapper[4821]: I0309 19:09:59.575859 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83188248-cb09-4336-8684-72af238bb6a7" path="/var/lib/kubelet/pods/83188248-cb09-4336-8684-72af238bb6a7/volumes" Mar 09 19:09:59 crc kubenswrapper[4821]: I0309 19:09:59.605053 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:09:59 crc kubenswrapper[4821]: I0309 19:09:59.648924 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb5ms\" (UniqueName: \"kubernetes.io/projected/70c7e606-d70f-4bdd-8bc5-6456f4c0a253-kube-api-access-zb5ms\") pod \"70c7e606-d70f-4bdd-8bc5-6456f4c0a253\" (UID: \"70c7e606-d70f-4bdd-8bc5-6456f4c0a253\") " Mar 09 19:09:59 crc kubenswrapper[4821]: I0309 19:09:59.648996 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70c7e606-d70f-4bdd-8bc5-6456f4c0a253-combined-ca-bundle\") pod \"70c7e606-d70f-4bdd-8bc5-6456f4c0a253\" (UID: \"70c7e606-d70f-4bdd-8bc5-6456f4c0a253\") " Mar 09 19:09:59 crc kubenswrapper[4821]: I0309 19:09:59.649062 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70c7e606-d70f-4bdd-8bc5-6456f4c0a253-logs\") pod \"70c7e606-d70f-4bdd-8bc5-6456f4c0a253\" (UID: \"70c7e606-d70f-4bdd-8bc5-6456f4c0a253\") " Mar 09 19:09:59 crc kubenswrapper[4821]: I0309 19:09:59.649147 4821 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/70c7e606-d70f-4bdd-8bc5-6456f4c0a253-cert-memcached-mtls\") pod \"70c7e606-d70f-4bdd-8bc5-6456f4c0a253\" (UID: \"70c7e606-d70f-4bdd-8bc5-6456f4c0a253\") " Mar 09 19:09:59 crc kubenswrapper[4821]: I0309 19:09:59.649222 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70c7e606-d70f-4bdd-8bc5-6456f4c0a253-config-data\") pod \"70c7e606-d70f-4bdd-8bc5-6456f4c0a253\" (UID: \"70c7e606-d70f-4bdd-8bc5-6456f4c0a253\") " Mar 09 19:09:59 crc kubenswrapper[4821]: I0309 19:09:59.649543 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70c7e606-d70f-4bdd-8bc5-6456f4c0a253-logs" (OuterVolumeSpecName: "logs") pod "70c7e606-d70f-4bdd-8bc5-6456f4c0a253" (UID: "70c7e606-d70f-4bdd-8bc5-6456f4c0a253"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:09:59 crc kubenswrapper[4821]: I0309 19:09:59.649741 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70c7e606-d70f-4bdd-8bc5-6456f4c0a253-logs\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:59 crc kubenswrapper[4821]: I0309 19:09:59.670524 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c7e606-d70f-4bdd-8bc5-6456f4c0a253-kube-api-access-zb5ms" (OuterVolumeSpecName: "kube-api-access-zb5ms") pod "70c7e606-d70f-4bdd-8bc5-6456f4c0a253" (UID: "70c7e606-d70f-4bdd-8bc5-6456f4c0a253"). InnerVolumeSpecName "kube-api-access-zb5ms". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:09:59 crc kubenswrapper[4821]: I0309 19:09:59.691827 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70c7e606-d70f-4bdd-8bc5-6456f4c0a253-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70c7e606-d70f-4bdd-8bc5-6456f4c0a253" (UID: "70c7e606-d70f-4bdd-8bc5-6456f4c0a253"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:09:59 crc kubenswrapper[4821]: I0309 19:09:59.740491 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70c7e606-d70f-4bdd-8bc5-6456f4c0a253-config-data" (OuterVolumeSpecName: "config-data") pod "70c7e606-d70f-4bdd-8bc5-6456f4c0a253" (UID: "70c7e606-d70f-4bdd-8bc5-6456f4c0a253"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:09:59 crc kubenswrapper[4821]: I0309 19:09:59.752397 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70c7e606-d70f-4bdd-8bc5-6456f4c0a253-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:59 crc kubenswrapper[4821]: I0309 19:09:59.752429 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zb5ms\" (UniqueName: \"kubernetes.io/projected/70c7e606-d70f-4bdd-8bc5-6456f4c0a253-kube-api-access-zb5ms\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:59 crc kubenswrapper[4821]: I0309 19:09:59.752440 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70c7e606-d70f-4bdd-8bc5-6456f4c0a253-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:09:59 crc kubenswrapper[4821]: I0309 19:09:59.806443 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70c7e606-d70f-4bdd-8bc5-6456f4c0a253-cert-memcached-mtls" (OuterVolumeSpecName: 
"cert-memcached-mtls") pod "70c7e606-d70f-4bdd-8bc5-6456f4c0a253" (UID: "70c7e606-d70f-4bdd-8bc5-6456f4c0a253"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:09:59 crc kubenswrapper[4821]: I0309 19:09:59.854309 4821 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/70c7e606-d70f-4bdd-8bc5-6456f4c0a253-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.002298 4821 generic.go:334] "Generic (PLEG): container finished" podID="70c7e606-d70f-4bdd-8bc5-6456f4c0a253" containerID="45b1af2fa71ceaf2c7f7f6fa8ed653211464cb41116737228426a154ceb21ab5" exitCode=0 Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.002378 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"70c7e606-d70f-4bdd-8bc5-6456f4c0a253","Type":"ContainerDied","Data":"45b1af2fa71ceaf2c7f7f6fa8ed653211464cb41116737228426a154ceb21ab5"} Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.002406 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"70c7e606-d70f-4bdd-8bc5-6456f4c0a253","Type":"ContainerDied","Data":"a90750c84ff520e38078446fd13b3d1f557041f7636ca8eb82eaa2c4fdc206d0"} Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.002423 4821 scope.go:117] "RemoveContainer" containerID="45b1af2fa71ceaf2c7f7f6fa8ed653211464cb41116737228426a154ceb21ab5" Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.002508 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.005447 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6926c17a-76e1-49b8-a9ff-079a205d3c6b","Type":"ContainerStarted","Data":"a8712438b428de55aac5efdc3e526e3b2db1704032438225bd89ff191a06b061"} Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.025512 4821 scope.go:117] "RemoveContainer" containerID="45b1af2fa71ceaf2c7f7f6fa8ed653211464cb41116737228426a154ceb21ab5" Mar 09 19:10:00 crc kubenswrapper[4821]: E0309 19:10:00.026175 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45b1af2fa71ceaf2c7f7f6fa8ed653211464cb41116737228426a154ceb21ab5\": container with ID starting with 45b1af2fa71ceaf2c7f7f6fa8ed653211464cb41116737228426a154ceb21ab5 not found: ID does not exist" containerID="45b1af2fa71ceaf2c7f7f6fa8ed653211464cb41116737228426a154ceb21ab5" Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.026214 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45b1af2fa71ceaf2c7f7f6fa8ed653211464cb41116737228426a154ceb21ab5"} err="failed to get container status \"45b1af2fa71ceaf2c7f7f6fa8ed653211464cb41116737228426a154ceb21ab5\": rpc error: code = NotFound desc = could not find container \"45b1af2fa71ceaf2c7f7f6fa8ed653211464cb41116737228426a154ceb21ab5\": container with ID starting with 45b1af2fa71ceaf2c7f7f6fa8ed653211464cb41116737228426a154ceb21ab5 not found: ID does not exist" Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.038263 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.047918 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Mar 09 19:10:00 crc 
kubenswrapper[4821]: I0309 19:10:00.151541 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29551390-ngv5v"] Mar 09 19:10:00 crc kubenswrapper[4821]: E0309 19:10:00.151957 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c7e606-d70f-4bdd-8bc5-6456f4c0a253" containerName="watcher-applier" Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.151973 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c7e606-d70f-4bdd-8bc5-6456f4c0a253" containerName="watcher-applier" Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.152137 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c7e606-d70f-4bdd-8bc5-6456f4c0a253" containerName="watcher-applier" Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.152760 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551390-ngv5v" Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.158576 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.158717 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.158870 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-mxq7c" Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.161538 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551390-ngv5v"] Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.262750 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwkjm\" (UniqueName: \"kubernetes.io/projected/ae07b7e5-6cfa-40bb-9c95-be56354dd2fd-kube-api-access-cwkjm\") pod \"auto-csr-approver-29551390-ngv5v\" (UID: 
\"ae07b7e5-6cfa-40bb-9c95-be56354dd2fd\") " pod="openshift-infra/auto-csr-approver-29551390-ngv5v" Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.364039 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwkjm\" (UniqueName: \"kubernetes.io/projected/ae07b7e5-6cfa-40bb-9c95-be56354dd2fd-kube-api-access-cwkjm\") pod \"auto-csr-approver-29551390-ngv5v\" (UID: \"ae07b7e5-6cfa-40bb-9c95-be56354dd2fd\") " pod="openshift-infra/auto-csr-approver-29551390-ngv5v" Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.398956 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwkjm\" (UniqueName: \"kubernetes.io/projected/ae07b7e5-6cfa-40bb-9c95-be56354dd2fd-kube-api-access-cwkjm\") pod \"auto-csr-approver-29551390-ngv5v\" (UID: \"ae07b7e5-6cfa-40bb-9c95-be56354dd2fd\") " pod="openshift-infra/auto-csr-approver-29551390-ngv5v" Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.469225 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551390-ngv5v" Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.708360 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.793921 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f465a4a-1555-4736-a32b-08bd0456ac89-combined-ca-bundle\") pod \"7f465a4a-1555-4736-a32b-08bd0456ac89\" (UID: \"7f465a4a-1555-4736-a32b-08bd0456ac89\") " Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.794003 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7f465a4a-1555-4736-a32b-08bd0456ac89-custom-prometheus-ca\") pod \"7f465a4a-1555-4736-a32b-08bd0456ac89\" (UID: \"7f465a4a-1555-4736-a32b-08bd0456ac89\") " Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.794036 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm8q6\" (UniqueName: \"kubernetes.io/projected/7f465a4a-1555-4736-a32b-08bd0456ac89-kube-api-access-lm8q6\") pod \"7f465a4a-1555-4736-a32b-08bd0456ac89\" (UID: \"7f465a4a-1555-4736-a32b-08bd0456ac89\") " Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.794065 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f465a4a-1555-4736-a32b-08bd0456ac89-config-data\") pod \"7f465a4a-1555-4736-a32b-08bd0456ac89\" (UID: \"7f465a4a-1555-4736-a32b-08bd0456ac89\") " Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.794153 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f465a4a-1555-4736-a32b-08bd0456ac89-logs\") pod \"7f465a4a-1555-4736-a32b-08bd0456ac89\" (UID: \"7f465a4a-1555-4736-a32b-08bd0456ac89\") " Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.794188 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7f465a4a-1555-4736-a32b-08bd0456ac89-cert-memcached-mtls\") pod \"7f465a4a-1555-4736-a32b-08bd0456ac89\" (UID: \"7f465a4a-1555-4736-a32b-08bd0456ac89\") " Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.795783 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f465a4a-1555-4736-a32b-08bd0456ac89-logs" (OuterVolumeSpecName: "logs") pod "7f465a4a-1555-4736-a32b-08bd0456ac89" (UID: "7f465a4a-1555-4736-a32b-08bd0456ac89"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.806071 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f465a4a-1555-4736-a32b-08bd0456ac89-kube-api-access-lm8q6" (OuterVolumeSpecName: "kube-api-access-lm8q6") pod "7f465a4a-1555-4736-a32b-08bd0456ac89" (UID: "7f465a4a-1555-4736-a32b-08bd0456ac89"). InnerVolumeSpecName "kube-api-access-lm8q6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.825931 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f465a4a-1555-4736-a32b-08bd0456ac89-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "7f465a4a-1555-4736-a32b-08bd0456ac89" (UID: "7f465a4a-1555-4736-a32b-08bd0456ac89"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.862231 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f465a4a-1555-4736-a32b-08bd0456ac89-config-data" (OuterVolumeSpecName: "config-data") pod "7f465a4a-1555-4736-a32b-08bd0456ac89" (UID: "7f465a4a-1555-4736-a32b-08bd0456ac89"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.874995 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f465a4a-1555-4736-a32b-08bd0456ac89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f465a4a-1555-4736-a32b-08bd0456ac89" (UID: "7f465a4a-1555-4736-a32b-08bd0456ac89"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.897131 4821 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f465a4a-1555-4736-a32b-08bd0456ac89-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.897167 4821 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7f465a4a-1555-4736-a32b-08bd0456ac89-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.897180 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm8q6\" (UniqueName: \"kubernetes.io/projected/7f465a4a-1555-4736-a32b-08bd0456ac89-kube-api-access-lm8q6\") on node \"crc\" DevicePath \"\"" Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.897194 4821 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f465a4a-1555-4736-a32b-08bd0456ac89-config-data\") on node \"crc\" DevicePath \"\"" Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.897207 4821 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f465a4a-1555-4736-a32b-08bd0456ac89-logs\") on node \"crc\" DevicePath \"\"" Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.951457 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/7f465a4a-1555-4736-a32b-08bd0456ac89-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "7f465a4a-1555-4736-a32b-08bd0456ac89" (UID: "7f465a4a-1555-4736-a32b-08bd0456ac89"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 19:10:00 crc kubenswrapper[4821]: I0309 19:10:00.999164 4821 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7f465a4a-1555-4736-a32b-08bd0456ac89-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Mar 09 19:10:01 crc kubenswrapper[4821]: I0309 19:10:01.016554 4821 generic.go:334] "Generic (PLEG): container finished" podID="7f465a4a-1555-4736-a32b-08bd0456ac89" containerID="ff74a419b1b83d97f1109db3f4c2bfc1b92c861285733e18477bf594ebfc899d" exitCode=0 Mar 09 19:10:01 crc kubenswrapper[4821]: I0309 19:10:01.016612 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Mar 09 19:10:01 crc kubenswrapper[4821]: I0309 19:10:01.016614 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"7f465a4a-1555-4736-a32b-08bd0456ac89","Type":"ContainerDied","Data":"ff74a419b1b83d97f1109db3f4c2bfc1b92c861285733e18477bf594ebfc899d"} Mar 09 19:10:01 crc kubenswrapper[4821]: I0309 19:10:01.016758 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"7f465a4a-1555-4736-a32b-08bd0456ac89","Type":"ContainerDied","Data":"4a2427c0c60a701c29e5c0927e0bb2013e05a8a12f185748898d2c75d4c6b025"} Mar 09 19:10:01 crc kubenswrapper[4821]: I0309 19:10:01.016801 4821 scope.go:117] "RemoveContainer" containerID="ff74a419b1b83d97f1109db3f4c2bfc1b92c861285733e18477bf594ebfc899d" Mar 09 19:10:01 crc kubenswrapper[4821]: I0309 19:10:01.021588 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6926c17a-76e1-49b8-a9ff-079a205d3c6b","Type":"ContainerStarted","Data":"cdbea0147b1dfa7ca43c79fc6b662a19654c1c02470a1fb1f6023cf61d78c10f"} Mar 09 19:10:01 crc kubenswrapper[4821]: I0309 19:10:01.026247 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551390-ngv5v"] Mar 09 19:10:01 crc kubenswrapper[4821]: I0309 19:10:01.037267 4821 scope.go:117] "RemoveContainer" containerID="ff74a419b1b83d97f1109db3f4c2bfc1b92c861285733e18477bf594ebfc899d" Mar 09 19:10:01 crc kubenswrapper[4821]: E0309 19:10:01.037809 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff74a419b1b83d97f1109db3f4c2bfc1b92c861285733e18477bf594ebfc899d\": container with ID starting with ff74a419b1b83d97f1109db3f4c2bfc1b92c861285733e18477bf594ebfc899d not found: ID does not exist" containerID="ff74a419b1b83d97f1109db3f4c2bfc1b92c861285733e18477bf594ebfc899d" Mar 09 19:10:01 crc kubenswrapper[4821]: I0309 19:10:01.037842 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff74a419b1b83d97f1109db3f4c2bfc1b92c861285733e18477bf594ebfc899d"} err="failed to get container status \"ff74a419b1b83d97f1109db3f4c2bfc1b92c861285733e18477bf594ebfc899d\": rpc error: code = NotFound desc = could not find container \"ff74a419b1b83d97f1109db3f4c2bfc1b92c861285733e18477bf594ebfc899d\": container with ID starting with ff74a419b1b83d97f1109db3f4c2bfc1b92c861285733e18477bf594ebfc899d not found: ID does not exist" Mar 09 19:10:01 crc kubenswrapper[4821]: W0309 19:10:01.043311 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae07b7e5_6cfa_40bb_9c95_be56354dd2fd.slice/crio-a3836bb10ef19cce86ba3a50b22fc096d7c83a04971b4326b5abfb16b2051aca WatchSource:0}: Error finding container 
a3836bb10ef19cce86ba3a50b22fc096d7c83a04971b4326b5abfb16b2051aca: Status 404 returned error can't find the container with id a3836bb10ef19cce86ba3a50b22fc096d7c83a04971b4326b5abfb16b2051aca Mar 09 19:10:01 crc kubenswrapper[4821]: I0309 19:10:01.044009 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:10:01 crc kubenswrapper[4821]: I0309 19:10:01.056797 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Mar 09 19:10:01 crc kubenswrapper[4821]: I0309 19:10:01.552541 4821 scope.go:117] "RemoveContainer" containerID="2e97656a0c5c8ccdf249fb34ee3c3f4cd8501de8fced67c6294f9be90f6444d8" Mar 09 19:10:01 crc kubenswrapper[4821]: E0309 19:10:01.552976 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 19:10:01 crc kubenswrapper[4821]: I0309 19:10:01.564259 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70c7e606-d70f-4bdd-8bc5-6456f4c0a253" path="/var/lib/kubelet/pods/70c7e606-d70f-4bdd-8bc5-6456f4c0a253/volumes" Mar 09 19:10:01 crc kubenswrapper[4821]: I0309 19:10:01.565781 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f465a4a-1555-4736-a32b-08bd0456ac89" path="/var/lib/kubelet/pods/7f465a4a-1555-4736-a32b-08bd0456ac89/volumes" Mar 09 19:10:02 crc kubenswrapper[4821]: I0309 19:10:02.033205 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551390-ngv5v" 
event={"ID":"ae07b7e5-6cfa-40bb-9c95-be56354dd2fd","Type":"ContainerStarted","Data":"a3836bb10ef19cce86ba3a50b22fc096d7c83a04971b4326b5abfb16b2051aca"} Mar 09 19:10:02 crc kubenswrapper[4821]: I0309 19:10:02.036816 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6926c17a-76e1-49b8-a9ff-079a205d3c6b","Type":"ContainerStarted","Data":"8f522e487895a075a9df36e91f3721d5faa4c3ddad6b81b6b67af95efe901bfc"} Mar 09 19:10:04 crc kubenswrapper[4821]: I0309 19:10:04.095177 4821 generic.go:334] "Generic (PLEG): container finished" podID="ae07b7e5-6cfa-40bb-9c95-be56354dd2fd" containerID="c4eb77c8872c20b01869131fbaf4ef1dbc32e65ca53adb0593d22ed169c7f014" exitCode=0 Mar 09 19:10:04 crc kubenswrapper[4821]: I0309 19:10:04.095442 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551390-ngv5v" event={"ID":"ae07b7e5-6cfa-40bb-9c95-be56354dd2fd","Type":"ContainerDied","Data":"c4eb77c8872c20b01869131fbaf4ef1dbc32e65ca53adb0593d22ed169c7f014"} Mar 09 19:10:05 crc kubenswrapper[4821]: I0309 19:10:05.105697 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6926c17a-76e1-49b8-a9ff-079a205d3c6b","Type":"ContainerStarted","Data":"4a6b46e256e03f826c3843d54e74aaf4860104a9fa3b37fb66296750ba05ba20"} Mar 09 19:10:05 crc kubenswrapper[4821]: I0309 19:10:05.106191 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:10:05 crc kubenswrapper[4821]: I0309 19:10:05.130179 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.7689923090000002 podStartE2EDuration="7.130161397s" podCreationTimestamp="2026-03-09 19:09:58 +0000 UTC" firstStartedPulling="2026-03-09 19:09:58.825063865 +0000 UTC m=+2735.986439721" lastFinishedPulling="2026-03-09 19:10:04.186232953 +0000 UTC m=+2741.347608809" 
observedRunningTime="2026-03-09 19:10:05.128801611 +0000 UTC m=+2742.290177487" watchObservedRunningTime="2026-03-09 19:10:05.130161397 +0000 UTC m=+2742.291537253" Mar 09 19:10:05 crc kubenswrapper[4821]: I0309 19:10:05.450216 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551390-ngv5v" Mar 09 19:10:05 crc kubenswrapper[4821]: I0309 19:10:05.616090 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwkjm\" (UniqueName: \"kubernetes.io/projected/ae07b7e5-6cfa-40bb-9c95-be56354dd2fd-kube-api-access-cwkjm\") pod \"ae07b7e5-6cfa-40bb-9c95-be56354dd2fd\" (UID: \"ae07b7e5-6cfa-40bb-9c95-be56354dd2fd\") " Mar 09 19:10:05 crc kubenswrapper[4821]: I0309 19:10:05.623504 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae07b7e5-6cfa-40bb-9c95-be56354dd2fd-kube-api-access-cwkjm" (OuterVolumeSpecName: "kube-api-access-cwkjm") pod "ae07b7e5-6cfa-40bb-9c95-be56354dd2fd" (UID: "ae07b7e5-6cfa-40bb-9c95-be56354dd2fd"). InnerVolumeSpecName "kube-api-access-cwkjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:10:05 crc kubenswrapper[4821]: I0309 19:10:05.717993 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwkjm\" (UniqueName: \"kubernetes.io/projected/ae07b7e5-6cfa-40bb-9c95-be56354dd2fd-kube-api-access-cwkjm\") on node \"crc\" DevicePath \"\"" Mar 09 19:10:06 crc kubenswrapper[4821]: I0309 19:10:06.115567 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29551390-ngv5v" Mar 09 19:10:06 crc kubenswrapper[4821]: I0309 19:10:06.115569 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551390-ngv5v" event={"ID":"ae07b7e5-6cfa-40bb-9c95-be56354dd2fd","Type":"ContainerDied","Data":"a3836bb10ef19cce86ba3a50b22fc096d7c83a04971b4326b5abfb16b2051aca"} Mar 09 19:10:06 crc kubenswrapper[4821]: I0309 19:10:06.115713 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3836bb10ef19cce86ba3a50b22fc096d7c83a04971b4326b5abfb16b2051aca" Mar 09 19:10:06 crc kubenswrapper[4821]: I0309 19:10:06.522356 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29551384-5cqh2"] Mar 09 19:10:06 crc kubenswrapper[4821]: I0309 19:10:06.528633 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29551384-5cqh2"] Mar 09 19:10:07 crc kubenswrapper[4821]: I0309 19:10:07.561283 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af908031-ae94-4542-a42f-45e4c17e69ae" path="/var/lib/kubelet/pods/af908031-ae94-4542-a42f-45e4c17e69ae/volumes" Mar 09 19:10:15 crc kubenswrapper[4821]: I0309 19:10:15.552171 4821 scope.go:117] "RemoveContainer" containerID="2e97656a0c5c8ccdf249fb34ee3c3f4cd8501de8fced67c6294f9be90f6444d8" Mar 09 19:10:15 crc kubenswrapper[4821]: E0309 19:10:15.552872 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 19:10:26 crc kubenswrapper[4821]: I0309 19:10:26.091030 4821 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-must-gather-b86cm/must-gather-8fp4b"] Mar 09 19:10:26 crc kubenswrapper[4821]: E0309 19:10:26.092810 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f465a4a-1555-4736-a32b-08bd0456ac89" containerName="watcher-decision-engine" Mar 09 19:10:26 crc kubenswrapper[4821]: I0309 19:10:26.092836 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f465a4a-1555-4736-a32b-08bd0456ac89" containerName="watcher-decision-engine" Mar 09 19:10:26 crc kubenswrapper[4821]: E0309 19:10:26.092882 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae07b7e5-6cfa-40bb-9c95-be56354dd2fd" containerName="oc" Mar 09 19:10:26 crc kubenswrapper[4821]: I0309 19:10:26.092893 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae07b7e5-6cfa-40bb-9c95-be56354dd2fd" containerName="oc" Mar 09 19:10:26 crc kubenswrapper[4821]: I0309 19:10:26.093117 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f465a4a-1555-4736-a32b-08bd0456ac89" containerName="watcher-decision-engine" Mar 09 19:10:26 crc kubenswrapper[4821]: I0309 19:10:26.093153 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae07b7e5-6cfa-40bb-9c95-be56354dd2fd" containerName="oc" Mar 09 19:10:26 crc kubenswrapper[4821]: I0309 19:10:26.094504 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-b86cm/must-gather-8fp4b" Mar 09 19:10:26 crc kubenswrapper[4821]: I0309 19:10:26.105741 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-b86cm"/"kube-root-ca.crt" Mar 09 19:10:26 crc kubenswrapper[4821]: I0309 19:10:26.106041 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-b86cm"/"openshift-service-ca.crt" Mar 09 19:10:26 crc kubenswrapper[4821]: I0309 19:10:26.142374 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-b86cm/must-gather-8fp4b"] Mar 09 19:10:26 crc kubenswrapper[4821]: I0309 19:10:26.159557 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/480bea75-1d63-4af0-b2e2-b7bf9d804872-must-gather-output\") pod \"must-gather-8fp4b\" (UID: \"480bea75-1d63-4af0-b2e2-b7bf9d804872\") " pod="openshift-must-gather-b86cm/must-gather-8fp4b" Mar 09 19:10:26 crc kubenswrapper[4821]: I0309 19:10:26.159938 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6pvl\" (UniqueName: \"kubernetes.io/projected/480bea75-1d63-4af0-b2e2-b7bf9d804872-kube-api-access-s6pvl\") pod \"must-gather-8fp4b\" (UID: \"480bea75-1d63-4af0-b2e2-b7bf9d804872\") " pod="openshift-must-gather-b86cm/must-gather-8fp4b" Mar 09 19:10:26 crc kubenswrapper[4821]: I0309 19:10:26.261370 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/480bea75-1d63-4af0-b2e2-b7bf9d804872-must-gather-output\") pod \"must-gather-8fp4b\" (UID: \"480bea75-1d63-4af0-b2e2-b7bf9d804872\") " pod="openshift-must-gather-b86cm/must-gather-8fp4b" Mar 09 19:10:26 crc kubenswrapper[4821]: I0309 19:10:26.261450 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s6pvl\" (UniqueName: \"kubernetes.io/projected/480bea75-1d63-4af0-b2e2-b7bf9d804872-kube-api-access-s6pvl\") pod \"must-gather-8fp4b\" (UID: \"480bea75-1d63-4af0-b2e2-b7bf9d804872\") " pod="openshift-must-gather-b86cm/must-gather-8fp4b" Mar 09 19:10:26 crc kubenswrapper[4821]: I0309 19:10:26.261852 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/480bea75-1d63-4af0-b2e2-b7bf9d804872-must-gather-output\") pod \"must-gather-8fp4b\" (UID: \"480bea75-1d63-4af0-b2e2-b7bf9d804872\") " pod="openshift-must-gather-b86cm/must-gather-8fp4b" Mar 09 19:10:26 crc kubenswrapper[4821]: I0309 19:10:26.299226 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6pvl\" (UniqueName: \"kubernetes.io/projected/480bea75-1d63-4af0-b2e2-b7bf9d804872-kube-api-access-s6pvl\") pod \"must-gather-8fp4b\" (UID: \"480bea75-1d63-4af0-b2e2-b7bf9d804872\") " pod="openshift-must-gather-b86cm/must-gather-8fp4b" Mar 09 19:10:26 crc kubenswrapper[4821]: I0309 19:10:26.430630 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-b86cm/must-gather-8fp4b" Mar 09 19:10:26 crc kubenswrapper[4821]: I0309 19:10:26.930139 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-b86cm/must-gather-8fp4b"] Mar 09 19:10:27 crc kubenswrapper[4821]: I0309 19:10:27.345927 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b86cm/must-gather-8fp4b" event={"ID":"480bea75-1d63-4af0-b2e2-b7bf9d804872","Type":"ContainerStarted","Data":"94d929ff75417ef1dfc2ec83d086ac7d64fa638d8e92d7ec9654e4e571f98931"} Mar 09 19:10:28 crc kubenswrapper[4821]: I0309 19:10:28.406063 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Mar 09 19:10:28 crc kubenswrapper[4821]: I0309 19:10:28.551553 4821 scope.go:117] "RemoveContainer" containerID="2e97656a0c5c8ccdf249fb34ee3c3f4cd8501de8fced67c6294f9be90f6444d8" Mar 09 19:10:28 crc kubenswrapper[4821]: E0309 19:10:28.551852 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 19:10:34 crc kubenswrapper[4821]: I0309 19:10:34.405306 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b86cm/must-gather-8fp4b" event={"ID":"480bea75-1d63-4af0-b2e2-b7bf9d804872","Type":"ContainerStarted","Data":"fcd9e23f2ed8e2559f8009cdff8249083af2f76248cd4cdd2664a937d264d1b2"} Mar 09 19:10:34 crc kubenswrapper[4821]: I0309 19:10:34.405917 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b86cm/must-gather-8fp4b" 
event={"ID":"480bea75-1d63-4af0-b2e2-b7bf9d804872","Type":"ContainerStarted","Data":"f9d733b77bf8acee79bc4b1a908dd4060a297f0f5542a6d73e7086d5af517ee0"} Mar 09 19:10:34 crc kubenswrapper[4821]: I0309 19:10:34.438848 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-b86cm/must-gather-8fp4b" podStartSLOduration=2.133699465 podStartE2EDuration="8.438829559s" podCreationTimestamp="2026-03-09 19:10:26 +0000 UTC" firstStartedPulling="2026-03-09 19:10:26.939116616 +0000 UTC m=+2764.100492472" lastFinishedPulling="2026-03-09 19:10:33.24424671 +0000 UTC m=+2770.405622566" observedRunningTime="2026-03-09 19:10:34.432418014 +0000 UTC m=+2771.593793870" watchObservedRunningTime="2026-03-09 19:10:34.438829559 +0000 UTC m=+2771.600205435" Mar 09 19:10:41 crc kubenswrapper[4821]: I0309 19:10:41.552335 4821 scope.go:117] "RemoveContainer" containerID="2e97656a0c5c8ccdf249fb34ee3c3f4cd8501de8fced67c6294f9be90f6444d8" Mar 09 19:10:41 crc kubenswrapper[4821]: E0309 19:10:41.553055 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 19:10:50 crc kubenswrapper[4821]: I0309 19:10:50.150213 4821 scope.go:117] "RemoveContainer" containerID="7db01e802e33cdcaf4c936b120f2050c7ed7d60d7f55dc45397b7fa7fa489cd5" Mar 09 19:10:50 crc kubenswrapper[4821]: I0309 19:10:50.191523 4821 scope.go:117] "RemoveContainer" containerID="13483fcad99a15a4ea27ec93ba716664ff2729db9b3832c1f9a9c1870446aeab" Mar 09 19:10:50 crc kubenswrapper[4821]: I0309 19:10:50.213137 4821 scope.go:117] "RemoveContainer" containerID="89faf9c51503378749e26bf789fbd7cbd6104d2506003131485bf22c6520d940" Mar 09 
19:10:50 crc kubenswrapper[4821]: I0309 19:10:50.250307 4821 scope.go:117] "RemoveContainer" containerID="ef18b3b46f5ab418be43c17ff7f8ec18ef47577005b474faa397d00ccdc63cb4" Mar 09 19:10:51 crc kubenswrapper[4821]: I0309 19:10:51.347432 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cp2mb"] Mar 09 19:10:51 crc kubenswrapper[4821]: I0309 19:10:51.350781 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cp2mb" Mar 09 19:10:51 crc kubenswrapper[4821]: I0309 19:10:51.357053 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cp2mb"] Mar 09 19:10:51 crc kubenswrapper[4821]: I0309 19:10:51.380896 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd30dace-f837-4320-8446-8938a997ef65-catalog-content\") pod \"redhat-marketplace-cp2mb\" (UID: \"dd30dace-f837-4320-8446-8938a997ef65\") " pod="openshift-marketplace/redhat-marketplace-cp2mb" Mar 09 19:10:51 crc kubenswrapper[4821]: I0309 19:10:51.381107 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd30dace-f837-4320-8446-8938a997ef65-utilities\") pod \"redhat-marketplace-cp2mb\" (UID: \"dd30dace-f837-4320-8446-8938a997ef65\") " pod="openshift-marketplace/redhat-marketplace-cp2mb" Mar 09 19:10:51 crc kubenswrapper[4821]: I0309 19:10:51.381155 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mgtx\" (UniqueName: \"kubernetes.io/projected/dd30dace-f837-4320-8446-8938a997ef65-kube-api-access-5mgtx\") pod \"redhat-marketplace-cp2mb\" (UID: \"dd30dace-f837-4320-8446-8938a997ef65\") " pod="openshift-marketplace/redhat-marketplace-cp2mb" Mar 09 19:10:51 crc kubenswrapper[4821]: I0309 
19:10:51.482270 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd30dace-f837-4320-8446-8938a997ef65-catalog-content\") pod \"redhat-marketplace-cp2mb\" (UID: \"dd30dace-f837-4320-8446-8938a997ef65\") " pod="openshift-marketplace/redhat-marketplace-cp2mb" Mar 09 19:10:51 crc kubenswrapper[4821]: I0309 19:10:51.482342 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd30dace-f837-4320-8446-8938a997ef65-utilities\") pod \"redhat-marketplace-cp2mb\" (UID: \"dd30dace-f837-4320-8446-8938a997ef65\") " pod="openshift-marketplace/redhat-marketplace-cp2mb" Mar 09 19:10:51 crc kubenswrapper[4821]: I0309 19:10:51.482384 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mgtx\" (UniqueName: \"kubernetes.io/projected/dd30dace-f837-4320-8446-8938a997ef65-kube-api-access-5mgtx\") pod \"redhat-marketplace-cp2mb\" (UID: \"dd30dace-f837-4320-8446-8938a997ef65\") " pod="openshift-marketplace/redhat-marketplace-cp2mb" Mar 09 19:10:51 crc kubenswrapper[4821]: I0309 19:10:51.482813 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd30dace-f837-4320-8446-8938a997ef65-catalog-content\") pod \"redhat-marketplace-cp2mb\" (UID: \"dd30dace-f837-4320-8446-8938a997ef65\") " pod="openshift-marketplace/redhat-marketplace-cp2mb" Mar 09 19:10:51 crc kubenswrapper[4821]: I0309 19:10:51.482847 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd30dace-f837-4320-8446-8938a997ef65-utilities\") pod \"redhat-marketplace-cp2mb\" (UID: \"dd30dace-f837-4320-8446-8938a997ef65\") " pod="openshift-marketplace/redhat-marketplace-cp2mb" Mar 09 19:10:51 crc kubenswrapper[4821]: I0309 19:10:51.509618 4821 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5mgtx\" (UniqueName: \"kubernetes.io/projected/dd30dace-f837-4320-8446-8938a997ef65-kube-api-access-5mgtx\") pod \"redhat-marketplace-cp2mb\" (UID: \"dd30dace-f837-4320-8446-8938a997ef65\") " pod="openshift-marketplace/redhat-marketplace-cp2mb"
Mar 09 19:10:51 crc kubenswrapper[4821]: I0309 19:10:51.676154 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cp2mb"
Mar 09 19:10:52 crc kubenswrapper[4821]: I0309 19:10:52.139949 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cp2mb"]
Mar 09 19:10:52 crc kubenswrapper[4821]: W0309 19:10:52.142792 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd30dace_f837_4320_8446_8938a997ef65.slice/crio-bfc925d022209a9afa4ae3261a24625632e9e95db363fd8ace811741bfaaf27d WatchSource:0}: Error finding container bfc925d022209a9afa4ae3261a24625632e9e95db363fd8ace811741bfaaf27d: Status 404 returned error can't find the container with id bfc925d022209a9afa4ae3261a24625632e9e95db363fd8ace811741bfaaf27d
Mar 09 19:10:52 crc kubenswrapper[4821]: I0309 19:10:52.551404 4821 scope.go:117] "RemoveContainer" containerID="2e97656a0c5c8ccdf249fb34ee3c3f4cd8501de8fced67c6294f9be90f6444d8"
Mar 09 19:10:52 crc kubenswrapper[4821]: E0309 19:10:52.551997 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add"
Mar 09 19:10:52 crc kubenswrapper[4821]: I0309 19:10:52.555825 4821 generic.go:334] "Generic (PLEG): container finished" podID="dd30dace-f837-4320-8446-8938a997ef65" containerID="57d88698791233988f8e02766eb5f9596c35cf12fdbd9b8e105006d3565fefdb" exitCode=0
Mar 09 19:10:52 crc kubenswrapper[4821]: I0309 19:10:52.555889 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cp2mb" event={"ID":"dd30dace-f837-4320-8446-8938a997ef65","Type":"ContainerDied","Data":"57d88698791233988f8e02766eb5f9596c35cf12fdbd9b8e105006d3565fefdb"}
Mar 09 19:10:52 crc kubenswrapper[4821]: I0309 19:10:52.555957 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cp2mb" event={"ID":"dd30dace-f837-4320-8446-8938a997ef65","Type":"ContainerStarted","Data":"bfc925d022209a9afa4ae3261a24625632e9e95db363fd8ace811741bfaaf27d"}
Mar 09 19:10:53 crc kubenswrapper[4821]: I0309 19:10:53.569656 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cp2mb" event={"ID":"dd30dace-f837-4320-8446-8938a997ef65","Type":"ContainerStarted","Data":"f67ad2e7e4a10719027911bf1c785aae5e1611ec778aae03916c38ae913f8ebf"}
Mar 09 19:10:54 crc kubenswrapper[4821]: I0309 19:10:54.581131 4821 generic.go:334] "Generic (PLEG): container finished" podID="dd30dace-f837-4320-8446-8938a997ef65" containerID="f67ad2e7e4a10719027911bf1c785aae5e1611ec778aae03916c38ae913f8ebf" exitCode=0
Mar 09 19:10:54 crc kubenswrapper[4821]: I0309 19:10:54.581171 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cp2mb" event={"ID":"dd30dace-f837-4320-8446-8938a997ef65","Type":"ContainerDied","Data":"f67ad2e7e4a10719027911bf1c785aae5e1611ec778aae03916c38ae913f8ebf"}
Mar 09 19:10:54 crc kubenswrapper[4821]: I0309 19:10:54.581196 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cp2mb" event={"ID":"dd30dace-f837-4320-8446-8938a997ef65","Type":"ContainerStarted","Data":"263561b26e520e0f6d95975676ebe2e66278daa73ede988ad06d65f78b1a7b06"}
Mar 09 19:10:54 crc kubenswrapper[4821]: I0309 19:10:54.602519 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cp2mb" podStartSLOduration=2.149469845 podStartE2EDuration="3.602497617s" podCreationTimestamp="2026-03-09 19:10:51 +0000 UTC" firstStartedPulling="2026-03-09 19:10:52.557892029 +0000 UTC m=+2789.719267895" lastFinishedPulling="2026-03-09 19:10:54.010919811 +0000 UTC m=+2791.172295667" observedRunningTime="2026-03-09 19:10:54.594750438 +0000 UTC m=+2791.756126304" watchObservedRunningTime="2026-03-09 19:10:54.602497617 +0000 UTC m=+2791.763873473"
Mar 09 19:11:01 crc kubenswrapper[4821]: I0309 19:11:01.677009 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cp2mb"
Mar 09 19:11:01 crc kubenswrapper[4821]: I0309 19:11:01.678422 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cp2mb"
Mar 09 19:11:01 crc kubenswrapper[4821]: I0309 19:11:01.746187 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cp2mb"
Mar 09 19:11:02 crc kubenswrapper[4821]: I0309 19:11:02.688536 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cp2mb"
Mar 09 19:11:05 crc kubenswrapper[4821]: I0309 19:11:05.338151 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cp2mb"]
Mar 09 19:11:05 crc kubenswrapper[4821]: I0309 19:11:05.339260 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cp2mb" podUID="dd30dace-f837-4320-8446-8938a997ef65" containerName="registry-server" containerID="cri-o://263561b26e520e0f6d95975676ebe2e66278daa73ede988ad06d65f78b1a7b06" gracePeriod=2
Mar 09 19:11:05 crc kubenswrapper[4821]: I0309 19:11:05.551257 4821 scope.go:117] "RemoveContainer" containerID="2e97656a0c5c8ccdf249fb34ee3c3f4cd8501de8fced67c6294f9be90f6444d8"
Mar 09 19:11:05 crc kubenswrapper[4821]: E0309 19:11:05.551554 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add"
Mar 09 19:11:05 crc kubenswrapper[4821]: I0309 19:11:05.675634 4821 generic.go:334] "Generic (PLEG): container finished" podID="dd30dace-f837-4320-8446-8938a997ef65" containerID="263561b26e520e0f6d95975676ebe2e66278daa73ede988ad06d65f78b1a7b06" exitCode=0
Mar 09 19:11:05 crc kubenswrapper[4821]: I0309 19:11:05.675937 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cp2mb" event={"ID":"dd30dace-f837-4320-8446-8938a997ef65","Type":"ContainerDied","Data":"263561b26e520e0f6d95975676ebe2e66278daa73ede988ad06d65f78b1a7b06"}
Mar 09 19:11:05 crc kubenswrapper[4821]: I0309 19:11:05.781835 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cp2mb"
Mar 09 19:11:05 crc kubenswrapper[4821]: I0309 19:11:05.817501 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mgtx\" (UniqueName: \"kubernetes.io/projected/dd30dace-f837-4320-8446-8938a997ef65-kube-api-access-5mgtx\") pod \"dd30dace-f837-4320-8446-8938a997ef65\" (UID: \"dd30dace-f837-4320-8446-8938a997ef65\") "
Mar 09 19:11:05 crc kubenswrapper[4821]: I0309 19:11:05.817653 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd30dace-f837-4320-8446-8938a997ef65-catalog-content\") pod \"dd30dace-f837-4320-8446-8938a997ef65\" (UID: \"dd30dace-f837-4320-8446-8938a997ef65\") "
Mar 09 19:11:05 crc kubenswrapper[4821]: I0309 19:11:05.817737 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd30dace-f837-4320-8446-8938a997ef65-utilities\") pod \"dd30dace-f837-4320-8446-8938a997ef65\" (UID: \"dd30dace-f837-4320-8446-8938a997ef65\") "
Mar 09 19:11:05 crc kubenswrapper[4821]: I0309 19:11:05.819190 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd30dace-f837-4320-8446-8938a997ef65-utilities" (OuterVolumeSpecName: "utilities") pod "dd30dace-f837-4320-8446-8938a997ef65" (UID: "dd30dace-f837-4320-8446-8938a997ef65"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:11:05 crc kubenswrapper[4821]: I0309 19:11:05.826507 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd30dace-f837-4320-8446-8938a997ef65-kube-api-access-5mgtx" (OuterVolumeSpecName: "kube-api-access-5mgtx") pod "dd30dace-f837-4320-8446-8938a997ef65" (UID: "dd30dace-f837-4320-8446-8938a997ef65"). InnerVolumeSpecName "kube-api-access-5mgtx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:11:05 crc kubenswrapper[4821]: I0309 19:11:05.851844 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd30dace-f837-4320-8446-8938a997ef65-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd30dace-f837-4320-8446-8938a997ef65" (UID: "dd30dace-f837-4320-8446-8938a997ef65"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:11:05 crc kubenswrapper[4821]: I0309 19:11:05.918992 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mgtx\" (UniqueName: \"kubernetes.io/projected/dd30dace-f837-4320-8446-8938a997ef65-kube-api-access-5mgtx\") on node \"crc\" DevicePath \"\""
Mar 09 19:11:05 crc kubenswrapper[4821]: I0309 19:11:05.919038 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd30dace-f837-4320-8446-8938a997ef65-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 09 19:11:05 crc kubenswrapper[4821]: I0309 19:11:05.919048 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd30dace-f837-4320-8446-8938a997ef65-utilities\") on node \"crc\" DevicePath \"\""
Mar 09 19:11:06 crc kubenswrapper[4821]: I0309 19:11:06.687196 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cp2mb" event={"ID":"dd30dace-f837-4320-8446-8938a997ef65","Type":"ContainerDied","Data":"bfc925d022209a9afa4ae3261a24625632e9e95db363fd8ace811741bfaaf27d"}
Mar 09 19:11:06 crc kubenswrapper[4821]: I0309 19:11:06.687585 4821 scope.go:117] "RemoveContainer" containerID="263561b26e520e0f6d95975676ebe2e66278daa73ede988ad06d65f78b1a7b06"
Mar 09 19:11:06 crc kubenswrapper[4821]: I0309 19:11:06.687230 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cp2mb"
Mar 09 19:11:06 crc kubenswrapper[4821]: I0309 19:11:06.707795 4821 scope.go:117] "RemoveContainer" containerID="f67ad2e7e4a10719027911bf1c785aae5e1611ec778aae03916c38ae913f8ebf"
Mar 09 19:11:06 crc kubenswrapper[4821]: I0309 19:11:06.721129 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cp2mb"]
Mar 09 19:11:06 crc kubenswrapper[4821]: I0309 19:11:06.733765 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cp2mb"]
Mar 09 19:11:06 crc kubenswrapper[4821]: I0309 19:11:06.745728 4821 scope.go:117] "RemoveContainer" containerID="57d88698791233988f8e02766eb5f9596c35cf12fdbd9b8e105006d3565fefdb"
Mar 09 19:11:07 crc kubenswrapper[4821]: I0309 19:11:07.561650 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd30dace-f837-4320-8446-8938a997ef65" path="/var/lib/kubelet/pods/dd30dace-f837-4320-8446-8938a997ef65/volumes"
Mar 09 19:11:16 crc kubenswrapper[4821]: I0309 19:11:16.552777 4821 scope.go:117] "RemoveContainer" containerID="2e97656a0c5c8ccdf249fb34ee3c3f4cd8501de8fced67c6294f9be90f6444d8"
Mar 09 19:11:16 crc kubenswrapper[4821]: E0309 19:11:16.554942 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add"
Mar 09 19:11:30 crc kubenswrapper[4821]: I0309 19:11:30.552229 4821 scope.go:117] "RemoveContainer" containerID="2e97656a0c5c8ccdf249fb34ee3c3f4cd8501de8fced67c6294f9be90f6444d8"
Mar 09 19:11:30 crc kubenswrapper[4821]: I0309 19:11:30.892199 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" event={"ID":"3270571a-a484-4e66-8035-f43509b58add","Type":"ContainerStarted","Data":"9b39979df97b20549ac7c425f2bb268de75776162f0624b872b91574e85e8541"}
Mar 09 19:11:45 crc kubenswrapper[4821]: I0309 19:11:45.494804 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm_7022fc4e-6faf-4abb-9677-963728a8d91d/util/0.log"
Mar 09 19:11:45 crc kubenswrapper[4821]: I0309 19:11:45.753298 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm_7022fc4e-6faf-4abb-9677-963728a8d91d/pull/0.log"
Mar 09 19:11:45 crc kubenswrapper[4821]: I0309 19:11:45.770439 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm_7022fc4e-6faf-4abb-9677-963728a8d91d/pull/0.log"
Mar 09 19:11:45 crc kubenswrapper[4821]: I0309 19:11:45.782445 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm_7022fc4e-6faf-4abb-9677-963728a8d91d/util/0.log"
Mar 09 19:11:45 crc kubenswrapper[4821]: I0309 19:11:45.934273 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm_7022fc4e-6faf-4abb-9677-963728a8d91d/pull/0.log"
Mar 09 19:11:45 crc kubenswrapper[4821]: I0309 19:11:45.951626 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm_7022fc4e-6faf-4abb-9677-963728a8d91d/extract/0.log"
Mar 09 19:11:45 crc kubenswrapper[4821]: I0309 19:11:45.964149 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5eea759a77a44b7d379d7a90e28614a746a8848e17a3c9b1bbf53168bfgr2wm_7022fc4e-6faf-4abb-9677-963728a8d91d/util/0.log"
Mar 09 19:11:46 crc kubenswrapper[4821]: I0309 19:11:46.119467 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx_175451d8-941f-4b65-a51c-60ec0d7427d1/util/0.log"
Mar 09 19:11:46 crc kubenswrapper[4821]: I0309 19:11:46.294602 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx_175451d8-941f-4b65-a51c-60ec0d7427d1/pull/0.log"
Mar 09 19:11:46 crc kubenswrapper[4821]: I0309 19:11:46.333872 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx_175451d8-941f-4b65-a51c-60ec0d7427d1/pull/0.log"
Mar 09 19:11:46 crc kubenswrapper[4821]: I0309 19:11:46.355654 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx_175451d8-941f-4b65-a51c-60ec0d7427d1/util/0.log"
Mar 09 19:11:46 crc kubenswrapper[4821]: I0309 19:11:46.518139 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx_175451d8-941f-4b65-a51c-60ec0d7427d1/pull/0.log"
Mar 09 19:11:46 crc kubenswrapper[4821]: I0309 19:11:46.554365 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx_175451d8-941f-4b65-a51c-60ec0d7427d1/extract/0.log"
Mar 09 19:11:46 crc kubenswrapper[4821]: I0309 19:11:46.558395 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_76ea107b81a63add7970cc5182c9b5e3e7d7b3003be3f19b2a6bc21659g9hwx_175451d8-941f-4b65-a51c-60ec0d7427d1/util/0.log"
Mar 09 19:11:47 crc kubenswrapper[4821]: I0309 19:11:47.041801 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-5d87c9d997-5rbnb_bb4823b7-c205-41c0-ba4d-d909ad9ff9cb/manager/0.log"
Mar 09 19:11:47 crc kubenswrapper[4821]: I0309 19:11:47.291160 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-64db6967f8-hjztj_7507717c-322f-43de-88ba-fc79b6a5a3f0/manager/0.log"
Mar 09 19:11:47 crc kubenswrapper[4821]: I0309 19:11:47.544994 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-cf99c678f-k4q8b_0b492a45-c917-4c00-abef-13abf40e71d1/manager/0.log"
Mar 09 19:11:47 crc kubenswrapper[4821]: I0309 19:11:47.833657 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-78bc7f9bd9-vhljc_100889e4-2f00-4685-a5a7-6f9b73bb343f/manager/0.log"
Mar 09 19:11:48 crc kubenswrapper[4821]: I0309 19:11:48.018049 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-cjvgb_0a1af309-4a43-4d58-8912-abc1ed1e626a/manager/0.log"
Mar 09 19:11:48 crc kubenswrapper[4821]: I0309 19:11:48.279743 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-f7fcc58b9-rldv2_c9d3c230-c74c-4cc4-af9f-f23fd5d9557c/manager/0.log"
Mar 09 19:11:48 crc kubenswrapper[4821]: I0309 19:11:48.485442 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-545456dc4-wzvf8_772511ff-89ac-4190-8142-3bf3e4ef8423/manager/0.log"
Mar 09 19:11:48 crc kubenswrapper[4821]: I0309 19:11:48.664513 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7c789f89c6-pc4fs_71c62d87-8310-4ebd-8449-df18a56dc391/manager/0.log"
Mar 09 19:11:48 crc kubenswrapper[4821]: I0309 19:11:48.804497 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-dhq9j_d878ceb7-5af9-4a91-82cb-ed03b73f1b1d/manager/0.log"
Mar 09 19:11:49 crc kubenswrapper[4821]: I0309 19:11:49.163982 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b6bfb6475-9jwph_b10f4933-a23d-4c0b-9834-40caa60b158c/manager/0.log"
Mar 09 19:11:49 crc kubenswrapper[4821]: I0309 19:11:49.178898 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-54688575f-974k8_89a79a12-ce90-47f7-b0c4-c0976d7a4b1f/manager/0.log"
Mar 09 19:11:49 crc kubenswrapper[4821]: I0309 19:11:49.385422 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-74b6b5dc96-qz976_d6f3f569-2d6b-4c06-a814-de946397de51/manager/0.log"
Mar 09 19:11:49 crc kubenswrapper[4821]: I0309 19:11:49.416641 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5d86c7ddb7-6864w_d1eba3e1-a741-4ca6-a97e-c42565f64d2b/manager/0.log"
Mar 09 19:11:49 crc kubenswrapper[4821]: I0309 19:11:49.606442 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cp28hq_212b84ba-bcda-4820-8388-7d2ef286b7a1/manager/0.log"
Mar 09 19:11:49 crc kubenswrapper[4821]: I0309 19:11:49.868618 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-cjjbr_43193e76-c853-4bc6-89e4-12ff09c8fbcb/registry-server/0.log"
Mar 09 19:11:50 crc kubenswrapper[4821]: I0309 19:11:50.113196 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-75684d597f-w9h2t_6bc651e4-1359-43b1-bc53-1a561195cf4a/manager/0.log"
Mar 09 19:11:50 crc kubenswrapper[4821]: I0309 19:11:50.193288 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-648564c9fc-wbjvw_d498150e-134b-4359-92c6-300b8fbe3b1a/manager/0.log"
Mar 09 19:11:50 crc kubenswrapper[4821]: I0309 19:11:50.261660 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-64797568c9-7qbhc_9162d85f-f6f9-4a12-8511-d11676a6398a/manager/0.log"
Mar 09 19:11:50 crc kubenswrapper[4821]: I0309 19:11:50.393087 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-xzdll_172ecee8-2a7b-4e13-b095-ca2a442932d2/operator/0.log"
Mar 09 19:11:50 crc kubenswrapper[4821]: I0309 19:11:50.427137 4821 scope.go:117] "RemoveContainer" containerID="bbf41a98e9a221c04d5e5acc3d9145917b59d65cf1ce36493189851b51caee25"
Mar 09 19:11:50 crc kubenswrapper[4821]: I0309 19:11:50.457284 4821 scope.go:117] "RemoveContainer" containerID="1d1b0bbbb348b5632bf9702642b54e7f54f97e8538f2bcc4e67c4f742df4092b"
Mar 09 19:11:50 crc kubenswrapper[4821]: I0309 19:11:50.489442 4821 scope.go:117] "RemoveContainer" containerID="7c516a1e24ff07a5e59aaeb17ab65885c5da2a3e3e8a914e3953ec8141440a6b"
Mar 09 19:11:50 crc kubenswrapper[4821]: I0309 19:11:50.527414 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9b9ff9f4d-k9s84_528fcc81-e85c-4764-9413-3957ba8c6fd2/manager/0.log"
Mar 09 19:11:50 crc kubenswrapper[4821]: I0309 19:11:50.536884 4821 scope.go:117] "RemoveContainer" containerID="3a87811a895d48f6a5afffc6c42efd61c1b8deb985bcfd5286f2da1252a941a9"
Mar 09 19:11:50 crc kubenswrapper[4821]: I0309 19:11:50.572269 4821 scope.go:117] "RemoveContainer" containerID="3c4837eaf76e6a0333da9b6862498a1f1f48426a07ed7c1c19e08db7508665ab"
Mar 09 19:11:50 crc kubenswrapper[4821]: I0309 19:11:50.632174 4821 scope.go:117] "RemoveContainer" containerID="f43d67e7257908b5a02f5391464da6fe1a5c097581d0630c883e2097135fbcb7"
Mar 09 19:11:50 crc kubenswrapper[4821]: I0309 19:11:50.790541 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-55b5ff4dbb-z5d4k_2e967d7a-a1cf-44b9-ae66-62c4c5c81b55/manager/0.log"
Mar 09 19:11:51 crc kubenswrapper[4821]: I0309 19:11:51.047900 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5fdb694969-zffr5_28a07a44-f359-40b3-a2d4-850cb3822cb4/manager/0.log"
Mar 09 19:11:51 crc kubenswrapper[4821]: I0309 19:11:51.212420 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-index-snmkh_81e65025-6a00-4e95-83fb-ccf57455d09e/registry-server/0.log"
Mar 09 19:11:51 crc kubenswrapper[4821]: I0309 19:11:51.386211 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-85b655bd8f-llgvv_3e660422-3d8e-4716-b1df-6aa0d193e8f6/manager/0.log"
Mar 09 19:11:53 crc kubenswrapper[4821]: I0309 19:11:53.033559 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6db6876945-9vg4l_9eb96ad1-a011-482f-bbdd-edfd673217b5/manager/0.log"
Mar 09 19:12:00 crc kubenswrapper[4821]: I0309 19:12:00.141870 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29551392-f2pbf"]
Mar 09 19:12:00 crc kubenswrapper[4821]: E0309 19:12:00.142590 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd30dace-f837-4320-8446-8938a997ef65" containerName="registry-server"
Mar 09 19:12:00 crc kubenswrapper[4821]: I0309 19:12:00.142601 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd30dace-f837-4320-8446-8938a997ef65" containerName="registry-server"
Mar 09 19:12:00 crc kubenswrapper[4821]: E0309 19:12:00.142620 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd30dace-f837-4320-8446-8938a997ef65" containerName="extract-utilities"
Mar 09 19:12:00 crc kubenswrapper[4821]: I0309 19:12:00.142627 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd30dace-f837-4320-8446-8938a997ef65" containerName="extract-utilities"
Mar 09 19:12:00 crc kubenswrapper[4821]: E0309 19:12:00.142643 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd30dace-f837-4320-8446-8938a997ef65" containerName="extract-content"
Mar 09 19:12:00 crc kubenswrapper[4821]: I0309 19:12:00.142649 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd30dace-f837-4320-8446-8938a997ef65" containerName="extract-content"
Mar 09 19:12:00 crc kubenswrapper[4821]: I0309 19:12:00.142790 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd30dace-f837-4320-8446-8938a997ef65" containerName="registry-server"
Mar 09 19:12:00 crc kubenswrapper[4821]: I0309 19:12:00.143280 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551392-f2pbf"
Mar 09 19:12:00 crc kubenswrapper[4821]: I0309 19:12:00.146816 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 09 19:12:00 crc kubenswrapper[4821]: I0309 19:12:00.146851 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-mxq7c"
Mar 09 19:12:00 crc kubenswrapper[4821]: I0309 19:12:00.146936 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 09 19:12:00 crc kubenswrapper[4821]: I0309 19:12:00.154755 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551392-f2pbf"]
Mar 09 19:12:00 crc kubenswrapper[4821]: I0309 19:12:00.189512 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qg2d\" (UniqueName: \"kubernetes.io/projected/fe2f8900-837d-4b97-81c8-1ebb0f5a49bd-kube-api-access-4qg2d\") pod \"auto-csr-approver-29551392-f2pbf\" (UID: \"fe2f8900-837d-4b97-81c8-1ebb0f5a49bd\") " pod="openshift-infra/auto-csr-approver-29551392-f2pbf"
Mar 09 19:12:00 crc kubenswrapper[4821]: I0309 19:12:00.291354 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qg2d\" (UniqueName: \"kubernetes.io/projected/fe2f8900-837d-4b97-81c8-1ebb0f5a49bd-kube-api-access-4qg2d\") pod \"auto-csr-approver-29551392-f2pbf\" (UID: \"fe2f8900-837d-4b97-81c8-1ebb0f5a49bd\") " pod="openshift-infra/auto-csr-approver-29551392-f2pbf"
Mar 09 19:12:00 crc kubenswrapper[4821]: I0309 19:12:00.309438 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qg2d\" (UniqueName: \"kubernetes.io/projected/fe2f8900-837d-4b97-81c8-1ebb0f5a49bd-kube-api-access-4qg2d\") pod \"auto-csr-approver-29551392-f2pbf\" (UID: \"fe2f8900-837d-4b97-81c8-1ebb0f5a49bd\") " pod="openshift-infra/auto-csr-approver-29551392-f2pbf"
Mar 09 19:12:00 crc kubenswrapper[4821]: I0309 19:12:00.461340 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551392-f2pbf"
Mar 09 19:12:00 crc kubenswrapper[4821]: I0309 19:12:00.905925 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551392-f2pbf"]
Mar 09 19:12:00 crc kubenswrapper[4821]: I0309 19:12:00.918372 4821 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 09 19:12:01 crc kubenswrapper[4821]: I0309 19:12:01.162160 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551392-f2pbf" event={"ID":"fe2f8900-837d-4b97-81c8-1ebb0f5a49bd","Type":"ContainerStarted","Data":"58e1c93ca5bd6da64c52f74c9ca1984810cd3be8b853d216a21e7d04c2fc03de"}
Mar 09 19:12:02 crc kubenswrapper[4821]: E0309 19:12:02.692708 4821 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe2f8900_837d_4b97_81c8_1ebb0f5a49bd.slice/crio-14e0c11cd8b2a9b1311fc8576908b139c55fde6aa9ba421085594031ea290ce8.scope\": RecentStats: unable to find data in memory cache]"
Mar 09 19:12:03 crc kubenswrapper[4821]: I0309 19:12:03.181634 4821 generic.go:334] "Generic (PLEG): container finished" podID="fe2f8900-837d-4b97-81c8-1ebb0f5a49bd" containerID="14e0c11cd8b2a9b1311fc8576908b139c55fde6aa9ba421085594031ea290ce8" exitCode=0
Mar 09 19:12:03 crc kubenswrapper[4821]: I0309 19:12:03.181683 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551392-f2pbf" event={"ID":"fe2f8900-837d-4b97-81c8-1ebb0f5a49bd","Type":"ContainerDied","Data":"14e0c11cd8b2a9b1311fc8576908b139c55fde6aa9ba421085594031ea290ce8"}
Mar 09 19:12:04 crc kubenswrapper[4821]: I0309 19:12:04.565596 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551392-f2pbf"
Mar 09 19:12:04 crc kubenswrapper[4821]: I0309 19:12:04.664034 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qg2d\" (UniqueName: \"kubernetes.io/projected/fe2f8900-837d-4b97-81c8-1ebb0f5a49bd-kube-api-access-4qg2d\") pod \"fe2f8900-837d-4b97-81c8-1ebb0f5a49bd\" (UID: \"fe2f8900-837d-4b97-81c8-1ebb0f5a49bd\") "
Mar 09 19:12:04 crc kubenswrapper[4821]: I0309 19:12:04.670605 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe2f8900-837d-4b97-81c8-1ebb0f5a49bd-kube-api-access-4qg2d" (OuterVolumeSpecName: "kube-api-access-4qg2d") pod "fe2f8900-837d-4b97-81c8-1ebb0f5a49bd" (UID: "fe2f8900-837d-4b97-81c8-1ebb0f5a49bd"). InnerVolumeSpecName "kube-api-access-4qg2d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:12:04 crc kubenswrapper[4821]: I0309 19:12:04.766441 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qg2d\" (UniqueName: \"kubernetes.io/projected/fe2f8900-837d-4b97-81c8-1ebb0f5a49bd-kube-api-access-4qg2d\") on node \"crc\" DevicePath \"\""
Mar 09 19:12:05 crc kubenswrapper[4821]: I0309 19:12:05.198657 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551392-f2pbf" event={"ID":"fe2f8900-837d-4b97-81c8-1ebb0f5a49bd","Type":"ContainerDied","Data":"58e1c93ca5bd6da64c52f74c9ca1984810cd3be8b853d216a21e7d04c2fc03de"}
Mar 09 19:12:05 crc kubenswrapper[4821]: I0309 19:12:05.198697 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58e1c93ca5bd6da64c52f74c9ca1984810cd3be8b853d216a21e7d04c2fc03de"
Mar 09 19:12:05 crc kubenswrapper[4821]: I0309 19:12:05.198705 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551392-f2pbf"
Mar 09 19:12:05 crc kubenswrapper[4821]: I0309 19:12:05.635941 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29551386-fqtgc"]
Mar 09 19:12:05 crc kubenswrapper[4821]: I0309 19:12:05.641925 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29551386-fqtgc"]
Mar 09 19:12:07 crc kubenswrapper[4821]: I0309 19:12:07.561539 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="116854a1-ac31-4634-8373-53ce3889d5e0" path="/var/lib/kubelet/pods/116854a1-ac31-4634-8373-53ce3889d5e0/volumes"
Mar 09 19:12:13 crc kubenswrapper[4821]: I0309 19:12:13.417728 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-dbk5m_04e006fb-bb29-4683-b3a9-a17698564fa6/control-plane-machine-set-operator/0.log"
Mar 09 19:12:13 crc kubenswrapper[4821]: I0309 19:12:13.598861 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-h8j2t_a6f5ce9b-e0f2-4bbb-9a18-7c62bdb830e4/kube-rbac-proxy/0.log"
Mar 09 19:12:13 crc kubenswrapper[4821]: I0309 19:12:13.643414 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-h8j2t_a6f5ce9b-e0f2-4bbb-9a18-7c62bdb830e4/machine-api-operator/0.log"
Mar 09 19:12:28 crc kubenswrapper[4821]: I0309 19:12:28.216272 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-zmpfr_c9cd2b98-2171-4c11-abb5-a0e3db0a69d5/cert-manager-controller/0.log"
Mar 09 19:12:28 crc kubenswrapper[4821]: I0309 19:12:28.432944 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-2t9gn_2840cece-7d09-420e-8c47-85417d8032a9/cert-manager-cainjector/0.log"
Mar 09 19:12:28 crc kubenswrapper[4821]: I0309 19:12:28.466851 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-8k2s6_56627852-72af-4929-a17f-29e6675fdbfc/cert-manager-webhook/0.log"
Mar 09 19:12:31 crc kubenswrapper[4821]: I0309 19:12:31.542988 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bhnv5"]
Mar 09 19:12:31 crc kubenswrapper[4821]: E0309 19:12:31.543843 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe2f8900-837d-4b97-81c8-1ebb0f5a49bd" containerName="oc"
Mar 09 19:12:31 crc kubenswrapper[4821]: I0309 19:12:31.543858 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe2f8900-837d-4b97-81c8-1ebb0f5a49bd" containerName="oc"
Mar 09 19:12:31 crc kubenswrapper[4821]: I0309 19:12:31.544068 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe2f8900-837d-4b97-81c8-1ebb0f5a49bd" containerName="oc"
Mar 09 19:12:31 crc kubenswrapper[4821]: I0309 19:12:31.546556 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bhnv5"
Mar 09 19:12:31 crc kubenswrapper[4821]: I0309 19:12:31.585146 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e72b532f-299e-43f4-8118-dc4e7d820c85-catalog-content\") pod \"certified-operators-bhnv5\" (UID: \"e72b532f-299e-43f4-8118-dc4e7d820c85\") " pod="openshift-marketplace/certified-operators-bhnv5"
Mar 09 19:12:31 crc kubenswrapper[4821]: I0309 19:12:31.585792 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e72b532f-299e-43f4-8118-dc4e7d820c85-utilities\") pod \"certified-operators-bhnv5\" (UID: \"e72b532f-299e-43f4-8118-dc4e7d820c85\") " pod="openshift-marketplace/certified-operators-bhnv5"
Mar 09 19:12:31 crc kubenswrapper[4821]: I0309 19:12:31.585858 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqtj9\" (UniqueName: \"kubernetes.io/projected/e72b532f-299e-43f4-8118-dc4e7d820c85-kube-api-access-mqtj9\") pod \"certified-operators-bhnv5\" (UID: \"e72b532f-299e-43f4-8118-dc4e7d820c85\") " pod="openshift-marketplace/certified-operators-bhnv5"
Mar 09 19:12:31 crc kubenswrapper[4821]: I0309 19:12:31.587821 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bhnv5"]
Mar 09 19:12:31 crc kubenswrapper[4821]: I0309 19:12:31.686956 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e72b532f-299e-43f4-8118-dc4e7d820c85-utilities\") pod \"certified-operators-bhnv5\" (UID: \"e72b532f-299e-43f4-8118-dc4e7d820c85\") " pod="openshift-marketplace/certified-operators-bhnv5"
Mar 09 19:12:31 crc kubenswrapper[4821]: I0309 19:12:31.687025 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqtj9\" (UniqueName: \"kubernetes.io/projected/e72b532f-299e-43f4-8118-dc4e7d820c85-kube-api-access-mqtj9\") pod \"certified-operators-bhnv5\" (UID: \"e72b532f-299e-43f4-8118-dc4e7d820c85\") " pod="openshift-marketplace/certified-operators-bhnv5"
Mar 09 19:12:31 crc kubenswrapper[4821]: I0309 19:12:31.687116 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e72b532f-299e-43f4-8118-dc4e7d820c85-catalog-content\") pod \"certified-operators-bhnv5\" (UID: \"e72b532f-299e-43f4-8118-dc4e7d820c85\") " pod="openshift-marketplace/certified-operators-bhnv5"
Mar 09 19:12:31 crc kubenswrapper[4821]: I0309 19:12:31.687465 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e72b532f-299e-43f4-8118-dc4e7d820c85-utilities\") pod \"certified-operators-bhnv5\" (UID: \"e72b532f-299e-43f4-8118-dc4e7d820c85\") " pod="openshift-marketplace/certified-operators-bhnv5"
Mar 09 19:12:31 crc kubenswrapper[4821]: I0309 19:12:31.687544 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e72b532f-299e-43f4-8118-dc4e7d820c85-catalog-content\") pod \"certified-operators-bhnv5\" (UID: \"e72b532f-299e-43f4-8118-dc4e7d820c85\") " pod="openshift-marketplace/certified-operators-bhnv5"
Mar 09 19:12:31 crc kubenswrapper[4821]: I0309 19:12:31.711826 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqtj9\" (UniqueName: \"kubernetes.io/projected/e72b532f-299e-43f4-8118-dc4e7d820c85-kube-api-access-mqtj9\") pod \"certified-operators-bhnv5\" (UID: \"e72b532f-299e-43f4-8118-dc4e7d820c85\") " pod="openshift-marketplace/certified-operators-bhnv5"
Mar 09 19:12:31 crc kubenswrapper[4821]: I0309 19:12:31.869411 4821 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-bhnv5" Mar 09 19:12:32 crc kubenswrapper[4821]: I0309 19:12:32.350621 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bhnv5"] Mar 09 19:12:32 crc kubenswrapper[4821]: I0309 19:12:32.441418 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhnv5" event={"ID":"e72b532f-299e-43f4-8118-dc4e7d820c85","Type":"ContainerStarted","Data":"e169a119e08eb9abdf92d93c8951a8a995f5e39a0f7d357047f75431251efe22"} Mar 09 19:12:33 crc kubenswrapper[4821]: I0309 19:12:33.449717 4821 generic.go:334] "Generic (PLEG): container finished" podID="e72b532f-299e-43f4-8118-dc4e7d820c85" containerID="af394c156b919fba65b3dc9a5991eb251e4014277244a152459510a846b3c1af" exitCode=0 Mar 09 19:12:33 crc kubenswrapper[4821]: I0309 19:12:33.449776 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhnv5" event={"ID":"e72b532f-299e-43f4-8118-dc4e7d820c85","Type":"ContainerDied","Data":"af394c156b919fba65b3dc9a5991eb251e4014277244a152459510a846b3c1af"} Mar 09 19:12:34 crc kubenswrapper[4821]: I0309 19:12:34.460129 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhnv5" event={"ID":"e72b532f-299e-43f4-8118-dc4e7d820c85","Type":"ContainerStarted","Data":"880ee251b6fe3084b0502f2b2c8b5dd00627008b86ff2e1a8255ea2e6616796d"} Mar 09 19:12:35 crc kubenswrapper[4821]: I0309 19:12:35.469345 4821 generic.go:334] "Generic (PLEG): container finished" podID="e72b532f-299e-43f4-8118-dc4e7d820c85" containerID="880ee251b6fe3084b0502f2b2c8b5dd00627008b86ff2e1a8255ea2e6616796d" exitCode=0 Mar 09 19:12:35 crc kubenswrapper[4821]: I0309 19:12:35.469416 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhnv5" 
event={"ID":"e72b532f-299e-43f4-8118-dc4e7d820c85","Type":"ContainerDied","Data":"880ee251b6fe3084b0502f2b2c8b5dd00627008b86ff2e1a8255ea2e6616796d"} Mar 09 19:12:36 crc kubenswrapper[4821]: I0309 19:12:36.482414 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhnv5" event={"ID":"e72b532f-299e-43f4-8118-dc4e7d820c85","Type":"ContainerStarted","Data":"b88c8c6367d9fc2ccefb7998dab3a63c67cb2e44f2049f5d17ff7af99947c926"} Mar 09 19:12:36 crc kubenswrapper[4821]: I0309 19:12:36.511823 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bhnv5" podStartSLOduration=3.0564152509999998 podStartE2EDuration="5.511806262s" podCreationTimestamp="2026-03-09 19:12:31 +0000 UTC" firstStartedPulling="2026-03-09 19:12:33.451736339 +0000 UTC m=+2890.613112195" lastFinishedPulling="2026-03-09 19:12:35.90712734 +0000 UTC m=+2893.068503206" observedRunningTime="2026-03-09 19:12:36.511650808 +0000 UTC m=+2893.673026714" watchObservedRunningTime="2026-03-09 19:12:36.511806262 +0000 UTC m=+2893.673182118" Mar 09 19:12:41 crc kubenswrapper[4821]: I0309 19:12:41.870041 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bhnv5" Mar 09 19:12:41 crc kubenswrapper[4821]: I0309 19:12:41.870685 4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bhnv5" Mar 09 19:12:41 crc kubenswrapper[4821]: I0309 19:12:41.916217 4821 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bhnv5" Mar 09 19:12:42 crc kubenswrapper[4821]: I0309 19:12:42.560443 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5dcbbd79cf-sk2qd_be8695e5-622f-41f2-af2e-bd194fdefeb9/nmstate-console-plugin/0.log" Mar 09 19:12:42 crc kubenswrapper[4821]: I0309 19:12:42.579114 
4821 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bhnv5" Mar 09 19:12:42 crc kubenswrapper[4821]: I0309 19:12:42.744939 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-msftq_3a2bd74c-644c-4c41-9159-5c8eadc45763/nmstate-handler/0.log" Mar 09 19:12:42 crc kubenswrapper[4821]: I0309 19:12:42.801614 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-g2mff_f003e733-9aab-493c-ad84-3b6ec8bae6ee/kube-rbac-proxy/0.log" Mar 09 19:12:42 crc kubenswrapper[4821]: I0309 19:12:42.821558 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-g2mff_f003e733-9aab-493c-ad84-3b6ec8bae6ee/nmstate-metrics/0.log" Mar 09 19:12:43 crc kubenswrapper[4821]: I0309 19:12:43.011465 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-786f45cff4-hnr59_5857c061-39ca-4cdf-a64f-b2c5e60c6a35/nmstate-webhook/0.log" Mar 09 19:12:43 crc kubenswrapper[4821]: I0309 19:12:43.013870 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-75c5dccd6c-2j8gv_e720485c-7121-43fb-aa59-e383aad4c545/nmstate-operator/0.log" Mar 09 19:12:45 crc kubenswrapper[4821]: I0309 19:12:45.534287 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bhnv5"] Mar 09 19:12:45 crc kubenswrapper[4821]: I0309 19:12:45.534872 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bhnv5" podUID="e72b532f-299e-43f4-8118-dc4e7d820c85" containerName="registry-server" containerID="cri-o://b88c8c6367d9fc2ccefb7998dab3a63c67cb2e44f2049f5d17ff7af99947c926" gracePeriod=2 Mar 09 19:12:45 crc kubenswrapper[4821]: I0309 19:12:45.949252 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bhnv5" Mar 09 19:12:46 crc kubenswrapper[4821]: I0309 19:12:46.076199 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e72b532f-299e-43f4-8118-dc4e7d820c85-catalog-content\") pod \"e72b532f-299e-43f4-8118-dc4e7d820c85\" (UID: \"e72b532f-299e-43f4-8118-dc4e7d820c85\") " Mar 09 19:12:46 crc kubenswrapper[4821]: I0309 19:12:46.076352 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqtj9\" (UniqueName: \"kubernetes.io/projected/e72b532f-299e-43f4-8118-dc4e7d820c85-kube-api-access-mqtj9\") pod \"e72b532f-299e-43f4-8118-dc4e7d820c85\" (UID: \"e72b532f-299e-43f4-8118-dc4e7d820c85\") " Mar 09 19:12:46 crc kubenswrapper[4821]: I0309 19:12:46.076407 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e72b532f-299e-43f4-8118-dc4e7d820c85-utilities\") pod \"e72b532f-299e-43f4-8118-dc4e7d820c85\" (UID: \"e72b532f-299e-43f4-8118-dc4e7d820c85\") " Mar 09 19:12:46 crc kubenswrapper[4821]: I0309 19:12:46.077357 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e72b532f-299e-43f4-8118-dc4e7d820c85-utilities" (OuterVolumeSpecName: "utilities") pod "e72b532f-299e-43f4-8118-dc4e7d820c85" (UID: "e72b532f-299e-43f4-8118-dc4e7d820c85"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:12:46 crc kubenswrapper[4821]: I0309 19:12:46.084310 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e72b532f-299e-43f4-8118-dc4e7d820c85-kube-api-access-mqtj9" (OuterVolumeSpecName: "kube-api-access-mqtj9") pod "e72b532f-299e-43f4-8118-dc4e7d820c85" (UID: "e72b532f-299e-43f4-8118-dc4e7d820c85"). InnerVolumeSpecName "kube-api-access-mqtj9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:12:46 crc kubenswrapper[4821]: I0309 19:12:46.129737 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e72b532f-299e-43f4-8118-dc4e7d820c85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e72b532f-299e-43f4-8118-dc4e7d820c85" (UID: "e72b532f-299e-43f4-8118-dc4e7d820c85"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 19:12:46 crc kubenswrapper[4821]: I0309 19:12:46.178478 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqtj9\" (UniqueName: \"kubernetes.io/projected/e72b532f-299e-43f4-8118-dc4e7d820c85-kube-api-access-mqtj9\") on node \"crc\" DevicePath \"\"" Mar 09 19:12:46 crc kubenswrapper[4821]: I0309 19:12:46.178513 4821 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e72b532f-299e-43f4-8118-dc4e7d820c85-utilities\") on node \"crc\" DevicePath \"\"" Mar 09 19:12:46 crc kubenswrapper[4821]: I0309 19:12:46.178523 4821 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e72b532f-299e-43f4-8118-dc4e7d820c85-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 09 19:12:46 crc kubenswrapper[4821]: I0309 19:12:46.566707 4821 generic.go:334] "Generic (PLEG): container finished" podID="e72b532f-299e-43f4-8118-dc4e7d820c85" containerID="b88c8c6367d9fc2ccefb7998dab3a63c67cb2e44f2049f5d17ff7af99947c926" exitCode=0 Mar 09 19:12:46 crc kubenswrapper[4821]: I0309 19:12:46.566761 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhnv5" event={"ID":"e72b532f-299e-43f4-8118-dc4e7d820c85","Type":"ContainerDied","Data":"b88c8c6367d9fc2ccefb7998dab3a63c67cb2e44f2049f5d17ff7af99947c926"} Mar 09 19:12:46 crc kubenswrapper[4821]: I0309 19:12:46.566796 4821 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-bhnv5" event={"ID":"e72b532f-299e-43f4-8118-dc4e7d820c85","Type":"ContainerDied","Data":"e169a119e08eb9abdf92d93c8951a8a995f5e39a0f7d357047f75431251efe22"} Mar 09 19:12:46 crc kubenswrapper[4821]: I0309 19:12:46.566821 4821 scope.go:117] "RemoveContainer" containerID="b88c8c6367d9fc2ccefb7998dab3a63c67cb2e44f2049f5d17ff7af99947c926" Mar 09 19:12:46 crc kubenswrapper[4821]: I0309 19:12:46.568367 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bhnv5" Mar 09 19:12:46 crc kubenswrapper[4821]: I0309 19:12:46.597174 4821 scope.go:117] "RemoveContainer" containerID="880ee251b6fe3084b0502f2b2c8b5dd00627008b86ff2e1a8255ea2e6616796d" Mar 09 19:12:46 crc kubenswrapper[4821]: I0309 19:12:46.615719 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bhnv5"] Mar 09 19:12:46 crc kubenswrapper[4821]: I0309 19:12:46.625508 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bhnv5"] Mar 09 19:12:46 crc kubenswrapper[4821]: I0309 19:12:46.636041 4821 scope.go:117] "RemoveContainer" containerID="af394c156b919fba65b3dc9a5991eb251e4014277244a152459510a846b3c1af" Mar 09 19:12:46 crc kubenswrapper[4821]: I0309 19:12:46.673420 4821 scope.go:117] "RemoveContainer" containerID="b88c8c6367d9fc2ccefb7998dab3a63c67cb2e44f2049f5d17ff7af99947c926" Mar 09 19:12:46 crc kubenswrapper[4821]: E0309 19:12:46.673957 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b88c8c6367d9fc2ccefb7998dab3a63c67cb2e44f2049f5d17ff7af99947c926\": container with ID starting with b88c8c6367d9fc2ccefb7998dab3a63c67cb2e44f2049f5d17ff7af99947c926 not found: ID does not exist" containerID="b88c8c6367d9fc2ccefb7998dab3a63c67cb2e44f2049f5d17ff7af99947c926" Mar 09 19:12:46 crc kubenswrapper[4821]: I0309 
19:12:46.673988 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b88c8c6367d9fc2ccefb7998dab3a63c67cb2e44f2049f5d17ff7af99947c926"} err="failed to get container status \"b88c8c6367d9fc2ccefb7998dab3a63c67cb2e44f2049f5d17ff7af99947c926\": rpc error: code = NotFound desc = could not find container \"b88c8c6367d9fc2ccefb7998dab3a63c67cb2e44f2049f5d17ff7af99947c926\": container with ID starting with b88c8c6367d9fc2ccefb7998dab3a63c67cb2e44f2049f5d17ff7af99947c926 not found: ID does not exist" Mar 09 19:12:46 crc kubenswrapper[4821]: I0309 19:12:46.674011 4821 scope.go:117] "RemoveContainer" containerID="880ee251b6fe3084b0502f2b2c8b5dd00627008b86ff2e1a8255ea2e6616796d" Mar 09 19:12:46 crc kubenswrapper[4821]: E0309 19:12:46.674455 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"880ee251b6fe3084b0502f2b2c8b5dd00627008b86ff2e1a8255ea2e6616796d\": container with ID starting with 880ee251b6fe3084b0502f2b2c8b5dd00627008b86ff2e1a8255ea2e6616796d not found: ID does not exist" containerID="880ee251b6fe3084b0502f2b2c8b5dd00627008b86ff2e1a8255ea2e6616796d" Mar 09 19:12:46 crc kubenswrapper[4821]: I0309 19:12:46.674502 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"880ee251b6fe3084b0502f2b2c8b5dd00627008b86ff2e1a8255ea2e6616796d"} err="failed to get container status \"880ee251b6fe3084b0502f2b2c8b5dd00627008b86ff2e1a8255ea2e6616796d\": rpc error: code = NotFound desc = could not find container \"880ee251b6fe3084b0502f2b2c8b5dd00627008b86ff2e1a8255ea2e6616796d\": container with ID starting with 880ee251b6fe3084b0502f2b2c8b5dd00627008b86ff2e1a8255ea2e6616796d not found: ID does not exist" Mar 09 19:12:46 crc kubenswrapper[4821]: I0309 19:12:46.674533 4821 scope.go:117] "RemoveContainer" containerID="af394c156b919fba65b3dc9a5991eb251e4014277244a152459510a846b3c1af" Mar 09 19:12:46 crc 
kubenswrapper[4821]: E0309 19:12:46.674859 4821 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af394c156b919fba65b3dc9a5991eb251e4014277244a152459510a846b3c1af\": container with ID starting with af394c156b919fba65b3dc9a5991eb251e4014277244a152459510a846b3c1af not found: ID does not exist" containerID="af394c156b919fba65b3dc9a5991eb251e4014277244a152459510a846b3c1af" Mar 09 19:12:46 crc kubenswrapper[4821]: I0309 19:12:46.674902 4821 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af394c156b919fba65b3dc9a5991eb251e4014277244a152459510a846b3c1af"} err="failed to get container status \"af394c156b919fba65b3dc9a5991eb251e4014277244a152459510a846b3c1af\": rpc error: code = NotFound desc = could not find container \"af394c156b919fba65b3dc9a5991eb251e4014277244a152459510a846b3c1af\": container with ID starting with af394c156b919fba65b3dc9a5991eb251e4014277244a152459510a846b3c1af not found: ID does not exist" Mar 09 19:12:47 crc kubenswrapper[4821]: I0309 19:12:47.562894 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e72b532f-299e-43f4-8118-dc4e7d820c85" path="/var/lib/kubelet/pods/e72b532f-299e-43f4-8118-dc4e7d820c85/volumes" Mar 09 19:12:50 crc kubenswrapper[4821]: I0309 19:12:50.853589 4821 scope.go:117] "RemoveContainer" containerID="d987bcd0f29f1484414fd65e0f38e297988068006e4c624b3cb83f7e9a171d86" Mar 09 19:12:50 crc kubenswrapper[4821]: I0309 19:12:50.904943 4821 scope.go:117] "RemoveContainer" containerID="1abbb20b71f729c7c4eec46791df4b61176122d3e5f7c58df1304e9632f170d8" Mar 09 19:12:50 crc kubenswrapper[4821]: I0309 19:12:50.952385 4821 scope.go:117] "RemoveContainer" containerID="2615dd429b64c05cd27c554829b69270aad5e21e5cd8e21293e1f3f49b91425a" Mar 09 19:12:51 crc kubenswrapper[4821]: I0309 19:12:51.020796 4821 scope.go:117] "RemoveContainer" 
containerID="8d262c90630cb426e4e4b2bcc086635e3722ff8b122600c4ddcc380828663561" Mar 09 19:12:51 crc kubenswrapper[4821]: I0309 19:12:51.044528 4821 scope.go:117] "RemoveContainer" containerID="4e180a85cf5de8d6be76eb227c929cd0a8d2a2ffca0f8964dd741d5e4076eabc" Mar 09 19:12:58 crc kubenswrapper[4821]: I0309 19:12:58.544391 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-cqfq5_85b873a4-96da-407a-b4af-30ba3aa97519/prometheus-operator/0.log" Mar 09 19:12:58 crc kubenswrapper[4821]: I0309 19:12:58.703232 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-559887c586-44lfv_c294d09f-af0a-400e-90ea-1097080fb096/prometheus-operator-admission-webhook/0.log" Mar 09 19:12:58 crc kubenswrapper[4821]: I0309 19:12:58.764234 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-559887c586-xfmsd_80d114e5-b1d1-496c-a0c1-3eeb8d2f67c8/prometheus-operator-admission-webhook/0.log" Mar 09 19:12:58 crc kubenswrapper[4821]: I0309 19:12:58.902408 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-j8xx6_52b04c6b-da35-4f2a-a5f2-06370a59da78/operator/0.log" Mar 09 19:12:58 crc kubenswrapper[4821]: I0309 19:12:58.926793 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-hkkkk_ce0d9e34-5f6c-4503-95a0-6a127c905bee/observability-ui-dashboards/0.log" Mar 09 19:12:59 crc kubenswrapper[4821]: I0309 19:12:59.103278 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-p679h_94baa4ca-adf1-461f-a309-a1639aafd708/perses-operator/0.log" Mar 09 19:13:15 crc kubenswrapper[4821]: I0309 19:13:15.577666 4821 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-86ddb6bd46-jfl8k_b43e0c69-2c5a-4562-8ab9-d4a8d6e5404b/kube-rbac-proxy/0.log" Mar 09 19:13:15 crc kubenswrapper[4821]: I0309 19:13:15.683666 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-jfl8k_b43e0c69-2c5a-4562-8ab9-d4a8d6e5404b/controller/0.log" Mar 09 19:13:15 crc kubenswrapper[4821]: I0309 19:13:15.826248 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pxc5m_ea8f9c80-04cb-455e-a2fc-2ed5b028a79c/cp-frr-files/0.log" Mar 09 19:13:16 crc kubenswrapper[4821]: I0309 19:13:16.044909 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pxc5m_ea8f9c80-04cb-455e-a2fc-2ed5b028a79c/cp-metrics/0.log" Mar 09 19:13:16 crc kubenswrapper[4821]: I0309 19:13:16.048887 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pxc5m_ea8f9c80-04cb-455e-a2fc-2ed5b028a79c/cp-reloader/0.log" Mar 09 19:13:16 crc kubenswrapper[4821]: I0309 19:13:16.092191 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pxc5m_ea8f9c80-04cb-455e-a2fc-2ed5b028a79c/cp-frr-files/0.log" Mar 09 19:13:16 crc kubenswrapper[4821]: I0309 19:13:16.092390 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pxc5m_ea8f9c80-04cb-455e-a2fc-2ed5b028a79c/cp-reloader/0.log" Mar 09 19:13:16 crc kubenswrapper[4821]: I0309 19:13:16.266579 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pxc5m_ea8f9c80-04cb-455e-a2fc-2ed5b028a79c/cp-frr-files/0.log" Mar 09 19:13:16 crc kubenswrapper[4821]: I0309 19:13:16.277608 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pxc5m_ea8f9c80-04cb-455e-a2fc-2ed5b028a79c/cp-reloader/0.log" Mar 09 19:13:16 crc kubenswrapper[4821]: I0309 19:13:16.325987 4821 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-pxc5m_ea8f9c80-04cb-455e-a2fc-2ed5b028a79c/cp-metrics/0.log" Mar 09 19:13:16 crc kubenswrapper[4821]: I0309 19:13:16.328848 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pxc5m_ea8f9c80-04cb-455e-a2fc-2ed5b028a79c/cp-metrics/0.log" Mar 09 19:13:16 crc kubenswrapper[4821]: I0309 19:13:16.511142 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pxc5m_ea8f9c80-04cb-455e-a2fc-2ed5b028a79c/cp-frr-files/0.log" Mar 09 19:13:16 crc kubenswrapper[4821]: I0309 19:13:16.566165 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pxc5m_ea8f9c80-04cb-455e-a2fc-2ed5b028a79c/cp-reloader/0.log" Mar 09 19:13:16 crc kubenswrapper[4821]: I0309 19:13:16.581772 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pxc5m_ea8f9c80-04cb-455e-a2fc-2ed5b028a79c/cp-metrics/0.log" Mar 09 19:13:16 crc kubenswrapper[4821]: I0309 19:13:16.584556 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pxc5m_ea8f9c80-04cb-455e-a2fc-2ed5b028a79c/controller/0.log" Mar 09 19:13:16 crc kubenswrapper[4821]: I0309 19:13:16.825941 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pxc5m_ea8f9c80-04cb-455e-a2fc-2ed5b028a79c/frr-metrics/0.log" Mar 09 19:13:16 crc kubenswrapper[4821]: I0309 19:13:16.826533 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pxc5m_ea8f9c80-04cb-455e-a2fc-2ed5b028a79c/kube-rbac-proxy-frr/0.log" Mar 09 19:13:16 crc kubenswrapper[4821]: I0309 19:13:16.908006 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pxc5m_ea8f9c80-04cb-455e-a2fc-2ed5b028a79c/kube-rbac-proxy/0.log" Mar 09 19:13:17 crc kubenswrapper[4821]: I0309 19:13:17.046997 4821 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-pxc5m_ea8f9c80-04cb-455e-a2fc-2ed5b028a79c/reloader/0.log" Mar 09 19:13:17 crc kubenswrapper[4821]: I0309 19:13:17.174600 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7f989f654f-7q47l_64621269-3b51-4cc2-89c8-0fd5ad067fd7/frr-k8s-webhook-server/0.log" Mar 09 19:13:17 crc kubenswrapper[4821]: I0309 19:13:17.366307 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-858bc4f469-wp8gj_ece940b4-1c75-4a27-af76-1d0987599334/manager/0.log" Mar 09 19:13:17 crc kubenswrapper[4821]: I0309 19:13:17.595051 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5f89859c4b-c6xkg_57e177f6-8afa-42f4-ac0c-2b43f01cf06a/webhook-server/0.log" Mar 09 19:13:17 crc kubenswrapper[4821]: I0309 19:13:17.621336 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5sdkw_cd899ccb-4a21-4e1f-93a3-39451435e6f8/kube-rbac-proxy/0.log" Mar 09 19:13:18 crc kubenswrapper[4821]: I0309 19:13:18.091583 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-5sdkw_cd899ccb-4a21-4e1f-93a3-39451435e6f8/speaker/0.log" Mar 09 19:13:18 crc kubenswrapper[4821]: I0309 19:13:18.237434 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pxc5m_ea8f9c80-04cb-455e-a2fc-2ed5b028a79c/frr/0.log" Mar 09 19:13:30 crc kubenswrapper[4821]: I0309 19:13:30.038648 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-wcggh"] Mar 09 19:13:30 crc kubenswrapper[4821]: I0309 19:13:30.046980 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-wcggh"] Mar 09 19:13:31 crc kubenswrapper[4821]: I0309 19:13:31.561888 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fac1295f-5189-4137-8365-42fb46ca2803" 
path="/var/lib/kubelet/pods/fac1295f-5189-4137-8365-42fb46ca2803/volumes" Mar 09 19:13:43 crc kubenswrapper[4821]: I0309 19:13:43.294124 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_fa689f50-deca-4456-946b-edd730385d48/init-config-reloader/0.log" Mar 09 19:13:43 crc kubenswrapper[4821]: I0309 19:13:43.520701 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_fa689f50-deca-4456-946b-edd730385d48/init-config-reloader/0.log" Mar 09 19:13:43 crc kubenswrapper[4821]: I0309 19:13:43.546824 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_fa689f50-deca-4456-946b-edd730385d48/alertmanager/0.log" Mar 09 19:13:43 crc kubenswrapper[4821]: I0309 19:13:43.608716 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_fa689f50-deca-4456-946b-edd730385d48/config-reloader/0.log" Mar 09 19:13:43 crc kubenswrapper[4821]: I0309 19:13:43.744410 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_6926c17a-76e1-49b8-a9ff-079a205d3c6b/ceilometer-central-agent/0.log" Mar 09 19:13:43 crc kubenswrapper[4821]: I0309 19:13:43.748738 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_6926c17a-76e1-49b8-a9ff-079a205d3c6b/ceilometer-notification-agent/0.log" Mar 09 19:13:43 crc kubenswrapper[4821]: I0309 19:13:43.798078 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_6926c17a-76e1-49b8-a9ff-079a205d3c6b/proxy-httpd/0.log" Mar 09 19:13:43 crc kubenswrapper[4821]: I0309 19:13:43.966842 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_6926c17a-76e1-49b8-a9ff-079a205d3c6b/sg-core/0.log" Mar 09 19:13:44 crc kubenswrapper[4821]: I0309 19:13:44.027132 
4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_keystone-6d45c85556-w6k7b_eef0c4bd-2bde-490b-872a-eda5cac560eb/keystone-api/0.log" Mar 09 19:13:44 crc kubenswrapper[4821]: I0309 19:13:44.162767 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_keystone-cron-29551381-c5bp4_98d8cd55-a4bc-446d-a770-ed57e35aeccb/keystone-cron/0.log" Mar 09 19:13:44 crc kubenswrapper[4821]: I0309 19:13:44.440774 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_kube-state-metrics-0_c66df9ab-03fb-42fa-b3ef-9f3064523682/kube-state-metrics/0.log" Mar 09 19:13:44 crc kubenswrapper[4821]: I0309 19:13:44.706819 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstack-galera-0_e0cec899-aa83-4720-8f75-bc2fc5002a28/mysql-bootstrap/0.log" Mar 09 19:13:44 crc kubenswrapper[4821]: I0309 19:13:44.961220 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstack-galera-0_e0cec899-aa83-4720-8f75-bc2fc5002a28/mysql-bootstrap/0.log" Mar 09 19:13:45 crc kubenswrapper[4821]: I0309 19:13:45.019176 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstack-galera-0_e0cec899-aa83-4720-8f75-bc2fc5002a28/galera/0.log" Mar 09 19:13:45 crc kubenswrapper[4821]: I0309 19:13:45.235707 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstackclient_a388f45b-e428-4530-b5cf-71879e545f6e/openstackclient/0.log" Mar 09 19:13:45 crc kubenswrapper[4821]: I0309 19:13:45.457504 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f/init-config-reloader/0.log" Mar 09 19:13:45 crc kubenswrapper[4821]: I0309 19:13:45.621431 4821 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f/init-config-reloader/0.log" Mar 09 19:13:45 crc kubenswrapper[4821]: I0309 19:13:45.660276 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f/prometheus/0.log" Mar 09 19:13:45 crc kubenswrapper[4821]: I0309 19:13:45.674927 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f/config-reloader/0.log" Mar 09 19:13:45 crc kubenswrapper[4821]: I0309 19:13:45.937299 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-notifications-server-0_b4cf48ce-38c9-4dd4-b712-311a92dd29b6/setup-container/0.log" Mar 09 19:13:46 crc kubenswrapper[4821]: I0309 19:13:46.033096 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_cd8107e4-08e5-4aea-aaf9-3a6421c9dc0f/thanos-sidecar/0.log" Mar 09 19:13:46 crc kubenswrapper[4821]: I0309 19:13:46.321789 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-notifications-server-0_b4cf48ce-38c9-4dd4-b712-311a92dd29b6/setup-container/0.log" Mar 09 19:13:46 crc kubenswrapper[4821]: I0309 19:13:46.370722 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-notifications-server-0_b4cf48ce-38c9-4dd4-b712-311a92dd29b6/rabbitmq/0.log" Mar 09 19:13:46 crc kubenswrapper[4821]: I0309 19:13:46.524711 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-server-0_ace06b27-8092-4676-9bae-4df7c1044b98/setup-container/0.log" Mar 09 19:13:46 crc kubenswrapper[4821]: I0309 19:13:46.819450 4821 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_rabbitmq-server-0_ace06b27-8092-4676-9bae-4df7c1044b98/setup-container/0.log" Mar 09 19:13:46 crc kubenswrapper[4821]: I0309 19:13:46.914068 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-server-0_ace06b27-8092-4676-9bae-4df7c1044b98/rabbitmq/0.log" Mar 09 19:13:51 crc kubenswrapper[4821]: I0309 19:13:51.168672 4821 scope.go:117] "RemoveContainer" containerID="36b96214bc8c4009f78f2d497eb2dc2c85e0e456b77a86e7700d8e1f03871af1" Mar 09 19:13:51 crc kubenswrapper[4821]: I0309 19:13:51.205620 4821 scope.go:117] "RemoveContainer" containerID="456f0ed015285e04c1ff22745fe7aaaa47e3c304dd4925e2d12dfb1eead04364" Mar 09 19:13:51 crc kubenswrapper[4821]: I0309 19:13:51.228501 4821 scope.go:117] "RemoveContainer" containerID="9deefe21c8b2daf93d4a916dc2ba636c9627a32db28741fe4aae640707043b60" Mar 09 19:13:51 crc kubenswrapper[4821]: I0309 19:13:51.258282 4821 scope.go:117] "RemoveContainer" containerID="d4056539974d5c9e0205415939b70cd305fa2b5d71776ca9cedac0d1650e5b2f" Mar 09 19:13:51 crc kubenswrapper[4821]: I0309 19:13:51.307262 4821 scope.go:117] "RemoveContainer" containerID="ce0b75c36ae2656c4b6035393b9d51fa35638e5e097c73e14d93b3f3dc81581d" Mar 09 19:13:51 crc kubenswrapper[4821]: I0309 19:13:51.333267 4821 scope.go:117] "RemoveContainer" containerID="68a4dbfad2b24d0a3afbcf0c2604cf90bb491673543e78df185b34e511252db6" Mar 09 19:13:51 crc kubenswrapper[4821]: I0309 19:13:51.370764 4821 scope.go:117] "RemoveContainer" containerID="0b30bfbff353206030a133a07eba1c4c2a0e3f7a1e3d1760e708e77247e9906b" Mar 09 19:13:51 crc kubenswrapper[4821]: I0309 19:13:51.386958 4821 scope.go:117] "RemoveContainer" containerID="e58da97db5f5517ac5c5fc5c87d375a3a6ac05301c51a187b1469735bab08300" Mar 09 19:13:54 crc kubenswrapper[4821]: I0309 19:13:54.800336 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_memcached-0_0277ce9b-9597-40cc-9339-51cf5dc9d98d/memcached/0.log" Mar 09 
19:13:59 crc kubenswrapper[4821]: I0309 19:13:59.913707 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 09 19:13:59 crc kubenswrapper[4821]: I0309 19:13:59.914156 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 09 19:14:00 crc kubenswrapper[4821]: I0309 19:14:00.146766 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29551394-4fbs7"] Mar 09 19:14:00 crc kubenswrapper[4821]: E0309 19:14:00.147197 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e72b532f-299e-43f4-8118-dc4e7d820c85" containerName="extract-content" Mar 09 19:14:00 crc kubenswrapper[4821]: I0309 19:14:00.147218 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="e72b532f-299e-43f4-8118-dc4e7d820c85" containerName="extract-content" Mar 09 19:14:00 crc kubenswrapper[4821]: E0309 19:14:00.147234 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e72b532f-299e-43f4-8118-dc4e7d820c85" containerName="extract-utilities" Mar 09 19:14:00 crc kubenswrapper[4821]: I0309 19:14:00.147242 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="e72b532f-299e-43f4-8118-dc4e7d820c85" containerName="extract-utilities" Mar 09 19:14:00 crc kubenswrapper[4821]: E0309 19:14:00.147259 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e72b532f-299e-43f4-8118-dc4e7d820c85" containerName="registry-server" Mar 09 19:14:00 crc kubenswrapper[4821]: I0309 19:14:00.147266 4821 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="e72b532f-299e-43f4-8118-dc4e7d820c85" containerName="registry-server" Mar 09 19:14:00 crc kubenswrapper[4821]: I0309 19:14:00.147457 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="e72b532f-299e-43f4-8118-dc4e7d820c85" containerName="registry-server" Mar 09 19:14:00 crc kubenswrapper[4821]: I0309 19:14:00.148121 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551394-4fbs7" Mar 09 19:14:00 crc kubenswrapper[4821]: I0309 19:14:00.153062 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 09 19:14:00 crc kubenswrapper[4821]: I0309 19:14:00.157683 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 09 19:14:00 crc kubenswrapper[4821]: I0309 19:14:00.159860 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-mxq7c" Mar 09 19:14:00 crc kubenswrapper[4821]: I0309 19:14:00.172160 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551394-4fbs7"] Mar 09 19:14:00 crc kubenswrapper[4821]: I0309 19:14:00.254850 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl2kg\" (UniqueName: \"kubernetes.io/projected/dfd10209-072d-4352-9d8b-72b620a1d174-kube-api-access-hl2kg\") pod \"auto-csr-approver-29551394-4fbs7\" (UID: \"dfd10209-072d-4352-9d8b-72b620a1d174\") " pod="openshift-infra/auto-csr-approver-29551394-4fbs7" Mar 09 19:14:00 crc kubenswrapper[4821]: I0309 19:14:00.356362 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl2kg\" (UniqueName: \"kubernetes.io/projected/dfd10209-072d-4352-9d8b-72b620a1d174-kube-api-access-hl2kg\") pod \"auto-csr-approver-29551394-4fbs7\" (UID: 
\"dfd10209-072d-4352-9d8b-72b620a1d174\") " pod="openshift-infra/auto-csr-approver-29551394-4fbs7" Mar 09 19:14:00 crc kubenswrapper[4821]: I0309 19:14:00.383762 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl2kg\" (UniqueName: \"kubernetes.io/projected/dfd10209-072d-4352-9d8b-72b620a1d174-kube-api-access-hl2kg\") pod \"auto-csr-approver-29551394-4fbs7\" (UID: \"dfd10209-072d-4352-9d8b-72b620a1d174\") " pod="openshift-infra/auto-csr-approver-29551394-4fbs7" Mar 09 19:14:00 crc kubenswrapper[4821]: I0309 19:14:00.470192 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551394-4fbs7" Mar 09 19:14:00 crc kubenswrapper[4821]: I0309 19:14:00.933427 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551394-4fbs7"] Mar 09 19:14:01 crc kubenswrapper[4821]: I0309 19:14:01.230443 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551394-4fbs7" event={"ID":"dfd10209-072d-4352-9d8b-72b620a1d174","Type":"ContainerStarted","Data":"a009484b373d61440157829ce969daf1cb112492fb178b76a8f57db3362b3dba"} Mar 09 19:14:03 crc kubenswrapper[4821]: I0309 19:14:03.256898 4821 generic.go:334] "Generic (PLEG): container finished" podID="dfd10209-072d-4352-9d8b-72b620a1d174" containerID="bb305d9ce3bb0e320a85f4121d84962cf18bcaf7fc5ae9285205755a47d039bb" exitCode=0 Mar 09 19:14:03 crc kubenswrapper[4821]: I0309 19:14:03.256998 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551394-4fbs7" event={"ID":"dfd10209-072d-4352-9d8b-72b620a1d174","Type":"ContainerDied","Data":"bb305d9ce3bb0e320a85f4121d84962cf18bcaf7fc5ae9285205755a47d039bb"} Mar 09 19:14:04 crc kubenswrapper[4821]: I0309 19:14:04.556464 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29551394-4fbs7" Mar 09 19:14:04 crc kubenswrapper[4821]: I0309 19:14:04.625389 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hl2kg\" (UniqueName: \"kubernetes.io/projected/dfd10209-072d-4352-9d8b-72b620a1d174-kube-api-access-hl2kg\") pod \"dfd10209-072d-4352-9d8b-72b620a1d174\" (UID: \"dfd10209-072d-4352-9d8b-72b620a1d174\") " Mar 09 19:14:04 crc kubenswrapper[4821]: I0309 19:14:04.630706 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfd10209-072d-4352-9d8b-72b620a1d174-kube-api-access-hl2kg" (OuterVolumeSpecName: "kube-api-access-hl2kg") pod "dfd10209-072d-4352-9d8b-72b620a1d174" (UID: "dfd10209-072d-4352-9d8b-72b620a1d174"). InnerVolumeSpecName "kube-api-access-hl2kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 19:14:04 crc kubenswrapper[4821]: I0309 19:14:04.729016 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hl2kg\" (UniqueName: \"kubernetes.io/projected/dfd10209-072d-4352-9d8b-72b620a1d174-kube-api-access-hl2kg\") on node \"crc\" DevicePath \"\"" Mar 09 19:14:05 crc kubenswrapper[4821]: I0309 19:14:05.272651 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551394-4fbs7" event={"ID":"dfd10209-072d-4352-9d8b-72b620a1d174","Type":"ContainerDied","Data":"a009484b373d61440157829ce969daf1cb112492fb178b76a8f57db3362b3dba"} Mar 09 19:14:05 crc kubenswrapper[4821]: I0309 19:14:05.272691 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a009484b373d61440157829ce969daf1cb112492fb178b76a8f57db3362b3dba" Mar 09 19:14:05 crc kubenswrapper[4821]: I0309 19:14:05.272746 4821 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29551394-4fbs7" Mar 09 19:14:05 crc kubenswrapper[4821]: E0309 19:14:05.343564 4821 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddfd10209_072d_4352_9d8b_72b620a1d174.slice/crio-a009484b373d61440157829ce969daf1cb112492fb178b76a8f57db3362b3dba\": RecentStats: unable to find data in memory cache]" Mar 09 19:14:05 crc kubenswrapper[4821]: I0309 19:14:05.633482 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29551388-xfwcw"] Mar 09 19:14:05 crc kubenswrapper[4821]: I0309 19:14:05.641855 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29551388-xfwcw"] Mar 09 19:14:06 crc kubenswrapper[4821]: I0309 19:14:06.236356 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq_6dbcce1b-4861-49b4-aed4-aaa992fe1a79/util/0.log" Mar 09 19:14:06 crc kubenswrapper[4821]: I0309 19:14:06.527820 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq_6dbcce1b-4861-49b4-aed4-aaa992fe1a79/pull/0.log" Mar 09 19:14:06 crc kubenswrapper[4821]: I0309 19:14:06.531644 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq_6dbcce1b-4861-49b4-aed4-aaa992fe1a79/pull/0.log" Mar 09 19:14:06 crc kubenswrapper[4821]: I0309 19:14:06.567085 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq_6dbcce1b-4861-49b4-aed4-aaa992fe1a79/util/0.log" Mar 09 19:14:06 crc kubenswrapper[4821]: I0309 19:14:06.744345 4821 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq_6dbcce1b-4861-49b4-aed4-aaa992fe1a79/pull/0.log" Mar 09 19:14:06 crc kubenswrapper[4821]: I0309 19:14:06.759141 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq_6dbcce1b-4861-49b4-aed4-aaa992fe1a79/util/0.log" Mar 09 19:14:06 crc kubenswrapper[4821]: I0309 19:14:06.820603 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82d7bqq_6dbcce1b-4861-49b4-aed4-aaa992fe1a79/extract/0.log" Mar 09 19:14:06 crc kubenswrapper[4821]: I0309 19:14:06.931972 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4_332c9a2e-4daa-4bc4-8020-1938abeccb55/util/0.log" Mar 09 19:14:07 crc kubenswrapper[4821]: I0309 19:14:07.171627 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4_332c9a2e-4daa-4bc4-8020-1938abeccb55/util/0.log" Mar 09 19:14:07 crc kubenswrapper[4821]: I0309 19:14:07.194543 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4_332c9a2e-4daa-4bc4-8020-1938abeccb55/pull/0.log" Mar 09 19:14:07 crc kubenswrapper[4821]: I0309 19:14:07.258904 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4_332c9a2e-4daa-4bc4-8020-1938abeccb55/pull/0.log" Mar 09 19:14:07 crc kubenswrapper[4821]: I0309 19:14:07.416070 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4_332c9a2e-4daa-4bc4-8020-1938abeccb55/util/0.log" Mar 09 
19:14:07 crc kubenswrapper[4821]: I0309 19:14:07.420846 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4_332c9a2e-4daa-4bc4-8020-1938abeccb55/extract/0.log" Mar 09 19:14:07 crc kubenswrapper[4821]: I0309 19:14:07.424353 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5rfnx4_332c9a2e-4daa-4bc4-8020-1938abeccb55/pull/0.log" Mar 09 19:14:07 crc kubenswrapper[4821]: I0309 19:14:07.567027 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9adefedb-bb07-4049-98c1-0e2eb6165f92" path="/var/lib/kubelet/pods/9adefedb-bb07-4049-98c1-0e2eb6165f92/volumes" Mar 09 19:14:07 crc kubenswrapper[4821]: I0309 19:14:07.618672 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6_9a7665a2-307a-4f7f-939a-b93afc455415/util/0.log" Mar 09 19:14:07 crc kubenswrapper[4821]: I0309 19:14:07.787177 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6_9a7665a2-307a-4f7f-939a-b93afc455415/util/0.log" Mar 09 19:14:07 crc kubenswrapper[4821]: I0309 19:14:07.809899 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6_9a7665a2-307a-4f7f-939a-b93afc455415/pull/0.log" Mar 09 19:14:07 crc kubenswrapper[4821]: I0309 19:14:07.843000 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6_9a7665a2-307a-4f7f-939a-b93afc455415/pull/0.log" Mar 09 19:14:07 crc kubenswrapper[4821]: I0309 19:14:07.988235 4821 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6_9a7665a2-307a-4f7f-939a-b93afc455415/util/0.log" Mar 09 19:14:08 crc kubenswrapper[4821]: I0309 19:14:08.000150 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6_9a7665a2-307a-4f7f-939a-b93afc455415/extract/0.log" Mar 09 19:14:08 crc kubenswrapper[4821]: I0309 19:14:08.023564 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087dlv6_9a7665a2-307a-4f7f-939a-b93afc455415/pull/0.log" Mar 09 19:14:08 crc kubenswrapper[4821]: I0309 19:14:08.143588 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vhn27_df2155e5-7524-47f7-8c00-80c2ab292588/extract-utilities/0.log" Mar 09 19:14:08 crc kubenswrapper[4821]: I0309 19:14:08.314635 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vhn27_df2155e5-7524-47f7-8c00-80c2ab292588/extract-utilities/0.log" Mar 09 19:14:08 crc kubenswrapper[4821]: I0309 19:14:08.331801 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vhn27_df2155e5-7524-47f7-8c00-80c2ab292588/extract-content/0.log" Mar 09 19:14:08 crc kubenswrapper[4821]: I0309 19:14:08.349969 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vhn27_df2155e5-7524-47f7-8c00-80c2ab292588/extract-content/0.log" Mar 09 19:14:08 crc kubenswrapper[4821]: I0309 19:14:08.519720 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vhn27_df2155e5-7524-47f7-8c00-80c2ab292588/extract-content/0.log" Mar 09 19:14:08 crc kubenswrapper[4821]: I0309 19:14:08.522104 4821 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-vhn27_df2155e5-7524-47f7-8c00-80c2ab292588/extract-utilities/0.log" Mar 09 19:14:08 crc kubenswrapper[4821]: I0309 19:14:08.698791 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-vhn27_df2155e5-7524-47f7-8c00-80c2ab292588/registry-server/0.log" Mar 09 19:14:08 crc kubenswrapper[4821]: I0309 19:14:08.756811 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-29tmk_011ab61a-9a65-4112-8ab5-149d78479cc4/extract-utilities/0.log" Mar 09 19:14:09 crc kubenswrapper[4821]: I0309 19:14:09.099849 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-29tmk_011ab61a-9a65-4112-8ab5-149d78479cc4/extract-content/0.log" Mar 09 19:14:09 crc kubenswrapper[4821]: I0309 19:14:09.157848 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-29tmk_011ab61a-9a65-4112-8ab5-149d78479cc4/extract-content/0.log" Mar 09 19:14:09 crc kubenswrapper[4821]: I0309 19:14:09.161960 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-29tmk_011ab61a-9a65-4112-8ab5-149d78479cc4/extract-utilities/0.log" Mar 09 19:14:09 crc kubenswrapper[4821]: I0309 19:14:09.352556 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-29tmk_011ab61a-9a65-4112-8ab5-149d78479cc4/extract-utilities/0.log" Mar 09 19:14:09 crc kubenswrapper[4821]: I0309 19:14:09.390827 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-29tmk_011ab61a-9a65-4112-8ab5-149d78479cc4/extract-content/0.log" Mar 09 19:14:09 crc kubenswrapper[4821]: I0309 19:14:09.673002 4821 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j_7112cff8-f71e-4537-853f-155cfd48f5b6/util/0.log" Mar 09 19:14:09 crc kubenswrapper[4821]: I0309 19:14:09.929990 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-29tmk_011ab61a-9a65-4112-8ab5-149d78479cc4/registry-server/0.log" Mar 09 19:14:09 crc kubenswrapper[4821]: I0309 19:14:09.942032 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j_7112cff8-f71e-4537-853f-155cfd48f5b6/util/0.log" Mar 09 19:14:09 crc kubenswrapper[4821]: I0309 19:14:09.962686 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j_7112cff8-f71e-4537-853f-155cfd48f5b6/pull/0.log" Mar 09 19:14:10 crc kubenswrapper[4821]: I0309 19:14:10.082997 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j_7112cff8-f71e-4537-853f-155cfd48f5b6/pull/0.log" Mar 09 19:14:10 crc kubenswrapper[4821]: I0309 19:14:10.293091 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j_7112cff8-f71e-4537-853f-155cfd48f5b6/pull/0.log" Mar 09 19:14:10 crc kubenswrapper[4821]: I0309 19:14:10.302110 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j_7112cff8-f71e-4537-853f-155cfd48f5b6/util/0.log" Mar 09 19:14:10 crc kubenswrapper[4821]: I0309 19:14:10.321747 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4b575j_7112cff8-f71e-4537-853f-155cfd48f5b6/extract/0.log" Mar 09 19:14:10 crc 
kubenswrapper[4821]: I0309 19:14:10.528208 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-556c4_872fb4be-c421-4274-8646-56e708f8c698/marketplace-operator/0.log" Mar 09 19:14:10 crc kubenswrapper[4821]: I0309 19:14:10.536305 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pchlh_6a1328a9-ebc5-4976-8ed0-45de86204b20/extract-utilities/0.log" Mar 09 19:14:10 crc kubenswrapper[4821]: I0309 19:14:10.680112 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pchlh_6a1328a9-ebc5-4976-8ed0-45de86204b20/extract-utilities/0.log" Mar 09 19:14:10 crc kubenswrapper[4821]: I0309 19:14:10.684152 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pchlh_6a1328a9-ebc5-4976-8ed0-45de86204b20/extract-content/0.log" Mar 09 19:14:10 crc kubenswrapper[4821]: I0309 19:14:10.724915 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pchlh_6a1328a9-ebc5-4976-8ed0-45de86204b20/extract-content/0.log" Mar 09 19:14:10 crc kubenswrapper[4821]: I0309 19:14:10.920634 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pchlh_6a1328a9-ebc5-4976-8ed0-45de86204b20/extract-utilities/0.log" Mar 09 19:14:10 crc kubenswrapper[4821]: I0309 19:14:10.922423 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pchlh_6a1328a9-ebc5-4976-8ed0-45de86204b20/extract-content/0.log" Mar 09 19:14:10 crc kubenswrapper[4821]: I0309 19:14:10.937733 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tc2f5_f077b409-1e21-4fb0-a973-8c57822d2b94/extract-utilities/0.log" Mar 09 19:14:11 crc kubenswrapper[4821]: I0309 19:14:11.022045 4821 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-pchlh_6a1328a9-ebc5-4976-8ed0-45de86204b20/registry-server/0.log" Mar 09 19:14:11 crc kubenswrapper[4821]: I0309 19:14:11.154477 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tc2f5_f077b409-1e21-4fb0-a973-8c57822d2b94/extract-content/0.log" Mar 09 19:14:11 crc kubenswrapper[4821]: I0309 19:14:11.157849 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tc2f5_f077b409-1e21-4fb0-a973-8c57822d2b94/extract-utilities/0.log" Mar 09 19:14:11 crc kubenswrapper[4821]: I0309 19:14:11.166596 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tc2f5_f077b409-1e21-4fb0-a973-8c57822d2b94/extract-content/0.log" Mar 09 19:14:11 crc kubenswrapper[4821]: I0309 19:14:11.319213 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tc2f5_f077b409-1e21-4fb0-a973-8c57822d2b94/extract-utilities/0.log" Mar 09 19:14:11 crc kubenswrapper[4821]: I0309 19:14:11.336293 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tc2f5_f077b409-1e21-4fb0-a973-8c57822d2b94/extract-content/0.log" Mar 09 19:14:11 crc kubenswrapper[4821]: I0309 19:14:11.905858 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tc2f5_f077b409-1e21-4fb0-a973-8c57822d2b94/registry-server/0.log" Mar 09 19:14:26 crc kubenswrapper[4821]: I0309 19:14:26.534910 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-559887c586-xfmsd_80d114e5-b1d1-496c-a0c1-3eeb8d2f67c8/prometheus-operator-admission-webhook/0.log" Mar 09 19:14:26 crc kubenswrapper[4821]: I0309 19:14:26.558062 4821 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-cqfq5_85b873a4-96da-407a-b4af-30ba3aa97519/prometheus-operator/0.log" Mar 09 19:14:26 crc kubenswrapper[4821]: I0309 19:14:26.610149 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-559887c586-44lfv_c294d09f-af0a-400e-90ea-1097080fb096/prometheus-operator-admission-webhook/0.log" Mar 09 19:14:26 crc kubenswrapper[4821]: I0309 19:14:26.764594 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-hkkkk_ce0d9e34-5f6c-4503-95a0-6a127c905bee/observability-ui-dashboards/0.log" Mar 09 19:14:26 crc kubenswrapper[4821]: I0309 19:14:26.846531 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-j8xx6_52b04c6b-da35-4f2a-a5f2-06370a59da78/operator/0.log" Mar 09 19:14:26 crc kubenswrapper[4821]: I0309 19:14:26.863783 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-p679h_94baa4ca-adf1-461f-a309-a1639aafd708/perses-operator/0.log" Mar 09 19:14:29 crc kubenswrapper[4821]: I0309 19:14:29.913771 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 09 19:14:29 crc kubenswrapper[4821]: I0309 19:14:29.914108 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 09 19:14:51 crc kubenswrapper[4821]: I0309 19:14:51.512266 4821 
scope.go:117] "RemoveContainer" containerID="9c60eeb0f071ed90da162e6b986e295458fc758379a0eb53ba06953b189b837a" Mar 09 19:14:51 crc kubenswrapper[4821]: I0309 19:14:51.548757 4821 scope.go:117] "RemoveContainer" containerID="bf321c9005f78e8b84c70deecd1c94e77d1f997e3832cb12e8143ee1f637a0d6" Mar 09 19:14:51 crc kubenswrapper[4821]: I0309 19:14:51.574542 4821 scope.go:117] "RemoveContainer" containerID="2b89e734430035b445c70ff135c9d41caff1317d2d9fb07bc3217b2e0d65a793" Mar 09 19:14:51 crc kubenswrapper[4821]: I0309 19:14:51.628618 4821 scope.go:117] "RemoveContainer" containerID="1ba729b63d04752442c6cc1d51e58a9e545957be3004b12637bd0680a024b545" Mar 09 19:14:51 crc kubenswrapper[4821]: I0309 19:14:51.650248 4821 scope.go:117] "RemoveContainer" containerID="ec44a816894a8b59a9c31982e0022953d74f50ae9c8a1fff04559a3fe0e4a4e0" Mar 09 19:14:51 crc kubenswrapper[4821]: I0309 19:14:51.706369 4821 scope.go:117] "RemoveContainer" containerID="83b9f0c14ed16b9c35b651e6bbf38557b8ceb271448b546f3650d6ab9e5d3aab" Mar 09 19:14:59 crc kubenswrapper[4821]: I0309 19:14:59.914296 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 09 19:14:59 crc kubenswrapper[4821]: I0309 19:14:59.914930 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 09 19:14:59 crc kubenswrapper[4821]: I0309 19:14:59.914996 4821 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" Mar 09 19:14:59 crc 
kubenswrapper[4821]: I0309 19:14:59.915961 4821 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9b39979df97b20549ac7c425f2bb268de75776162f0624b872b91574e85e8541"} pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 09 19:14:59 crc kubenswrapper[4821]: I0309 19:14:59.916072 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" containerID="cri-o://9b39979df97b20549ac7c425f2bb268de75776162f0624b872b91574e85e8541" gracePeriod=600
Mar 09 19:15:00 crc kubenswrapper[4821]: I0309 19:15:00.175312 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29551395-rr5fb"]
Mar 09 19:15:00 crc kubenswrapper[4821]: E0309 19:15:00.178594 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfd10209-072d-4352-9d8b-72b620a1d174" containerName="oc"
Mar 09 19:15:00 crc kubenswrapper[4821]: I0309 19:15:00.178641 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfd10209-072d-4352-9d8b-72b620a1d174" containerName="oc"
Mar 09 19:15:00 crc kubenswrapper[4821]: I0309 19:15:00.178938 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfd10209-072d-4352-9d8b-72b620a1d174" containerName="oc"
Mar 09 19:15:00 crc kubenswrapper[4821]: I0309 19:15:00.179851 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29551395-rr5fb"
Mar 09 19:15:00 crc kubenswrapper[4821]: I0309 19:15:00.183159 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Mar 09 19:15:00 crc kubenswrapper[4821]: I0309 19:15:00.183491 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Mar 09 19:15:00 crc kubenswrapper[4821]: I0309 19:15:00.187464 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29551395-rr5fb"]
Mar 09 19:15:00 crc kubenswrapper[4821]: I0309 19:15:00.240128 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3-config-volume\") pod \"collect-profiles-29551395-rr5fb\" (UID: \"fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551395-rr5fb"
Mar 09 19:15:00 crc kubenswrapper[4821]: I0309 19:15:00.240191 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm2c5\" (UniqueName: \"kubernetes.io/projected/fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3-kube-api-access-cm2c5\") pod \"collect-profiles-29551395-rr5fb\" (UID: \"fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551395-rr5fb"
Mar 09 19:15:00 crc kubenswrapper[4821]: I0309 19:15:00.240368 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3-secret-volume\") pod \"collect-profiles-29551395-rr5fb\" (UID: \"fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551395-rr5fb"
Mar 09 19:15:00 crc kubenswrapper[4821]: I0309 19:15:00.342485 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3-config-volume\") pod \"collect-profiles-29551395-rr5fb\" (UID: \"fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551395-rr5fb"
Mar 09 19:15:00 crc kubenswrapper[4821]: I0309 19:15:00.342559 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm2c5\" (UniqueName: \"kubernetes.io/projected/fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3-kube-api-access-cm2c5\") pod \"collect-profiles-29551395-rr5fb\" (UID: \"fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551395-rr5fb"
Mar 09 19:15:00 crc kubenswrapper[4821]: I0309 19:15:00.342601 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3-secret-volume\") pod \"collect-profiles-29551395-rr5fb\" (UID: \"fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551395-rr5fb"
Mar 09 19:15:00 crc kubenswrapper[4821]: I0309 19:15:00.343952 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3-config-volume\") pod \"collect-profiles-29551395-rr5fb\" (UID: \"fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551395-rr5fb"
Mar 09 19:15:00 crc kubenswrapper[4821]: I0309 19:15:00.354945 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3-secret-volume\") pod \"collect-profiles-29551395-rr5fb\" (UID: \"fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551395-rr5fb"
Mar 09 19:15:00 crc kubenswrapper[4821]: I0309 19:15:00.359093 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm2c5\" (UniqueName: \"kubernetes.io/projected/fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3-kube-api-access-cm2c5\") pod \"collect-profiles-29551395-rr5fb\" (UID: \"fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29551395-rr5fb"
Mar 09 19:15:00 crc kubenswrapper[4821]: I0309 19:15:00.498163 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29551395-rr5fb"
Mar 09 19:15:00 crc kubenswrapper[4821]: I0309 19:15:00.804018 4821 generic.go:334] "Generic (PLEG): container finished" podID="3270571a-a484-4e66-8035-f43509b58add" containerID="9b39979df97b20549ac7c425f2bb268de75776162f0624b872b91574e85e8541" exitCode=0
Mar 09 19:15:00 crc kubenswrapper[4821]: I0309 19:15:00.804092 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" event={"ID":"3270571a-a484-4e66-8035-f43509b58add","Type":"ContainerDied","Data":"9b39979df97b20549ac7c425f2bb268de75776162f0624b872b91574e85e8541"}
Mar 09 19:15:00 crc kubenswrapper[4821]: I0309 19:15:00.804353 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" event={"ID":"3270571a-a484-4e66-8035-f43509b58add","Type":"ContainerStarted","Data":"884bf1f7f8bee99eb45ff74f1aa3abfc9510408af03ea832abf6bfe89095f2fa"}
Mar 09 19:15:00 crc kubenswrapper[4821]: I0309 19:15:00.804374 4821 scope.go:117] "RemoveContainer" containerID="2e97656a0c5c8ccdf249fb34ee3c3f4cd8501de8fced67c6294f9be90f6444d8"
Mar 09 19:15:01 crc kubenswrapper[4821]: W0309 19:15:01.079564 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc328dcf_ed65_4f2c_94ba_f4b8f5da59a3.slice/crio-db4f729d69c7bfbcb6c4af911201b1554a6f5364a79b6f2090c9b327583ac176 WatchSource:0}: Error finding container db4f729d69c7bfbcb6c4af911201b1554a6f5364a79b6f2090c9b327583ac176: Status 404 returned error can't find the container with id db4f729d69c7bfbcb6c4af911201b1554a6f5364a79b6f2090c9b327583ac176
Mar 09 19:15:01 crc kubenswrapper[4821]: I0309 19:15:01.079670 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29551395-rr5fb"]
Mar 09 19:15:01 crc kubenswrapper[4821]: I0309 19:15:01.819907 4821 generic.go:334] "Generic (PLEG): container finished" podID="fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3" containerID="00591cb4f0591350539aa648579b8b8db3307e13fb9d0dbd1601a3ffc5f60cde" exitCode=0
Mar 09 19:15:01 crc kubenswrapper[4821]: I0309 19:15:01.820534 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29551395-rr5fb" event={"ID":"fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3","Type":"ContainerDied","Data":"00591cb4f0591350539aa648579b8b8db3307e13fb9d0dbd1601a3ffc5f60cde"}
Mar 09 19:15:01 crc kubenswrapper[4821]: I0309 19:15:01.821176 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29551395-rr5fb" event={"ID":"fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3","Type":"ContainerStarted","Data":"db4f729d69c7bfbcb6c4af911201b1554a6f5364a79b6f2090c9b327583ac176"}
Mar 09 19:15:03 crc kubenswrapper[4821]: I0309 19:15:03.196720 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29551395-rr5fb"
Mar 09 19:15:03 crc kubenswrapper[4821]: I0309 19:15:03.302051 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm2c5\" (UniqueName: \"kubernetes.io/projected/fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3-kube-api-access-cm2c5\") pod \"fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3\" (UID: \"fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3\") "
Mar 09 19:15:03 crc kubenswrapper[4821]: I0309 19:15:03.302131 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3-config-volume\") pod \"fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3\" (UID: \"fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3\") "
Mar 09 19:15:03 crc kubenswrapper[4821]: I0309 19:15:03.302260 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3-secret-volume\") pod \"fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3\" (UID: \"fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3\") "
Mar 09 19:15:03 crc kubenswrapper[4821]: I0309 19:15:03.303576 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3-config-volume" (OuterVolumeSpecName: "config-volume") pod "fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3" (UID: "fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 19:15:03 crc kubenswrapper[4821]: I0309 19:15:03.308153 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3" (UID: "fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 19:15:03 crc kubenswrapper[4821]: I0309 19:15:03.323579 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3-kube-api-access-cm2c5" (OuterVolumeSpecName: "kube-api-access-cm2c5") pod "fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3" (UID: "fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3"). InnerVolumeSpecName "kube-api-access-cm2c5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:15:03 crc kubenswrapper[4821]: I0309 19:15:03.403826 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm2c5\" (UniqueName: \"kubernetes.io/projected/fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3-kube-api-access-cm2c5\") on node \"crc\" DevicePath \"\""
Mar 09 19:15:03 crc kubenswrapper[4821]: I0309 19:15:03.403895 4821 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3-config-volume\") on node \"crc\" DevicePath \"\""
Mar 09 19:15:03 crc kubenswrapper[4821]: I0309 19:15:03.403910 4821 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3-secret-volume\") on node \"crc\" DevicePath \"\""
Mar 09 19:15:03 crc kubenswrapper[4821]: I0309 19:15:03.838859 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29551395-rr5fb" event={"ID":"fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3","Type":"ContainerDied","Data":"db4f729d69c7bfbcb6c4af911201b1554a6f5364a79b6f2090c9b327583ac176"}
Mar 09 19:15:03 crc kubenswrapper[4821]: I0309 19:15:03.838907 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db4f729d69c7bfbcb6c4af911201b1554a6f5364a79b6f2090c9b327583ac176"
Mar 09 19:15:03 crc kubenswrapper[4821]: I0309 19:15:03.838968 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29551395-rr5fb"
Mar 09 19:15:04 crc kubenswrapper[4821]: I0309 19:15:04.309367 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29551350-l7kv7"]
Mar 09 19:15:04 crc kubenswrapper[4821]: I0309 19:15:04.312564 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29551350-l7kv7"]
Mar 09 19:15:05 crc kubenswrapper[4821]: I0309 19:15:05.595206 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b2d4a49-67a2-4a60-98ac-a10446691d92" path="/var/lib/kubelet/pods/9b2d4a49-67a2-4a60-98ac-a10446691d92/volumes"
Mar 09 19:15:37 crc kubenswrapper[4821]: I0309 19:15:37.144405 4821 generic.go:334] "Generic (PLEG): container finished" podID="480bea75-1d63-4af0-b2e2-b7bf9d804872" containerID="f9d733b77bf8acee79bc4b1a908dd4060a297f0f5542a6d73e7086d5af517ee0" exitCode=0
Mar 09 19:15:37 crc kubenswrapper[4821]: I0309 19:15:37.144488 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b86cm/must-gather-8fp4b" event={"ID":"480bea75-1d63-4af0-b2e2-b7bf9d804872","Type":"ContainerDied","Data":"f9d733b77bf8acee79bc4b1a908dd4060a297f0f5542a6d73e7086d5af517ee0"}
Mar 09 19:15:37 crc kubenswrapper[4821]: I0309 19:15:37.145766 4821 scope.go:117] "RemoveContainer" containerID="f9d733b77bf8acee79bc4b1a908dd4060a297f0f5542a6d73e7086d5af517ee0"
Mar 09 19:15:37 crc kubenswrapper[4821]: I0309 19:15:37.425237 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-b86cm_must-gather-8fp4b_480bea75-1d63-4af0-b2e2-b7bf9d804872/gather/0.log"
Mar 09 19:15:46 crc kubenswrapper[4821]: I0309 19:15:46.089828 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-b86cm/must-gather-8fp4b"]
Mar 09 19:15:46 crc kubenswrapper[4821]: I0309 19:15:46.090505 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-b86cm/must-gather-8fp4b" podUID="480bea75-1d63-4af0-b2e2-b7bf9d804872" containerName="copy" containerID="cri-o://fcd9e23f2ed8e2559f8009cdff8249083af2f76248cd4cdd2664a937d264d1b2" gracePeriod=2
Mar 09 19:15:46 crc kubenswrapper[4821]: I0309 19:15:46.099089 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-b86cm/must-gather-8fp4b"]
Mar 09 19:15:46 crc kubenswrapper[4821]: I0309 19:15:46.226768 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-b86cm_must-gather-8fp4b_480bea75-1d63-4af0-b2e2-b7bf9d804872/copy/0.log"
Mar 09 19:15:46 crc kubenswrapper[4821]: I0309 19:15:46.227355 4821 generic.go:334] "Generic (PLEG): container finished" podID="480bea75-1d63-4af0-b2e2-b7bf9d804872" containerID="fcd9e23f2ed8e2559f8009cdff8249083af2f76248cd4cdd2664a937d264d1b2" exitCode=143
Mar 09 19:15:46 crc kubenswrapper[4821]: I0309 19:15:46.669880 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-b86cm_must-gather-8fp4b_480bea75-1d63-4af0-b2e2-b7bf9d804872/copy/0.log"
Mar 09 19:15:46 crc kubenswrapper[4821]: I0309 19:15:46.675641 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-b86cm/must-gather-8fp4b"
Mar 09 19:15:46 crc kubenswrapper[4821]: I0309 19:15:46.742143 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/480bea75-1d63-4af0-b2e2-b7bf9d804872-must-gather-output\") pod \"480bea75-1d63-4af0-b2e2-b7bf9d804872\" (UID: \"480bea75-1d63-4af0-b2e2-b7bf9d804872\") "
Mar 09 19:15:46 crc kubenswrapper[4821]: I0309 19:15:46.742250 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6pvl\" (UniqueName: \"kubernetes.io/projected/480bea75-1d63-4af0-b2e2-b7bf9d804872-kube-api-access-s6pvl\") pod \"480bea75-1d63-4af0-b2e2-b7bf9d804872\" (UID: \"480bea75-1d63-4af0-b2e2-b7bf9d804872\") "
Mar 09 19:15:46 crc kubenswrapper[4821]: I0309 19:15:46.748720 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/480bea75-1d63-4af0-b2e2-b7bf9d804872-kube-api-access-s6pvl" (OuterVolumeSpecName: "kube-api-access-s6pvl") pod "480bea75-1d63-4af0-b2e2-b7bf9d804872" (UID: "480bea75-1d63-4af0-b2e2-b7bf9d804872"). InnerVolumeSpecName "kube-api-access-s6pvl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:15:46 crc kubenswrapper[4821]: I0309 19:15:46.843917 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6pvl\" (UniqueName: \"kubernetes.io/projected/480bea75-1d63-4af0-b2e2-b7bf9d804872-kube-api-access-s6pvl\") on node \"crc\" DevicePath \"\""
Mar 09 19:15:46 crc kubenswrapper[4821]: I0309 19:15:46.860099 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/480bea75-1d63-4af0-b2e2-b7bf9d804872-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "480bea75-1d63-4af0-b2e2-b7bf9d804872" (UID: "480bea75-1d63-4af0-b2e2-b7bf9d804872"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 19:15:46 crc kubenswrapper[4821]: I0309 19:15:46.945853 4821 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/480bea75-1d63-4af0-b2e2-b7bf9d804872-must-gather-output\") on node \"crc\" DevicePath \"\""
Mar 09 19:15:47 crc kubenswrapper[4821]: I0309 19:15:47.236181 4821 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-b86cm_must-gather-8fp4b_480bea75-1d63-4af0-b2e2-b7bf9d804872/copy/0.log"
Mar 09 19:15:47 crc kubenswrapper[4821]: I0309 19:15:47.236500 4821 scope.go:117] "RemoveContainer" containerID="fcd9e23f2ed8e2559f8009cdff8249083af2f76248cd4cdd2664a937d264d1b2"
Mar 09 19:15:47 crc kubenswrapper[4821]: I0309 19:15:47.236610 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-b86cm/must-gather-8fp4b"
Mar 09 19:15:47 crc kubenswrapper[4821]: I0309 19:15:47.271466 4821 scope.go:117] "RemoveContainer" containerID="f9d733b77bf8acee79bc4b1a908dd4060a297f0f5542a6d73e7086d5af517ee0"
Mar 09 19:15:47 crc kubenswrapper[4821]: I0309 19:15:47.563182 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="480bea75-1d63-4af0-b2e2-b7bf9d804872" path="/var/lib/kubelet/pods/480bea75-1d63-4af0-b2e2-b7bf9d804872/volumes"
Mar 09 19:15:51 crc kubenswrapper[4821]: I0309 19:15:51.812658 4821 scope.go:117] "RemoveContainer" containerID="1506c69808faa18de9959794c4113dfd395ca295870c5e0012b7c89297d8dca6"
Mar 09 19:15:51 crc kubenswrapper[4821]: I0309 19:15:51.839415 4821 scope.go:117] "RemoveContainer" containerID="974366854c8821bd17d233956e156092a187419448d3a66b88f2c7191a3baac3"
Mar 09 19:15:51 crc kubenswrapper[4821]: I0309 19:15:51.885708 4821 scope.go:117] "RemoveContainer" containerID="a4836d82ed6198a6ff42eeebdf325602696b7d790cfb876fb23cf281737e671f"
Mar 09 19:15:51 crc kubenswrapper[4821]: I0309 19:15:51.906301 4821 scope.go:117] "RemoveContainer" containerID="9ad8f8cb25a5e57320f9803f8aea0e8eb977b4fd42e5561645b66fa71c87249a"
Mar 09 19:15:51 crc kubenswrapper[4821]: I0309 19:15:51.948128 4821 scope.go:117] "RemoveContainer" containerID="3ec1f1f452fc850609ca2598615aead194489a6da0bed980239c36031f0aef18"
Mar 09 19:15:51 crc kubenswrapper[4821]: I0309 19:15:51.997548 4821 scope.go:117] "RemoveContainer" containerID="e1f905b108ca545f4199903d6c7592c89e0454d2f9d302ddfd0a777cdb3ddfea"
Mar 09 19:15:52 crc kubenswrapper[4821]: I0309 19:15:52.020838 4821 scope.go:117] "RemoveContainer" containerID="f97e4bf6575fc4f665b81de8ce8623d441931b3f3621ff33c7f62e93cf5ab791"
Mar 09 19:15:52 crc kubenswrapper[4821]: I0309 19:15:52.038610 4821 scope.go:117] "RemoveContainer" containerID="e5cd16a5e8a50d1d2db078e67d28c5512cf1065b3ca48491398f7f6c589d9591"
Mar 09 19:15:52 crc kubenswrapper[4821]: I0309 19:15:52.059433 4821 scope.go:117] "RemoveContainer" containerID="4691d310cbf12fc1c20da06742d904a91e5960476d6b8dccc42642d62e077073"
Mar 09 19:16:00 crc kubenswrapper[4821]: I0309 19:16:00.176431 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29551396-7hcgq"]
Mar 09 19:16:00 crc kubenswrapper[4821]: E0309 19:16:00.177571 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="480bea75-1d63-4af0-b2e2-b7bf9d804872" containerName="gather"
Mar 09 19:16:00 crc kubenswrapper[4821]: I0309 19:16:00.177594 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="480bea75-1d63-4af0-b2e2-b7bf9d804872" containerName="gather"
Mar 09 19:16:00 crc kubenswrapper[4821]: E0309 19:16:00.177621 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="480bea75-1d63-4af0-b2e2-b7bf9d804872" containerName="copy"
Mar 09 19:16:00 crc kubenswrapper[4821]: I0309 19:16:00.177631 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="480bea75-1d63-4af0-b2e2-b7bf9d804872" containerName="copy"
Mar 09 19:16:00 crc kubenswrapper[4821]: E0309 19:16:00.177658 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3" containerName="collect-profiles"
Mar 09 19:16:00 crc kubenswrapper[4821]: I0309 19:16:00.177669 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3" containerName="collect-profiles"
Mar 09 19:16:00 crc kubenswrapper[4821]: I0309 19:16:00.177922 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc328dcf-ed65-4f2c-94ba-f4b8f5da59a3" containerName="collect-profiles"
Mar 09 19:16:00 crc kubenswrapper[4821]: I0309 19:16:00.177954 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="480bea75-1d63-4af0-b2e2-b7bf9d804872" containerName="copy"
Mar 09 19:16:00 crc kubenswrapper[4821]: I0309 19:16:00.177970 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="480bea75-1d63-4af0-b2e2-b7bf9d804872" containerName="gather"
Mar 09 19:16:00 crc kubenswrapper[4821]: I0309 19:16:00.179221 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551396-7hcgq"
Mar 09 19:16:00 crc kubenswrapper[4821]: I0309 19:16:00.189671 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 09 19:16:00 crc kubenswrapper[4821]: I0309 19:16:00.189999 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 09 19:16:00 crc kubenswrapper[4821]: I0309 19:16:00.190255 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-mxq7c"
Mar 09 19:16:00 crc kubenswrapper[4821]: I0309 19:16:00.205876 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551396-7hcgq"]
Mar 09 19:16:00 crc kubenswrapper[4821]: I0309 19:16:00.283104 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92pxm\" (UniqueName: \"kubernetes.io/projected/d6fafc42-6e49-4d7a-b470-95bee3451a52-kube-api-access-92pxm\") pod \"auto-csr-approver-29551396-7hcgq\" (UID: \"d6fafc42-6e49-4d7a-b470-95bee3451a52\") " pod="openshift-infra/auto-csr-approver-29551396-7hcgq"
Mar 09 19:16:00 crc kubenswrapper[4821]: I0309 19:16:00.384936 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92pxm\" (UniqueName: \"kubernetes.io/projected/d6fafc42-6e49-4d7a-b470-95bee3451a52-kube-api-access-92pxm\") pod \"auto-csr-approver-29551396-7hcgq\" (UID: \"d6fafc42-6e49-4d7a-b470-95bee3451a52\") " pod="openshift-infra/auto-csr-approver-29551396-7hcgq"
Mar 09 19:16:00 crc kubenswrapper[4821]: I0309 19:16:00.408008 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92pxm\" (UniqueName: \"kubernetes.io/projected/d6fafc42-6e49-4d7a-b470-95bee3451a52-kube-api-access-92pxm\") pod \"auto-csr-approver-29551396-7hcgq\" (UID: \"d6fafc42-6e49-4d7a-b470-95bee3451a52\") " pod="openshift-infra/auto-csr-approver-29551396-7hcgq"
Mar 09 19:16:00 crc kubenswrapper[4821]: I0309 19:16:00.502802 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551396-7hcgq"
Mar 09 19:16:00 crc kubenswrapper[4821]: I0309 19:16:00.824355 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551396-7hcgq"]
Mar 09 19:16:01 crc kubenswrapper[4821]: I0309 19:16:01.366558 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551396-7hcgq" event={"ID":"d6fafc42-6e49-4d7a-b470-95bee3451a52","Type":"ContainerStarted","Data":"1bb437ff7370b53d0b75963067458f9653e611873c1cf4fd66f9bc85a4ab6484"}
Mar 09 19:16:02 crc kubenswrapper[4821]: I0309 19:16:02.373888 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551396-7hcgq" event={"ID":"d6fafc42-6e49-4d7a-b470-95bee3451a52","Type":"ContainerStarted","Data":"a80916c2409346b3c4eec402f49f9f5a280ec9bcd81d014ee87b51372a64e3b5"}
Mar 09 19:16:02 crc kubenswrapper[4821]: I0309 19:16:02.389044 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29551396-7hcgq" podStartSLOduration=1.170627089 podStartE2EDuration="2.389027538s" podCreationTimestamp="2026-03-09 19:16:00 +0000 UTC" firstStartedPulling="2026-03-09 19:16:00.845228673 +0000 UTC m=+3098.006604529" lastFinishedPulling="2026-03-09 19:16:02.063629102 +0000 UTC m=+3099.225004978" observedRunningTime="2026-03-09 19:16:02.385702458 +0000 UTC m=+3099.547078334" watchObservedRunningTime="2026-03-09 19:16:02.389027538 +0000 UTC m=+3099.550403394"
Mar 09 19:16:03 crc kubenswrapper[4821]: I0309 19:16:03.384990 4821 generic.go:334] "Generic (PLEG): container finished" podID="d6fafc42-6e49-4d7a-b470-95bee3451a52" containerID="a80916c2409346b3c4eec402f49f9f5a280ec9bcd81d014ee87b51372a64e3b5" exitCode=0
Mar 09 19:16:03 crc kubenswrapper[4821]: I0309 19:16:03.385362 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551396-7hcgq" event={"ID":"d6fafc42-6e49-4d7a-b470-95bee3451a52","Type":"ContainerDied","Data":"a80916c2409346b3c4eec402f49f9f5a280ec9bcd81d014ee87b51372a64e3b5"}
Mar 09 19:16:04 crc kubenswrapper[4821]: I0309 19:16:04.704473 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551396-7hcgq"
Mar 09 19:16:04 crc kubenswrapper[4821]: I0309 19:16:04.758231 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92pxm\" (UniqueName: \"kubernetes.io/projected/d6fafc42-6e49-4d7a-b470-95bee3451a52-kube-api-access-92pxm\") pod \"d6fafc42-6e49-4d7a-b470-95bee3451a52\" (UID: \"d6fafc42-6e49-4d7a-b470-95bee3451a52\") "
Mar 09 19:16:04 crc kubenswrapper[4821]: I0309 19:16:04.766592 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6fafc42-6e49-4d7a-b470-95bee3451a52-kube-api-access-92pxm" (OuterVolumeSpecName: "kube-api-access-92pxm") pod "d6fafc42-6e49-4d7a-b470-95bee3451a52" (UID: "d6fafc42-6e49-4d7a-b470-95bee3451a52"). InnerVolumeSpecName "kube-api-access-92pxm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:16:04 crc kubenswrapper[4821]: I0309 19:16:04.860584 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92pxm\" (UniqueName: \"kubernetes.io/projected/d6fafc42-6e49-4d7a-b470-95bee3451a52-kube-api-access-92pxm\") on node \"crc\" DevicePath \"\""
Mar 09 19:16:05 crc kubenswrapper[4821]: I0309 19:16:05.404886 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551396-7hcgq" event={"ID":"d6fafc42-6e49-4d7a-b470-95bee3451a52","Type":"ContainerDied","Data":"1bb437ff7370b53d0b75963067458f9653e611873c1cf4fd66f9bc85a4ab6484"}
Mar 09 19:16:05 crc kubenswrapper[4821]: I0309 19:16:05.404937 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bb437ff7370b53d0b75963067458f9653e611873c1cf4fd66f9bc85a4ab6484"
Mar 09 19:16:05 crc kubenswrapper[4821]: I0309 19:16:05.405037 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551396-7hcgq"
Mar 09 19:16:05 crc kubenswrapper[4821]: I0309 19:16:05.463539 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29551390-ngv5v"]
Mar 09 19:16:05 crc kubenswrapper[4821]: I0309 19:16:05.469568 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29551390-ngv5v"]
Mar 09 19:16:05 crc kubenswrapper[4821]: I0309 19:16:05.561962 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae07b7e5-6cfa-40bb-9c95-be56354dd2fd" path="/var/lib/kubelet/pods/ae07b7e5-6cfa-40bb-9c95-be56354dd2fd/volumes"
Mar 09 19:16:52 crc kubenswrapper[4821]: I0309 19:16:52.264275 4821 scope.go:117] "RemoveContainer" containerID="6cac118488a7757461c279c634127f01dd2aee82aec7c309ca9b60cc10f4679f"
Mar 09 19:16:52 crc kubenswrapper[4821]: I0309 19:16:52.298873 4821 scope.go:117] "RemoveContainer" containerID="c4eb77c8872c20b01869131fbaf4ef1dbc32e65ca53adb0593d22ed169c7f014"
Mar 09 19:17:29 crc kubenswrapper[4821]: I0309 19:17:29.913719 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 09 19:17:29 crc kubenswrapper[4821]: I0309 19:17:29.914356 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 09 19:17:59 crc kubenswrapper[4821]: I0309 19:17:59.914506 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 09 19:17:59 crc kubenswrapper[4821]: I0309 19:17:59.915159 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 09 19:18:00 crc kubenswrapper[4821]: I0309 19:18:00.147552 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29551398-kgbrz"]
Mar 09 19:18:00 crc kubenswrapper[4821]: E0309 19:18:00.147874 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6fafc42-6e49-4d7a-b470-95bee3451a52" containerName="oc"
Mar 09 19:18:00 crc kubenswrapper[4821]: I0309 19:18:00.147909 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6fafc42-6e49-4d7a-b470-95bee3451a52" containerName="oc"
Mar 09 19:18:00 crc kubenswrapper[4821]: I0309 19:18:00.150858 4821 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6fafc42-6e49-4d7a-b470-95bee3451a52" containerName="oc"
Mar 09 19:18:00 crc kubenswrapper[4821]: I0309 19:18:00.151542 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551398-kgbrz"
Mar 09 19:18:00 crc kubenswrapper[4821]: I0309 19:18:00.154384 4821 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-mxq7c"
Mar 09 19:18:00 crc kubenswrapper[4821]: I0309 19:18:00.154541 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 09 19:18:00 crc kubenswrapper[4821]: I0309 19:18:00.155144 4821 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 09 19:18:00 crc kubenswrapper[4821]: I0309 19:18:00.159065 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551398-kgbrz"]
Mar 09 19:18:00 crc kubenswrapper[4821]: I0309 19:18:00.309131 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc7fr\" (UniqueName: \"kubernetes.io/projected/4d7fafc4-b7e4-434a-a204-647e9edf7f07-kube-api-access-mc7fr\") pod \"auto-csr-approver-29551398-kgbrz\" (UID: \"4d7fafc4-b7e4-434a-a204-647e9edf7f07\") " pod="openshift-infra/auto-csr-approver-29551398-kgbrz"
Mar 09 19:18:00 crc kubenswrapper[4821]: I0309 19:18:00.411030 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc7fr\" (UniqueName: \"kubernetes.io/projected/4d7fafc4-b7e4-434a-a204-647e9edf7f07-kube-api-access-mc7fr\") pod \"auto-csr-approver-29551398-kgbrz\" (UID: \"4d7fafc4-b7e4-434a-a204-647e9edf7f07\") " pod="openshift-infra/auto-csr-approver-29551398-kgbrz"
Mar 09 19:18:00 crc kubenswrapper[4821]: I0309 19:18:00.439297 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc7fr\" (UniqueName: \"kubernetes.io/projected/4d7fafc4-b7e4-434a-a204-647e9edf7f07-kube-api-access-mc7fr\") pod \"auto-csr-approver-29551398-kgbrz\" (UID: \"4d7fafc4-b7e4-434a-a204-647e9edf7f07\") " pod="openshift-infra/auto-csr-approver-29551398-kgbrz"
Mar 09 19:18:00 crc kubenswrapper[4821]: I0309 19:18:00.466615 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551398-kgbrz"
Mar 09 19:18:00 crc kubenswrapper[4821]: I0309 19:18:00.924189 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29551398-kgbrz"]
Mar 09 19:18:00 crc kubenswrapper[4821]: I0309 19:18:00.928286 4821 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 09 19:18:01 crc kubenswrapper[4821]: I0309 19:18:01.421733 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551398-kgbrz" event={"ID":"4d7fafc4-b7e4-434a-a204-647e9edf7f07","Type":"ContainerStarted","Data":"2c8936e12c73e8298cf0e895e9d58a326666e678f11335b6273c6508cf51e9f1"}
Mar 09 19:18:02 crc kubenswrapper[4821]: I0309 19:18:02.431122 4821 generic.go:334] "Generic (PLEG): container finished" podID="4d7fafc4-b7e4-434a-a204-647e9edf7f07" containerID="1de9165d70722c8d8b20dd5d76934e6f5d64e7f700617bc7e036d8192020eddd" exitCode=0
Mar 09 19:18:02 crc kubenswrapper[4821]: I0309 19:18:02.431163 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551398-kgbrz" event={"ID":"4d7fafc4-b7e4-434a-a204-647e9edf7f07","Type":"ContainerDied","Data":"1de9165d70722c8d8b20dd5d76934e6f5d64e7f700617bc7e036d8192020eddd"}
Mar 09 19:18:03 crc kubenswrapper[4821]: I0309 19:18:03.708133 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551398-kgbrz"
Mar 09 19:18:03 crc kubenswrapper[4821]: I0309 19:18:03.863006 4821 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mc7fr\" (UniqueName: \"kubernetes.io/projected/4d7fafc4-b7e4-434a-a204-647e9edf7f07-kube-api-access-mc7fr\") pod \"4d7fafc4-b7e4-434a-a204-647e9edf7f07\" (UID: \"4d7fafc4-b7e4-434a-a204-647e9edf7f07\") "
Mar 09 19:18:03 crc kubenswrapper[4821]: I0309 19:18:03.868990 4821 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d7fafc4-b7e4-434a-a204-647e9edf7f07-kube-api-access-mc7fr" (OuterVolumeSpecName: "kube-api-access-mc7fr") pod "4d7fafc4-b7e4-434a-a204-647e9edf7f07" (UID: "4d7fafc4-b7e4-434a-a204-647e9edf7f07"). InnerVolumeSpecName "kube-api-access-mc7fr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 19:18:03 crc kubenswrapper[4821]: I0309 19:18:03.973750 4821 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mc7fr\" (UniqueName: \"kubernetes.io/projected/4d7fafc4-b7e4-434a-a204-647e9edf7f07-kube-api-access-mc7fr\") on node \"crc\" DevicePath \"\""
Mar 09 19:18:04 crc kubenswrapper[4821]: I0309 19:18:04.445489 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29551398-kgbrz" event={"ID":"4d7fafc4-b7e4-434a-a204-647e9edf7f07","Type":"ContainerDied","Data":"2c8936e12c73e8298cf0e895e9d58a326666e678f11335b6273c6508cf51e9f1"}
Mar 09 19:18:04 crc kubenswrapper[4821]: I0309 19:18:04.445789 4821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c8936e12c73e8298cf0e895e9d58a326666e678f11335b6273c6508cf51e9f1"
Mar 09 19:18:04 crc kubenswrapper[4821]: I0309 19:18:04.445563 4821 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29551398-kgbrz"
Mar 09 19:18:04 crc kubenswrapper[4821]: I0309 19:18:04.838877 4821 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29551392-f2pbf"]
Mar 09 19:18:04 crc kubenswrapper[4821]: I0309 19:18:04.845300 4821 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29551392-f2pbf"]
Mar 09 19:18:05 crc kubenswrapper[4821]: I0309 19:18:05.564864 4821 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe2f8900-837d-4b97-81c8-1ebb0f5a49bd" path="/var/lib/kubelet/pods/fe2f8900-837d-4b97-81c8-1ebb0f5a49bd/volumes"
Mar 09 19:18:29 crc kubenswrapper[4821]: I0309 19:18:29.913777 4821 patch_prober.go:28] interesting pod/machine-config-daemon-kk7gs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 09 19:18:29 crc kubenswrapper[4821]: I0309 19:18:29.914266 4821 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 09 19:18:29 crc kubenswrapper[4821]: I0309 19:18:29.914304 4821 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs"
Mar 09 19:18:29 crc kubenswrapper[4821]: I0309 19:18:29.915026 4821 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"884bf1f7f8bee99eb45ff74f1aa3abfc9510408af03ea832abf6bfe89095f2fa"} pod="openshift-machine-config-operator/machine-config-daemon-kk7gs"
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 09 19:18:29 crc kubenswrapper[4821]: I0309 19:18:29.915085 4821 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" containerName="machine-config-daemon" containerID="cri-o://884bf1f7f8bee99eb45ff74f1aa3abfc9510408af03ea832abf6bfe89095f2fa" gracePeriod=600 Mar 09 19:18:30 crc kubenswrapper[4821]: E0309 19:18:30.038664 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 19:18:30 crc kubenswrapper[4821]: I0309 19:18:30.668884 4821 generic.go:334] "Generic (PLEG): container finished" podID="3270571a-a484-4e66-8035-f43509b58add" containerID="884bf1f7f8bee99eb45ff74f1aa3abfc9510408af03ea832abf6bfe89095f2fa" exitCode=0 Mar 09 19:18:30 crc kubenswrapper[4821]: I0309 19:18:30.669079 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" event={"ID":"3270571a-a484-4e66-8035-f43509b58add","Type":"ContainerDied","Data":"884bf1f7f8bee99eb45ff74f1aa3abfc9510408af03ea832abf6bfe89095f2fa"} Mar 09 19:18:30 crc kubenswrapper[4821]: I0309 19:18:30.669254 4821 scope.go:117] "RemoveContainer" containerID="9b39979df97b20549ac7c425f2bb268de75776162f0624b872b91574e85e8541" Mar 09 19:18:30 crc kubenswrapper[4821]: I0309 19:18:30.669913 4821 scope.go:117] "RemoveContainer" containerID="884bf1f7f8bee99eb45ff74f1aa3abfc9510408af03ea832abf6bfe89095f2fa" Mar 09 19:18:30 crc kubenswrapper[4821]: E0309 19:18:30.670188 4821 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 19:18:42 crc kubenswrapper[4821]: I0309 19:18:42.551970 4821 scope.go:117] "RemoveContainer" containerID="884bf1f7f8bee99eb45ff74f1aa3abfc9510408af03ea832abf6bfe89095f2fa" Mar 09 19:18:42 crc kubenswrapper[4821]: E0309 19:18:42.552734 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 19:18:52 crc kubenswrapper[4821]: I0309 19:18:52.426130 4821 scope.go:117] "RemoveContainer" containerID="14e0c11cd8b2a9b1311fc8576908b139c55fde6aa9ba421085594031ea290ce8" Mar 09 19:18:55 crc kubenswrapper[4821]: I0309 19:18:55.551746 4821 scope.go:117] "RemoveContainer" containerID="884bf1f7f8bee99eb45ff74f1aa3abfc9510408af03ea832abf6bfe89095f2fa" Mar 09 19:18:55 crc kubenswrapper[4821]: E0309 19:18:55.552364 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 19:19:08 crc kubenswrapper[4821]: I0309 
19:19:08.551578 4821 scope.go:117] "RemoveContainer" containerID="884bf1f7f8bee99eb45ff74f1aa3abfc9510408af03ea832abf6bfe89095f2fa" Mar 09 19:19:08 crc kubenswrapper[4821]: E0309 19:19:08.552233 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 19:19:20 crc kubenswrapper[4821]: I0309 19:19:20.552724 4821 scope.go:117] "RemoveContainer" containerID="884bf1f7f8bee99eb45ff74f1aa3abfc9510408af03ea832abf6bfe89095f2fa" Mar 09 19:19:20 crc kubenswrapper[4821]: E0309 19:19:20.553608 4821 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kk7gs_openshift-machine-config-operator(3270571a-a484-4e66-8035-f43509b58add)\"" pod="openshift-machine-config-operator/machine-config-daemon-kk7gs" podUID="3270571a-a484-4e66-8035-f43509b58add" Mar 09 19:19:26 crc kubenswrapper[4821]: I0309 19:19:26.593615 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-np9nt"] Mar 09 19:19:26 crc kubenswrapper[4821]: E0309 19:19:26.594526 4821 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d7fafc4-b7e4-434a-a204-647e9edf7f07" containerName="oc" Mar 09 19:19:26 crc kubenswrapper[4821]: I0309 19:19:26.594542 4821 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d7fafc4-b7e4-434a-a204-647e9edf7f07" containerName="oc" Mar 09 19:19:26 crc kubenswrapper[4821]: I0309 19:19:26.594720 4821 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4d7fafc4-b7e4-434a-a204-647e9edf7f07" containerName="oc" Mar 09 19:19:26 crc kubenswrapper[4821]: I0309 19:19:26.596188 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-np9nt" Mar 09 19:19:26 crc kubenswrapper[4821]: I0309 19:19:26.631887 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-np9nt"] Mar 09 19:19:26 crc kubenswrapper[4821]: I0309 19:19:26.765679 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a31d5e26-8c55-4cfe-9c44-b2d3c5ba9024-catalog-content\") pod \"redhat-operators-np9nt\" (UID: \"a31d5e26-8c55-4cfe-9c44-b2d3c5ba9024\") " pod="openshift-marketplace/redhat-operators-np9nt" Mar 09 19:19:26 crc kubenswrapper[4821]: I0309 19:19:26.765794 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77mrz\" (UniqueName: \"kubernetes.io/projected/a31d5e26-8c55-4cfe-9c44-b2d3c5ba9024-kube-api-access-77mrz\") pod \"redhat-operators-np9nt\" (UID: \"a31d5e26-8c55-4cfe-9c44-b2d3c5ba9024\") " pod="openshift-marketplace/redhat-operators-np9nt" Mar 09 19:19:26 crc kubenswrapper[4821]: I0309 19:19:26.765847 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a31d5e26-8c55-4cfe-9c44-b2d3c5ba9024-utilities\") pod \"redhat-operators-np9nt\" (UID: \"a31d5e26-8c55-4cfe-9c44-b2d3c5ba9024\") " pod="openshift-marketplace/redhat-operators-np9nt" Mar 09 19:19:26 crc kubenswrapper[4821]: I0309 19:19:26.793289 4821 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vpmmk"] Mar 09 19:19:26 crc kubenswrapper[4821]: I0309 19:19:26.798072 4821 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vpmmk" Mar 09 19:19:26 crc kubenswrapper[4821]: I0309 19:19:26.806607 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vpmmk"] Mar 09 19:19:26 crc kubenswrapper[4821]: I0309 19:19:26.867595 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a31d5e26-8c55-4cfe-9c44-b2d3c5ba9024-catalog-content\") pod \"redhat-operators-np9nt\" (UID: \"a31d5e26-8c55-4cfe-9c44-b2d3c5ba9024\") " pod="openshift-marketplace/redhat-operators-np9nt" Mar 09 19:19:26 crc kubenswrapper[4821]: I0309 19:19:26.867673 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77mrz\" (UniqueName: \"kubernetes.io/projected/a31d5e26-8c55-4cfe-9c44-b2d3c5ba9024-kube-api-access-77mrz\") pod \"redhat-operators-np9nt\" (UID: \"a31d5e26-8c55-4cfe-9c44-b2d3c5ba9024\") " pod="openshift-marketplace/redhat-operators-np9nt" Mar 09 19:19:26 crc kubenswrapper[4821]: I0309 19:19:26.867715 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a31d5e26-8c55-4cfe-9c44-b2d3c5ba9024-utilities\") pod \"redhat-operators-np9nt\" (UID: \"a31d5e26-8c55-4cfe-9c44-b2d3c5ba9024\") " pod="openshift-marketplace/redhat-operators-np9nt" Mar 09 19:19:26 crc kubenswrapper[4821]: I0309 19:19:26.868198 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a31d5e26-8c55-4cfe-9c44-b2d3c5ba9024-catalog-content\") pod \"redhat-operators-np9nt\" (UID: \"a31d5e26-8c55-4cfe-9c44-b2d3c5ba9024\") " pod="openshift-marketplace/redhat-operators-np9nt" Mar 09 19:19:26 crc kubenswrapper[4821]: I0309 19:19:26.868220 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a31d5e26-8c55-4cfe-9c44-b2d3c5ba9024-utilities\") pod \"redhat-operators-np9nt\" (UID: \"a31d5e26-8c55-4cfe-9c44-b2d3c5ba9024\") " pod="openshift-marketplace/redhat-operators-np9nt" Mar 09 19:19:26 crc kubenswrapper[4821]: I0309 19:19:26.893654 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77mrz\" (UniqueName: \"kubernetes.io/projected/a31d5e26-8c55-4cfe-9c44-b2d3c5ba9024-kube-api-access-77mrz\") pod \"redhat-operators-np9nt\" (UID: \"a31d5e26-8c55-4cfe-9c44-b2d3c5ba9024\") " pod="openshift-marketplace/redhat-operators-np9nt" Mar 09 19:19:26 crc kubenswrapper[4821]: I0309 19:19:26.925907 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-np9nt" Mar 09 19:19:26 crc kubenswrapper[4821]: I0309 19:19:26.969556 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f555a22e-5bc5-4c10-aa73-6b3679552f09-catalog-content\") pod \"community-operators-vpmmk\" (UID: \"f555a22e-5bc5-4c10-aa73-6b3679552f09\") " pod="openshift-marketplace/community-operators-vpmmk" Mar 09 19:19:26 crc kubenswrapper[4821]: I0309 19:19:26.969610 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f555a22e-5bc5-4c10-aa73-6b3679552f09-utilities\") pod \"community-operators-vpmmk\" (UID: \"f555a22e-5bc5-4c10-aa73-6b3679552f09\") " pod="openshift-marketplace/community-operators-vpmmk" Mar 09 19:19:26 crc kubenswrapper[4821]: I0309 19:19:26.969636 4821 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh9b8\" (UniqueName: \"kubernetes.io/projected/f555a22e-5bc5-4c10-aa73-6b3679552f09-kube-api-access-rh9b8\") pod \"community-operators-vpmmk\" (UID: \"f555a22e-5bc5-4c10-aa73-6b3679552f09\") " 
pod="openshift-marketplace/community-operators-vpmmk" Mar 09 19:19:27 crc kubenswrapper[4821]: I0309 19:19:27.071483 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f555a22e-5bc5-4c10-aa73-6b3679552f09-catalog-content\") pod \"community-operators-vpmmk\" (UID: \"f555a22e-5bc5-4c10-aa73-6b3679552f09\") " pod="openshift-marketplace/community-operators-vpmmk" Mar 09 19:19:27 crc kubenswrapper[4821]: I0309 19:19:27.071792 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f555a22e-5bc5-4c10-aa73-6b3679552f09-utilities\") pod \"community-operators-vpmmk\" (UID: \"f555a22e-5bc5-4c10-aa73-6b3679552f09\") " pod="openshift-marketplace/community-operators-vpmmk" Mar 09 19:19:27 crc kubenswrapper[4821]: I0309 19:19:27.071821 4821 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh9b8\" (UniqueName: \"kubernetes.io/projected/f555a22e-5bc5-4c10-aa73-6b3679552f09-kube-api-access-rh9b8\") pod \"community-operators-vpmmk\" (UID: \"f555a22e-5bc5-4c10-aa73-6b3679552f09\") " pod="openshift-marketplace/community-operators-vpmmk" Mar 09 19:19:27 crc kubenswrapper[4821]: I0309 19:19:27.072047 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f555a22e-5bc5-4c10-aa73-6b3679552f09-catalog-content\") pod \"community-operators-vpmmk\" (UID: \"f555a22e-5bc5-4c10-aa73-6b3679552f09\") " pod="openshift-marketplace/community-operators-vpmmk" Mar 09 19:19:27 crc kubenswrapper[4821]: I0309 19:19:27.072242 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f555a22e-5bc5-4c10-aa73-6b3679552f09-utilities\") pod \"community-operators-vpmmk\" (UID: \"f555a22e-5bc5-4c10-aa73-6b3679552f09\") " 
pod="openshift-marketplace/community-operators-vpmmk" Mar 09 19:19:27 crc kubenswrapper[4821]: I0309 19:19:27.103606 4821 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh9b8\" (UniqueName: \"kubernetes.io/projected/f555a22e-5bc5-4c10-aa73-6b3679552f09-kube-api-access-rh9b8\") pod \"community-operators-vpmmk\" (UID: \"f555a22e-5bc5-4c10-aa73-6b3679552f09\") " pod="openshift-marketplace/community-operators-vpmmk" Mar 09 19:19:27 crc kubenswrapper[4821]: I0309 19:19:27.136291 4821 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vpmmk" Mar 09 19:19:27 crc kubenswrapper[4821]: I0309 19:19:27.345675 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-np9nt"] Mar 09 19:19:27 crc kubenswrapper[4821]: I0309 19:19:27.570189 4821 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vpmmk"] Mar 09 19:19:27 crc kubenswrapper[4821]: W0309 19:19:27.583341 4821 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf555a22e_5bc5_4c10_aa73_6b3679552f09.slice/crio-0eee03d1fc8e9c92857c1717989de4697407392d3730368de84453844aeeb878 WatchSource:0}: Error finding container 0eee03d1fc8e9c92857c1717989de4697407392d3730368de84453844aeeb878: Status 404 returned error can't find the container with id 0eee03d1fc8e9c92857c1717989de4697407392d3730368de84453844aeeb878 Mar 09 19:19:28 crc kubenswrapper[4821]: I0309 19:19:28.162425 4821 generic.go:334] "Generic (PLEG): container finished" podID="a31d5e26-8c55-4cfe-9c44-b2d3c5ba9024" containerID="7b405e800c35e1d7552b9a7e642c55c7adfeaf68a90c2ed4c84fae85e0d3edff" exitCode=0 Mar 09 19:19:28 crc kubenswrapper[4821]: I0309 19:19:28.162491 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np9nt" 
event={"ID":"a31d5e26-8c55-4cfe-9c44-b2d3c5ba9024","Type":"ContainerDied","Data":"7b405e800c35e1d7552b9a7e642c55c7adfeaf68a90c2ed4c84fae85e0d3edff"} Mar 09 19:19:28 crc kubenswrapper[4821]: I0309 19:19:28.162514 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np9nt" event={"ID":"a31d5e26-8c55-4cfe-9c44-b2d3c5ba9024","Type":"ContainerStarted","Data":"d66ab05303b7f17db5af04aae811f3e8f840b9585f0bb585fd0de1ba649d5454"} Mar 09 19:19:28 crc kubenswrapper[4821]: I0309 19:19:28.165714 4821 generic.go:334] "Generic (PLEG): container finished" podID="f555a22e-5bc5-4c10-aa73-6b3679552f09" containerID="da4023acdeb0e1d26fe273cdb15a8fad5946e4dc1e5422effa6d61e9c0ba5211" exitCode=0 Mar 09 19:19:28 crc kubenswrapper[4821]: I0309 19:19:28.165747 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpmmk" event={"ID":"f555a22e-5bc5-4c10-aa73-6b3679552f09","Type":"ContainerDied","Data":"da4023acdeb0e1d26fe273cdb15a8fad5946e4dc1e5422effa6d61e9c0ba5211"} Mar 09 19:19:28 crc kubenswrapper[4821]: I0309 19:19:28.165768 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpmmk" event={"ID":"f555a22e-5bc5-4c10-aa73-6b3679552f09","Type":"ContainerStarted","Data":"0eee03d1fc8e9c92857c1717989de4697407392d3730368de84453844aeeb878"} Mar 09 19:19:29 crc kubenswrapper[4821]: I0309 19:19:29.179719 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpmmk" event={"ID":"f555a22e-5bc5-4c10-aa73-6b3679552f09","Type":"ContainerStarted","Data":"77a1205efe9fc115bc7263403b9237fd0eca1ed5770aec0bb04a812f8e5b0cc2"} Mar 09 19:19:30 crc kubenswrapper[4821]: I0309 19:19:30.189437 4821 generic.go:334] "Generic (PLEG): container finished" podID="f555a22e-5bc5-4c10-aa73-6b3679552f09" containerID="77a1205efe9fc115bc7263403b9237fd0eca1ed5770aec0bb04a812f8e5b0cc2" exitCode=0 Mar 09 19:19:30 crc kubenswrapper[4821]: I0309 
19:19:30.189572 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpmmk" event={"ID":"f555a22e-5bc5-4c10-aa73-6b3679552f09","Type":"ContainerDied","Data":"77a1205efe9fc115bc7263403b9237fd0eca1ed5770aec0bb04a812f8e5b0cc2"} Mar 09 19:19:30 crc kubenswrapper[4821]: I0309 19:19:30.192050 4821 generic.go:334] "Generic (PLEG): container finished" podID="a31d5e26-8c55-4cfe-9c44-b2d3c5ba9024" containerID="4a407b929c23102bb3055abae1d89ef4fdfe2fd89ef3467127274473ecb16e1e" exitCode=0 Mar 09 19:19:30 crc kubenswrapper[4821]: I0309 19:19:30.192107 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np9nt" event={"ID":"a31d5e26-8c55-4cfe-9c44-b2d3c5ba9024","Type":"ContainerDied","Data":"4a407b929c23102bb3055abae1d89ef4fdfe2fd89ef3467127274473ecb16e1e"} Mar 09 19:19:31 crc kubenswrapper[4821]: I0309 19:19:31.202849 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-np9nt" event={"ID":"a31d5e26-8c55-4cfe-9c44-b2d3c5ba9024","Type":"ContainerStarted","Data":"110e7cdf6693a3750884f718f1f2b2486878968001870eec4f02e8467f2dd1d9"} Mar 09 19:19:31 crc kubenswrapper[4821]: I0309 19:19:31.206453 4821 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpmmk" event={"ID":"f555a22e-5bc5-4c10-aa73-6b3679552f09","Type":"ContainerStarted","Data":"c6fc5075bf5265cb8ef7cf09ba3cf89db8706d9ac5c225e545a0e40ea0743b1e"} Mar 09 19:19:31 crc kubenswrapper[4821]: I0309 19:19:31.235287 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-np9nt" podStartSLOduration=2.779897546 podStartE2EDuration="5.235258925s" podCreationTimestamp="2026-03-09 19:19:26 +0000 UTC" firstStartedPulling="2026-03-09 19:19:28.163807858 +0000 UTC m=+3305.325183714" lastFinishedPulling="2026-03-09 19:19:30.619169217 +0000 UTC m=+3307.780545093" observedRunningTime="2026-03-09 
19:19:31.231936485 +0000 UTC m=+3308.393312361" watchObservedRunningTime="2026-03-09 19:19:31.235258925 +0000 UTC m=+3308.396634821"
Mar 09 19:19:31 crc kubenswrapper[4821]: I0309 19:19:31.248426 4821 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vpmmk" podStartSLOduration=2.833561281 podStartE2EDuration="5.248398381s" podCreationTimestamp="2026-03-09 19:19:26 +0000 UTC" firstStartedPulling="2026-03-09 19:19:28.166884552 +0000 UTC m=+3305.328260408" lastFinishedPulling="2026-03-09 19:19:30.581721642 +0000 UTC m=+3307.743097508" observedRunningTime="2026-03-09 19:19:31.247766734 +0000 UTC m=+3308.409142610" watchObservedRunningTime="2026-03-09 19:19:31.248398381 +0000 UTC m=+3308.409774267"